Oct 6, 2014
Most organizations understand the importance of maintaining essential business processes following a disaster, whether it's an extreme weather event, power outage or hardware failure.
Many kinds of incident can cause unscheduled database downtime, creating serious problems for online and transactional systems. Yet despite being aware of the issue, companies still appear to be unprepared for large-scale disasters that would impede normal operations.
Earlier this year, a study by Backup My Info! (BUMI) showed that while 88 percent of enterprises have a disaster recovery (DR) plan in place, less than half are truly ready to cope with a catastrophic event.
The survey polled managed service providers, vendors and channel consultants, with 25 percent confirming their clients take a week to resolve data errors. Worryingly, 4 percent of clients ignore data problems entirely.
This lack of a proactive approach may stem from the fact that businesses rarely check whether data errors are occurring. Just over one-fifth (21 percent) examined their data monthly, while 8 percent performed only an annual review.
Jennifer Walzer, CEO of BUMI, said there was a distinct lack of urgency in many businesses when it comes to comprehensive DR capabilities.
"While today's complex environments support data accessibility from a broad range of devices located within and beyond the firewall, they are also vulnerable when a disaster or outage occurs," she stated.
"Organizations must continue to examine their data backup and recovery processes and make adjustments to ensure business continuity and mitigate the risk of downtime."
Lack of preparedness
BUMI's results followed this year's Disaster Recovery Preparedness Council (DRPC) report, published in March, which found 73 percent of companies are likely to fall short of performance targets in a crisis.
According to the council's data, 36 percent of businesses experienced the loss of one or more critical applications, data files or virtual machines for at least a few hours at some point over the last year.
Close to one in five organizations reported they lost important programs over a period of days, while 25 percent of those surveyed said they had lost all or part of their data center for hours or days.
Just over 2 percent said the data center had been lost permanently, suggesting those companies may never have recovered following the disaster.
The DRPC commented that these outages can be extremely expensive, with almost 20 percent of respondents saying downtime cost them somewhere between $50,000 and $5 million.
A number of problems stood in the way of effective DR, chief among them a lack of planning: more than 60 percent of businesses did not have a fully documented procedure for coping with worst-case scenarios.
Of those that had a plan in place, only 40 percent felt it had worked when the enterprise faced its most severe incident.
One of the primary problems the DRPC report identified was shortages in appropriate funding - only 35 percent of organizations believed enough money was spent on preparing for disasters.
Worse still, 25 percent said there was no budget allocation for DR functions at all, while almost 10 percent confirmed they were significantly underfunded.
However, the report noted that many businesses were able to reduce the cost of testing recovery plans by investing in real-time replication software, which provided standby failover options.
"High scoring organizations were implementing DR, planning DR, or revising/migrating their DR plan," the council said.
"Organizations that merely write a plan and file it away are simply not as secure in their recovery as those that regularly test, update, and document their DR plans."
Businesses looking to improve were advised to build a comprehensive strategy for important systems, define appropriate recovery time objectives and recovery point objectives, and automate testing procedures.
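That closing advice can be made concrete. As a minimal sketch (the objective values, timestamps and function names below are illustrative assumptions, not drawn from either survey), an automated DR test might simply verify that the latest backup falls within the recovery point objective and that a timed test restore completes within the recovery time objective:

```python
from datetime import datetime, timedelta

# Hypothetical recovery objectives -- illustrative values only.
RPO = timedelta(hours=4)   # maximum tolerable data loss
RTO = timedelta(hours=2)   # maximum tolerable restore time


def check_rpo(last_backup: datetime, now: datetime) -> bool:
    """Return True if the most recent backup falls within the RPO window."""
    return now - last_backup <= RPO


def check_rto(restore_duration: timedelta) -> bool:
    """Return True if a timed test restore finished within the RTO."""
    return restore_duration <= RTO


if __name__ == "__main__":
    now = datetime(2014, 10, 6, 12, 0)
    # Backup taken 2.5 hours ago: within the 4-hour RPO.
    print(check_rpo(datetime(2014, 10, 6, 9, 30), now))
    # Test restore took 3 hours: exceeds the 2-hour RTO.
    print(check_rto(timedelta(hours=3)))
```

Running checks like these on a schedule, rather than filing the plan away, is exactly the habit the DRPC found separated high-scoring organizations from the rest.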