Data is a vital business asset for most organizations. Although high-availability technologies significantly reduce the probability of data loss, events such as technical failure, virus attack, deliberate sabotage or employee error can still destroy data. Therefore, organizations must plan for data recovery and implement mechanisms for managing the data recovery process.
When recovering data, every second counts. Fast recovery enables organizations to meet SLA requirements, minimize the cost of data unavailability, and return to normal operations as quickly as possible.
Here are some best practices for fast and effective data recovery:
Match recovery management technology to data value
Organizations hold data of varying value. Based on the downtime cost of each class of data, they should use a mix of the following technologies across different environments:
- Traditional backup and restore
- D2D2T hierarchies, possibly including virtual tape libraries (VTLs)
- Inter-site data replication
- Data rewinding (point-in-time rollback)
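The tiering above can be sketched as a simple lookup that maps an estimated hourly downtime cost to a recovery technology. The thresholds and tier names below are illustrative assumptions, not figures from the source; real values would come from an organization's own downtime-cost analysis.

```python
# Hypothetical mapping of downtime cost to recovery technology.
# Thresholds (USD per hour of downtime) are illustrative assumptions.
RECOVERY_TIERS = [
    # (minimum hourly downtime cost, recommended technology)
    (100_000, "data rewinding"),           # near-zero data loss tolerance
    (10_000,  "inter-site replication"),   # protection against site failure
    (1_000,   "D2D2T / VTL"),              # fast disk-based restore
    (0,       "traditional backup and restore"),  # lowest cost, slowest restore
]

def recommend_technology(hourly_downtime_cost: float) -> str:
    """Return the first recovery technology whose cost threshold is met."""
    for threshold, technology in RECOVERY_TIERS:
        if hourly_downtime_cost >= threshold:
            return technology
    return "traditional backup and restore"
```

In practice most organizations would blend tiers (for example, replication plus tape for the same data set); the point is only that the choice is driven by measured downtime cost rather than applied uniformly.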
Automate the recovery process
Recovery can be costly, particularly when manual intervention is needed. Automating the recovery process as far as possible lowers this cost and reduces the probability of human error. Automation matters even more in complex, distributed environments.
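As a minimal sketch of the idea, an automated recovery step can retry on transient failures and escalate only when all attempts are exhausted, so an operator is paged as a last resort rather than as the first response. The restore command itself is a placeholder; any vendor CLI or API call could stand in its place.

```python
import logging
import subprocess
import time

logging.basicConfig(level=logging.INFO)

def run_restore(command: list, max_attempts: int = 3, delay_s: int = 30) -> bool:
    """Run a restore command, retrying on failure so transient errors
    do not require manual intervention. Returns True on success."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(command, capture_output=True)
        if result.returncode == 0:
            logging.info("Restore succeeded on attempt %d", attempt)
            return True
        logging.warning("Attempt %d failed (exit %d)", attempt, result.returncode)
        time.sleep(delay_s)
    # Only now does a human get involved: escalate via the alerting system.
    logging.error("Restore failed after %d attempts; escalating", max_attempts)
    return False
```

The retry-then-escalate pattern also produces a log trail for each recovery, which helps with post-incident review and SLA reporting.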
Centralize recovery management
As soon as an organization outgrows a single office, backup and recovery operations become more complex. Wherever possible, organizations should centralize recovery management processes, ideally by centralizing the data needed to recover operations. Centralized recovery management enables consistently applied backup policies and keeps data and backup sets under physical control.
A key factor in centralizing recovery management is an integrated suite of management tools and associated policies that minimize staffing requirements. These tools should integrate with the existing alert and notification infrastructure to provide timely warning of data protection issues.
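One way to picture this is a single policy object applied uniformly to every site, with a pluggable alert hook so coverage gaps surface through the existing notification channel. The site names and policy fields here are illustrative assumptions, not a specific product's configuration schema.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class BackupPolicy:
    schedule: str        # cron-style schedule, e.g. nightly at 02:00
    retention_days: int  # how long backup sets are kept
    offsite_copy: bool   # whether a replica leaves the primary site

# One central policy, defined once and pushed everywhere.
CENTRAL_POLICY = BackupPolicy(schedule="0 2 * * *",
                              retention_days=30,
                              offsite_copy=True)

def apply_policy(sites: List[str], policy: BackupPolicy,
                 alert: Callable[[str], None]) -> Dict[str, BackupPolicy]:
    """Assign the same policy to every registered site; raise an alert
    through the notification hook if no sites are covered."""
    if not sites:
        alert("No sites registered for backup coverage")
    return {site: policy for site in sites}
```

Because every site receives the same policy object, there is no per-office drift to audit, and the alert hook lets the sketch plug into whatever notification infrastructure already exists.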