Kroll fingers human error

Kroll Ontrack reports that human error is responsible for an increasing number of incoming enterprise data recovery requests.

The firm attributes the rise in human error incidents to increasingly complex storage systems coupled with depleted resources for replacing equipment, training IT staff and maintaining optimal staffing levels.

“While advanced storage options such as virtualization and cloud computing offer corporations storage optimization, human processes are still at the root of these solutions, instructing the technology as to how to perform,” said Jeff Pederson, manager of Ontrack Data Recovery operations, Kroll Ontrack.

“The complexity of these systems often requires a steep learning curve, and with reported IT spending at a low, human error is increasingly common.”

The most common enterprise human error cases Kroll Ontrack sees include:

  • Pulling the wrong drive. While trying to replace a failed disk in a RAID array, a healthy disk is accidentally removed.

  • Reformatting a disk. During a server migration, the wrong SAN LUN is accidentally reformatted.

  • Restoring corrupt/old backup data. A server containing a business-critical database is deleted by mistake and restored from a corrupt or incomplete backup before anyone realizes the backup is not sound.

  • Rebuilding a bad array. Following a multiple-drive failure in a RAID array, the failed drives are forced back online and the configuration is rebuilt, damaging or corrupting the data on the array.

  • Deleting data. Files, volumes, virtual machines or a SAN LUN are deleted by accident, and there is either no backup or the backup is old or corrupt.

Examples of human error cases and subsequent enterprise data recoveries by Kroll Ontrack in 2009 included:

  • A support engineer forgot to turn off his replication software before formatting the volumes on the primary site, a mistake that overwrote the backup as well.

  • An entity with a 10-drive RAID 5 array had a drive fail unnoticed for three months. When a second drive died, the server crashed, rendering all data unavailable.

  • An organization accidentally ran a script during a test project that deleted all 38 virtual machines from two arrays.

  • A company leasing cloud computers accidentally detached a “virtual” storage volume in the cloud environment – similar to pulling a cable from an operational volume.

  • Twenty VMFS volumes were “quick initialized” on a backup server. While the virtual machines and ESX servers continued to run, the backup server stopped operating, and rebooting the ESX servers risked downtime and data loss.