The Distribute IT Fiasco: Risk Management Done Wrong

“It is not the strongest species that survive, nor the most intelligent, but the ones most responsive to change” – often attributed to Charles Darwin.

In today’s business world, where organizations face ever-escalating customer demands and expectations and have little room for downtime, logic dictates that businesses should be seriously revamping their business continuity and risk management plans, or developing them if none exist.

This is even more pertinent given what we have witnessed in recent months in the areas of data breaches, hack attempts and the underground “war” being waged in cyberspace that has put most of the world’s powerful organizations on the defensive.
Business continuity management is usually regarded as “the capability to assist in preventing, preparing for, responding to, managing and recovering from the impacts of a disruptive event”. (Business Continuity Management, Australian National Audit Office, 2009)

We have always been told that to remain competitive we must build a resilient IT infrastructure, or risk our competition having us for lunch. Apparently, the folks at Distribute IT were not listening.

As some may be aware, Distribute IT, one of Australia’s web hosting providers, was hacked on June 14, 2011 and practically went out of business overnight. In what can only be described as weird, absurd or the greatest display of corporate irresponsibility, the company did not have sufficient redundant backups to preserve its own data or that of most of its customers. The company did not take offline backups and was forced to shamefully admit that:

Our Data Recovery teams have been working around the clock in an attempt to recover data from the affected servers shared Servers [sic]. At this time, we regret to inform that the data, sites and emails that were hosted on Drought, Hurricane, Blizzard and Cyclone can be considered by all the experts to be unrecoverable… our greatest fears have been confirmed that not only was the production data erased during the attack, but also key backups, snapshots and other information that would allow us to reconstruct these Servers from the remaining data.

Aptly named servers apparently, because nothing good usually comes out of an encounter with drought, blizzard, hurricane or cyclone unless you heed safety warnings and take appropriate measures! As the company explained to its customers, the hack and its aftermath left them with “…little choice but to assist you in any way possible to transfer your hosting and email needs to other hosting providers.”
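The practice Distribute IT skipped – offline backups that are actually verified – need not be elaborate. The sketch below is purely illustrative (the paths and function names are hypothetical, not anything Distribute IT used): copy a snapshot to a separate vault location and confirm, checksum by checksum, that the copy matches the original. In a real deployment the vault would live on physically separate, disconnected media or with another provider, beyond the reach of an attacker who compromises the production network.

```python
import hashlib
import shutil
from datetime import date
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file so the backup copy can be verified against it."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def offline_backup(source: Path, vault: Path) -> Path:
    """Copy `source` into a dated folder under `vault`, then verify
    every file's checksum against the original before trusting it."""
    dest = vault / f"backup-{date.today().isoformat()}"
    shutil.copytree(source, dest)
    for original in source.rglob("*"):
        if original.is_file():
            copy = dest / original.relative_to(source)
            if sha256(original) != sha256(copy):
                raise RuntimeError(f"verification failed: {copy}")
    return dest
```

The verification step is the point: an unverified backup, like an untested continuity plan, offers only the illusion of resilience.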

Business continuity management is supposed to be an essential part of an organization’s overall approach to effective risk management. It was the responsibility of Distribute IT’s executives to raise awareness and build some form of resilience into the infrastructure, and sadly, they failed woefully in that regard.

It is amazing that despite the hacks and breaches we have seen this year at the likes of Google, RSA, Comodo, Barracuda, and Citigroup, to name a few, Distribute IT did not think it was pertinent to take precautions and bolster the security of its servers. The company has since been acquired by NetRegistry, but questions remain.

First, Distribute IT was ICANN accredited, but it appears that ICANN performs no auditing to determine whether registrars are doing enough to secure their systems and preserve customer data.

Second, are the check-box “methodologies” of risk management experts creating a false sense of security, and of the ability to recover, in the minds of clients?

How can information security “experts” do a better job of encouraging better risk and security decisions? And how can they avoid the assumption that an organization will always recover if its risk controls fail?

Distribute IT was a small business compared to other providers in the industry, but it is not far-fetched to think that we could see similar existential threats to larger, IT-dependent businesses that might not be as risk-savvy as a financial entity, for example – heck, even those are feeling the pain – just ask Citigroup, Bank of America, or Comerica Bank.

This unfortunate incident is yet another example of what happens when businesses ignore the risks that they shouldn’t. This situation will continue as long as executives think that security is all about installing firewalls and running the latest antivirus software.

As is always the case, it is only after a tragedy happens that people spring into action, despite several warnings that could have prevented the problem in the first place. Of course, there is always the reassurance from company executives that they have tape and/or offline backups, but how many have taken the time to do a proper risk assessment?

Are we truly in an era when people can claim that “[t]here is no security, there will be no security. The horse has bolted, and it’s not going to be the infrastructure that’s going to change, it’s going to be us”?

Is the recent spate of breaches and hacks that have been exposed just old occurrences coming to light? US Department of Homeland Security advisor Jeff Moss tweeted recently, “When I heard RSA had a shiny new half million dollar Hardware security module (HSM) to store seed files I wondered where had they been stored before”.
