“For all the optimism surrounding the potential of computing in the cloud – lower costs, better performance, easier scaling – it isn’t a perfect system. No matter how distributed and redundant the architecture or how rigorous the backup system, when it comes right down to it, there’s a complex series of hoops through which the data has to jump to travel between the user and where it actually resides on a piece of physical hardware. And when a segment of that process fails, all the benefits of the cloud suddenly seem all the less magical.”
As quoted from ReadWriteWeb’s article on the downside of cloud computing. The author references a recent unfortunate incident at Ylastic, a company that provides a single front-end for managing Amazon Web Services. The incident resulted in the loss of data for Ylastic and its customers, and Ylastic discovered that the data could not be recovered. They were forced to restore from an earlier snapshot that contained only a subset of the data. Read the original article, with the full details of the incident, here: http://www.readwriteweb.com/archives/dark_side_of_the_cloud.php
After reading this article, I feel that an incident resulting in data loss can happen in an Enterprise IT environment just as easily as in the cloud. I am sure there are real-life cases to justify this. Of course, when it comes to the cloud, the data failure points definitely increase due to the number of tunnels that the data must travel through. So, be it cloud or on-premise, as the article says, it’s still the basics of data management that matter most. The Dark Side of Data Management!
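One of those basics is keeping more than a single restore point, so that losing the most recent snapshot does not mean losing everything. As a minimal sketch (the function and parameter names here are hypothetical, not Ylastic’s or Amazon’s actual setup), a simple retention policy might keep the newest snapshot of each day for a week and the newest of each week for a month:

```python
from datetime import datetime, timedelta

def snapshots_to_keep(snapshot_times, now, daily_days=7, weekly_weeks=4):
    """Return the subset of snapshot timestamps to retain:
    the newest snapshot per day for the last `daily_days` days,
    and the newest per ISO week for the last `weekly_weeks` weeks.
    Anything outside both windows may be pruned."""
    by_day = {}
    by_week = {}
    for ts in sorted(snapshot_times, reverse=True):  # newest first
        age = now - ts
        if age < timedelta(days=daily_days):
            by_day.setdefault(ts.date(), ts)           # newest of each day
        if age < timedelta(weeks=weekly_weeks):
            by_week.setdefault(ts.isocalendar()[:2], ts)  # newest of each week
    return set(by_day.values()) | set(by_week.values())

# Example: hourly snapshots plus one month-old snapshot
now = datetime(2008, 9, 1)
snaps = [now - timedelta(hours=h) for h in (1, 25, 49)] + [now - timedelta(days=30)]
kept = snapshots_to_keep(snaps, now)
```

The point is not this particular policy but having a tiered, tested one; a single most-recent snapshot is exactly the failure mode the article describes.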
What steps and mitigation plans do you take to safeguard your customers’ data?