Now that the cloud has established itself as a safe, viable, and reliable IT infrastructure alternative, one of the fundamental questions from the early days – “can we save money by migrating to the cloud?” – has morphed into “what are the economics of migrating to the cloud?”
Unfortunately, that’s where things get, well...cloudy. That’s because so many IT departments and other stakeholders repeat the same cloud-vs.-premises cost-comparison mistakes. Often the comparison is limited to a simplistic analysis of premises-based hardware and software vs. infrastructure-as-a-service (IaaS).
To get to the real numbers for a true comparison of premises- and cloud-based infrastructures, you need to dig a little deeper. Start with items such as software licenses, organizational efficiencies, redundancy and disaster recovery, human resources, and other not-so-obvious costs that affect IT operations and keep your IT ecosystem humming along smoothly.
In addition to the pesky, peripheral costs such as management and maintenance that can affect your total cost of ownership (TCO), there are some more fundamental ones that enterprises still manage to overlook on a regular basis. Let’s take a look at them right now:
Some companies dismiss capacity concerns too easily, whether they run on-premises or in the cloud. Proponents of premises-based systems will argue that they can just add more servers and seats to handle all those new computational demands. But with those additional servers comes the need for more space, more electricity, more cooling – you get the picture. Adding capacity in-house is more than just adding a little hardware.
Sure, you can expand your capacity by throwing money at it, but how long can your budget sustain that? And then there’s the less visible but highly painful cost of lower productivity and IT chokepoints that can hurt output and profitability if you attempt to make do with what you have.
Okay, let’s assume that, as a die-hard premises-leaning IT professional, you’re willing to bite the bullet and buy new servers. Wait a minute, though. It’s not a one-time purchase: there’s something called the “5-year rule,” which holds that the average lifespan of a server today is about five years. After that, reliability begins to drop off noticeably, and those servers need to be replaced before they affect productivity. That kind of recurring cost can have a significant impact on TCO.
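To see how that recurring replacement cycle feeds into TCO, here is a minimal back-of-the-envelope sketch. The purchase price, five-year lifespan, and 10% maintenance rate are illustrative assumptions, not figures from any vendor quote; swap in your own numbers.

```python
# Hypothetical figures for illustration only -- substitute your own quotes.
def annualized_server_cost(purchase_price, lifespan_years=5,
                           annual_maintenance_rate=0.10):
    """Spread a server's purchase price over its useful life (the
    '5-year rule') and add a yearly maintenance estimate."""
    depreciation = purchase_price / lifespan_years
    maintenance = purchase_price * annual_maintenance_rate
    return depreciation + maintenance

# A $6,000 server replaced every five years with 10% annual maintenance:
cost = annualized_server_cost(6000)
print(f"${cost:,.0f} per server, per year")  # $1,800 per server, per year
```

Multiply that figure by your server count and the “one-time purchase” starts to look like a permanent line item in the budget.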
While there are some who take comfort in maintaining control over a premises-based, proprietary IT system and rationalize that it reduces risk, the reality is that the cloud is now a far more secure, reliable, and redundant space for IT activities. Most cloud services providers today realize their businesses win or lose based on security and reliability. They invest heavily in the latest cybersecurity technology and hardened, redundant, geographically dispersed data centers that provide virtually bombproof data protection.
If the thought of keeping your IT system close by under your watchful eye still makes you feel warm and fuzzy, consider this. CA Technologies surveyed 200 US and European firms and estimated that businesses worldwide lost a total of $26.5 billion in annual revenue to IT downtime. That averages out to over $1 million a year for large enterprises and about $91,000 for mid-sized businesses. Even small firms lost an average of $55,000 in revenue to system crashes and other events. Ouch.
Energy is a cost that can be hard to wrap your arms around because it has both direct and indirect components: the electricity required to run your servers, plus the HVAC expenses of cooling an in-house datacenter, which get folded into your overall facility costs. However, the US Energy Information Administration produced a cost-analysis tool a couple of years ago that does a good job of calculating datacenter energy consumption. The agency determined that the average in-house server consumes approximately $732 a year in total. That’s one server. And that’s energy only – it does not include the ancillary costs of operation and maintenance.
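The direct-plus-indirect structure of that energy cost can be sketched in a few lines. The wattage, utility rate, and PUE (power usage effectiveness, the standard multiplier for cooling and other facility overhead) below are illustrative assumptions, not EIA figures; your own datacenter’s numbers will differ.

```python
# Illustrative assumptions only -- not EIA data. Plug in measured values.
def annual_server_energy_cost(avg_watts=300, rate_per_kwh=0.12, pue=1.8):
    """Estimate yearly electricity cost for one always-on server.

    PUE scales the server's direct power draw upward to account for
    cooling and other facility overhead (the 'indirect' energy cost).
    """
    hours_per_year = 24 * 365
    kwh = avg_watts / 1000 * hours_per_year * pue
    return kwh * rate_per_kwh

print(f"${annual_server_energy_cost():,.0f} per server, per year")
```

Running the numbers yourself makes it easy to see how a higher PUE, a hotter climate, or a pricier utility rate pushes the per-server figure up quickly.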
So how much money can you save by migrating to the cloud?
That depends. But there is no question that you can save money in the cloud. If you map out all the costs of operating and supporting an application on internal computer systems, the economics of the cloud are very good. Capital investments in hardware, software, and network infrastructure are replaced by more efficient cloud solutions available through a web browser.
Labor costs go down, too. Hardware investment, management, and maintenance shift to the cloud services provider, along with the personnel and training needed to run it all.
New capabilities and more flexible functionality become available through software-as-a-service (SaaS), so you can meet new customer demands and market trends quickly and efficiently. The result? Increased productivity and profitability.
So the question now is not should you migrate to the cloud, but when. And the sooner you migrate, the sooner you start saving.
As a next step, you may be interested in our earlier article that discusses the pros and cons of migrating to the cloud. Also, for those who are just contemplating a move to the cloud, our eBook titled A Guide to Adopting Cloud Computing for IT Leaders may be of interest. Of course, you may always reach out to Boston Data Group for a personalized discussion about the economics of migrating your IT infrastructure to the cloud.