They never said it would be easy. Years ago, most of us chose to pursue a career in IT, and for the majority of those years we collectively did whatever was required to keep our company’s computing needs satisfied. We practiced a highly specialized set of skills garnered through years of experience and learned from our peers by observation or, in some cases, by listening to folklore. Universally recognized as an intricate craft passed down through the ages, IT was one of the only disciplines within an organization that was rarely challenged, questioned, or trivialized. “Keep IT running” at any cost was the name of the game, with availability and uptime (referred to as the “SLA”) our primary focus.
Basically, long-time IT professionals held the secret recipe to the IT sauce, and everyone loved the way it tasted. We ‘sold’ our magical skills like hot cakes, and IT organizations grew and grew. No surprise, the associated costs of computing grew accordingly. The growth was gradual, and over the course of those years few executives understood the actual magnitude of their aggregated investments in IT. They didn’t even ask. IT was viewed as a sunk cost and a necessary burden to the organization. Even when the capital asset costs for IT exceeded $250 million, the executive team rarely looked! They never saw a single purchase order for $250 million, so why would they? All they saw was a stream of smaller POs spread over a long period of time.
This all changed in 2008 with the global financial meltdown. Everything with an initial or ongoing cost to the company became fair game, and lots of questions about IT were now being asked. The fundamental approaches being used in IT were also being questioned, and new innovations (i.e., the cloud) were being advertised during the Super Bowl, prompting every CEO and CFO to ask, “Would this cloud-thing save us money or make us more competitive?” IT responded with very few real answers because they simply didn’t have a clear understanding of their own existing cost model. They also knew that their existing processes were largely unpredictable, ad hoc, and reactive. In addition, their ongoing ‘strategy’ was articulated too broadly, lacking specific alignment to the business itself.
So we’ve spent the past six years or so looking for defensible, accurate answers to those questions. The first attempt was to look for easy, short-path answers with big, shiny returns on investment. We all grabbed the low-hanging fruit and were proud to answer those ‘approach’ questions with a fiscal result. In short, we deflected the original questions. We took the easy route by trimming costs wherever we could do so easily and highlighting those savings. Savings are a good thing, but because we didn’t change the WAY we behaved, they were just a Band-Aid. We hoped the executive team would forget their original questions, and for years they did. To a large degree, IT professionals held on dearly to the promise that a stream of newer generations of technical widgets would continue to be their crutch, assuming that the raw cost savings from each widget would keep the original questions from surfacing again. And it worked for years. Finding a cheaper solution that was more compatible with our existing IT structure was easier than thinking about a more strategic and wide-reaching approach. The original questions of “how” and “why” were again put on the shelf for another few years.
So here we are in 2014, and the tide has changed. Everyone is talking about their strategies for computing, the hybrid mix of computing styles, and the need to quantify computing costs at the unit-of-work level. A couple of years ago, eBay demonstrated to the world that determining IT cost per unit of work was not only possible but realistic if you set your mind to it. They released their Digital Service Efficiency (DSE) dashboard to the world to demonstrate the business impact of computing. Now every Fortune 1000 CIO, CFO, and CEO wants their own “DSE”…
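To make “cost per unit of work” concrete, here is a minimal sketch of the underlying arithmetic: total computing spend for a period divided by the business transactions served in that period. The cost categories and every number below are entirely hypothetical, chosen only to illustrate the shape of the calculation, not to reflect eBay’s DSE methodology or any real data center.

```python
# Hypothetical monthly figures -- for illustration only, not real data.
power_and_cooling = 120_000.0      # facility power and cooling ($)
hardware_amortization = 450_000.0  # servers, storage, network ($/month)
labor_and_licenses = 230_000.0     # staff, software, support ($)

transactions = 400_000_000         # business "units of work" served that month

# The metric itself: aggregate cost divided by units of work delivered.
total_cost = power_and_cooling + hardware_amortization + labor_and_licenses
cost_per_transaction = total_cost / transactions

print(f"Total monthly cost: ${total_cost:,.0f}")
print(f"Cost per transaction: ${cost_per_transaction:.6f}")
```

Trivial as the division is, the hard part in practice is the numerators: gathering and attributing those cost inputs per service is exactly the visibility a DCIM suite is meant to provide.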
This is exactly where DCIM comes in. Despite plenty of confusion in the category, DCIM is not a tactical technician’s tool. It is a deep set of capabilities that enables a management strategy, providing insight into the “computing function.” With DCIM, you can see the BIG PICTURE complexion of your data center: the business costs and impacts, the impact of making a change, and, perhaps more importantly, the actual cost of NOT making a change. Keeping the data center running is important, but keeping it running AT WHAT COST is the new measure of success in IT. A robust DCIM suite is the enabling technology for that: it connects to existing ITSM systems, presents usage and capacity in context, and helps the various audiences understand more about what is going on in IT.
While I realize that the recipe for how every device in the data center is run has been a prized secret for years, it is Transparency (HOW we do things, and WHY we do things) that is becoming the true measure of success going forward. What about availability and uptime? You bet, still very important, but those are a by-product of running your IT business properly. DCIM is your strategic weapon for doing so, aligning your business needs with the technology that supports them.