(Note: this blog is excerpted from the full article in Data Center Knowledge: http://bit.ly/1kBsPAf)

I recently had the opportunity not only to exhibit at the Gartner Infrastructure and Operations show, but also to attend several sessions in an effort to soak up the latest and greatest in the industry. I was pleasantly surprised to hear Gartner saying a lot of what we at Nlyte have been observing. Notably:

IT services are built on assets and processes – To build an IT service portfolio, you need a catalog of IT services; those services are built upon processes and assets, and from those services, business value is finally created.

Figure 1: Nlyte’s interpretation of a presentation by Debra Curtis and Suzanne Adnams of Gartner.

Change is a collaborative process – Professor Eddie Obeng of the Henley Business School gave the opening keynote, “Transforming with Confidence.” I kept thinking about how his animated presentation could apply to someone trying to bring DCIM into their organization while meeting resistance because the audience isn’t familiar with DCIM. Professor Obeng indicated that in order to see change, you need to reduce the level of fear, and data can help reduce that fear.

Business value trumps all – Jeff Brooks, the event co-host, made this point in his session, “Tell the Story with Business Value Dashboards.” It came up repeatedly: infrastructure and operations teams need to convey what 99.5 percent uptime means to the business, and therefore to the executive team.

Business value has a specific “speak” – Business value metrics such as transactions per hour, capacity utilization against plan, and unplanned downtime can be greatly affected by the things the infrastructure and operations team focuses on: connectivity, security and compliance, service support and continuity, as well as hardware and software – all things that DCIM helps manage.

More expensive assets have lower total cost of ownership – Jay Pultz’s session showed that two-thirds of the lifetime cost increase of a “cheaper” server comes from the staff time needed to maintain it.


The New IT model: A Seller & A Buyer

On August 5, 2014, in View from the Top, by Mark Harris
Delivery of IT Services is now a Seller/Buyer game

The IT world has been stood on its head in the past few years. In years past, corporate users simply wanted what seemed like a never-ending stream of capabilities, usually only partially defined, and IT wanted nothing more than to satisfy as many of those needs as humanly possible, as soon as it could get to them. Corporate users were entirely captive to their corporate IT organizations, so everyone just made the best of the cards they were dealt.

Times have changed and we are transitioning to a buyer/seller dynamic. Your IT organization is now the SELLER of IT “products”, and your users are now BUYERS. Luckily, your buyers are still giving their IT organizations the first crack at solving their needs. They are asking them to come up with a plan for the IT services they need, commit to its cost, timeframes and support levels, and then deliver as agreed. Just like any seller/buyer relationship, the seller must be able to meet the needs of the buyers.

Make no mistake, this is NOT a simple re-labelling exercise of the old relationship users have with their IT organization. That relationship is gone. This is different, and the IT organization’s very long-term existence is at risk. A handful of years ago, Gartner predicted that 20% of businesses would own no IT assets by 2012, citing BYOD, cloud, SaaS and virtualization among the trends contributing to this decrease. Whether you agree with the numbers or the timing is mostly irrelevant; the prediction is directionally sound.

This is the time for IT organizations to put their business hats on and think about delivering services as if they were an external vendor trying to sell their wares to new customers. What they’ll find is that, as a seller or supplier of products, they have the same needs to innovate, engineer, position, market and support their products, albeit with a ‘slight’ advantage of a historically-captive audience. No longer entirely captive, that audience still has somewhat of a preference to shop for their IT products internally first. How much of a preference? According to a PricewaterhouseCoopers figure reported by Ellen Messmer, 30% of all IT spending is happening outside of the IT organization. Other analyst estimates put the figure between 15% and 35%, but ALL agree that the number is BIG and growing! How can this be? It turns out that if a seller doesn’t offer the right products, available in the right timeframes and at the right cost, customers will go elsewhere. That’s why ONE-THIRD of IT spending is happening as a “Shadow IT” process today!

So what does this mean to IT organizations that wish to stay in business? It means that today is a great time to re-evaluate your core approaches to service/product delivery: the products you choose to deliver, the costs to do so, and how agile your infrastructure is in allowing rapid turn-on of new applications. Your “buyers” are willing to pay a certain price for each and every need they have and will hopefully find one of the IT organization’s offerings to be a fit. The buyers will require delivery in a timely manner (their time, by the way). In fact, they may be willing to pay a hint more and take delivery a bit slower to allow internal IT to be the seller, but only a tiny bit. Ultimately, they will vote with their dollars.

So that brings me to DCIM. Once an IT organization determines which products it wants to deliver as its core business, it must think about the COST to deliver those products, the PRICE it will charge its customers, and the speed at which it can deliver each. Whether these offerings are EMAIL services or STORAGE services or COMPUTING services, everything is a “product” that must have a higher VALUE to your customer than the price you charge, must cost less to deliver than that price, and must be deliverable in the timeframes the buyers set.

So there are two questions that should be at the top of YOUR mind at this moment. The first is, “How can I determine the price I wish to charge my buyers if I don’t know what my true COSTS are to deliver those products?” The key is that we are not talking about approximations or best guesses here. It’s not just about estimating simple costs like it was in the old days. This is not Monopoly Money. IT spending is real money, and a third of it is already being spent externally due to the lack of alignment between buyers’ needs and sellers’ capabilities! So an IT organization must be very precise when building its cost models, and these must be all-inclusive to be a valuable business planning tool. Sellers (remember, that is the IT organization) that wish to stay in business can’t just guess what their cost is and then sell their products above that guess. They must KNOW what their actual costs are. Some of the basic costs are easy to recognize and add up; these usually consist of hardware and software licenses. A little harder to see are the cost of power for running the active gear and the cost of management and administrators. But there are many “hidden” costs that simply don’t get included in the traditional cost analysis, and in this new model of IT they can’t go unaccounted for any longer: the cost of the building, cooling and water; costs of financing and depreciation; warranty costs; support costs; the cost of everything! All of it needs to be accounted for before you can determine what your “selling” price should be.
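To make “all-inclusive” concrete, here is a minimal Python sketch of a fully-burdened cost model. The cost categories mirror the paragraph above; every figure and name is a hypothetical assumption for illustration, not a benchmark.

    # Hypothetical fully-burdened monthly cost model for one IT "product".
    # All figures are illustrative assumptions, not real benchmarks.
    monthly_costs = {
        "hardware_amortization":  1200.00,  # servers/storage spread over useful life
        "software_licenses":       800.00,
        "power":                   350.00,  # running the active gear
        "cooling_and_water":       210.00,  # often-hidden facility costs
        "building_floor_space":    400.00,
        "staff_and_admin":        1500.00,  # management and administrators
        "financing_depreciation":  300.00,
        "warranty_and_support":    250.00,
    }

    total_cost = sum(monthly_costs.values())
    target_margin = 0.10  # the "seller" must price above true cost to stay in business

    price = total_cost * (1 + target_margin)
    print(f"True monthly cost:     ${total_cost:,.2f}")
    print(f"Minimum selling price: ${price:,.2f}")

Leave one category out of the dictionary and the computed price drops below what the service really costs, which is exactly the trap the guesswork approach falls into.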

The second question is, “How can I manage my data center capacities more proactively, to allow new applications to be implemented faster?” This is all about building a data center structure that is more agile. A structure that can be modified at a minute’s notice. Adding or removing devices should be simple. Understanding the cascading effect and the upstream/downstream relationships should be matter-of-fact. The business use of each and every device should be clear and well defined.
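As a sketch of what “understanding the cascading effect” could look like, here is a minimal Python example that walks a hypothetical upstream-to-downstream device graph to list everything affected by a change to one device. The topology is invented for illustration.

    # Hypothetical upstream -> downstream dependencies in a data center.
    # A change to (or loss of) a device affects everything downstream of it.
    downstream = {
        "utility_feed_A": ["ups_1"],
        "ups_1":          ["pdu_1", "pdu_2"],
        "pdu_1":          ["rack_12"],
        "pdu_2":          ["rack_13"],
        "rack_12":        ["server_web_01", "server_db_01"],
        "rack_13":        ["server_app_01"],
    }

    def affected_by(device, graph):
        """Return every device downstream of `device` (depth-first walk)."""
        impacted, stack = set(), [device]
        while stack:
            for child in graph.get(stack.pop(), []):
                if child not in impacted:
                    impacted.add(child)
                    stack.append(child)
        return impacted

    print(sorted(affected_by("pdu_1", downstream)))
    # -> ['rack_12', 'server_db_01', 'server_web_01']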

That is where DCIM comes in. A comprehensive DCIM suite allows you to understand exactly what is in production (right down to the patch cable color) and to quantify all of the costs involved in delivering IT services (product offerings). A solid DCIM solution allows you to create a financial model that supports your catalog of IT products. Each IT service/product that you wish to sell can be quantified to determine the true cost to deliver it to each customer and the timeframes involved. The DCIM suite also enables you to plan and manage technology refreshes of aging equipment, which affects costs as well, and it enables the IT organization to shorten delivery times. DCIM really is a fundamental business planning tool for those IT professionals looking forward rather than backward.

And what happens if you don’t implement DCIM and find your costs are different from what you originally guessed, or your turn-around times are too long? Buyers go elsewhere. A third of them, to be exact. Buyers want the right products, at the right price and in a reasonable timeframe. In the “New IT” model, buyers have the ability to go elsewhere for these IT products, and IT organizations must adapt to delivering their offerings in a competitive market according to the buyers’ rules. DCIM is the enabler to do so.


Connector This, Connector That

On July 29, 2014, in View from the Top, by Matt Bushell
Nlyte DCIM has off-the-shelf integrations (connectors) for industry leading CMDBs

If you’ve been following Nlyte at all over the past year, you will have observed several announcements regarding Connectors to our DCIM suite. You might ask, “What’s all the fuss?” or, “What’s wrong with your suite that you need all these Connectors?” or, “Yet another connector?” or, “Aren’t all connectors the same?” So allow us to explain. The answer to the last question is a resounding “no, not all connectors are the same.” Part of the appeal of Nlyte’s march on connectordom is that our connectors are flexible and don’t require re-testing and re-programming every time software is updated; all that’s required is perhaps some reconfiguration, and that’s it.

To answer the other questions, let’s examine the roots of Data Center Infrastructure Management itself. According to Wikipedia, DCIM is a category of solutions created to extend the traditional data center management function to include all of the physical assets and resources found in the Facilities and IT domains, and DCIM deployments will, over time, integrate the information technology (IT) and facility management disciplines. This is the rationale behind Nlyte’s recent wave of connector rollouts. The fuss is about connecting to the IT domain; and not just any part of IT, but the parts that have the most to do with a data center’s physical infrastructure, via the industry’s leading products.

Let’s start with configuration management databases (CMDBs), which host information on configuration items (CIs), or assets. Again, citing our good friend Wikipedia:

Its contents are intended to hold a collection of IT assets that are commonly referred to as Configuration Items (CIs), as well as descriptive relationships between such assets. When populated, the repository becomes a means of understanding how critical assets such as information systems are composed, what their upstream sources or dependencies are, and what their downstream targets are.

Any DCIM system not connected to an organization’s CMDBs is asking that organization to silo its information. Instead, a connected CMDB-DCIM pairing allows each system to enrich the other.
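To sketch what “enrich the other” might mean in practice, here is a minimal, hypothetical reconciliation pass in Python: the CMDB record contributes ownership data, the DCIM record contributes physical placement, and the two are merged on the asset’s serial number. Neither schema comes from a real product.

    # Hypothetical CI/asset records; real connectors map vendor-specific schemas.
    cmdb_record = {"serial": "SN-1001", "owner": "payments-team", "status": "in-service"}
    dcim_record = {"serial": "SN-1001", "rack": "R12", "u_position": 17, "power_feed": "pdu_2"}

    def reconcile(cmdb, dcim):
        """Merge two views of one asset, flagging field-level disagreements."""
        if cmdb["serial"] != dcim["serial"]:
            raise ValueError("records describe different assets")
        conflicts = [k for k in cmdb.keys() & dcim.keys() if cmdb[k] != dcim[k]]
        merged = {**cmdb, **dcim}  # DCIM wins on overlapping physical fields
        return merged, conflicts

    merged, conflicts = reconcile(cmdb_record, dcim_record)
    print(merged)     # one enriched view: ownership plus physical placement
    print(conflicts)  # non-empty lists would go to a reconciliation queue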

Another key piece of ITSM software is the change management system. Its Wikipedia entry shines some light on the subject:

The objective of change management…is to ensure that standardized methods and procedures are used for efficient and prompt handling of all changes to control IT infrastructure, in order to minimize the number and impact of any related incidents upon service.

Sounds scarily similar to a large part of what DCIM is supposed to do, right? Manage moves, adds and changes within the data center? Again, by not connecting, syncing and coordinating with change management systems, you run the obvious risk of siloing your IT team from your data center team and becoming uncoordinated, with the potential for great loss (too great to cover here).
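A minimal sketch of what that coordination could look like: when a move/add/change is planned on the DCIM side, a matching change request is raised in the ITSM system. The `ItsmClient` below is a purely hypothetical stand-in, not any vendor’s real API.

    # Purely illustrative: ItsmClient is a hypothetical stand-in, not a real API.
    class ItsmClient:
        def create_change_request(self, summary, risk, planned_start):
            print(f"[ITSM] change raised: {summary} (risk={risk}, start={planned_start})")
            return "CHG-0042"

    def plan_rack_move(itsm, asset_serial, from_rack, to_rack, window):
        """Every physical change carries a change-management ticket, keeping
        the IT team and the data center team out of separate silos."""
        ticket = itsm.create_change_request(
            summary=f"Move {asset_serial} from {from_rack} to {to_rack}",
            risk="medium",
            planned_start=window,
        )
        return ticket  # the DCIM side stores the ticket ID against the move for audit

    plan_rack_move(ItsmClient(), "SN-1001", "R12", "R14", "2014-08-02T02:00Z")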

And I haven’t even considered virtualization hypervisors, without which you may not know where your VMs are running, and on which machines, should there be a power, space or cooling issue.

So stay tuned, the team at Nlyte is dedicated to building the bridge between DCIM and ITSM!

Nlyte offers Connectors in these categories:

CMDB/Discovery – for bi-directional configuration item (CI) / asset information sharing and reconciliation

Change Management – for workflow process management and communication

Virtualization Hypervisors – for simplifying the management of physical and virtualized resources

Sensors – for tightly integrating asset location and/or environmental performance information

Power and Rack Planning – for power and rack capacity planning


“220, 221, Whatever it Takes!”

On June 11, 2014, in Uncategorized, by Mark Harris

DCIM integration with ITSM systems is critically important and NOT an adventure to be executed with brute force.

“220, 221… Whatever it takes!” is a famous line of dialog from one of my favorite classic movies, “Mr. Mom,” starring Michael Keaton and Teri Garr. In the movie, Michael’s character finds himself between jobs and his wife becomes the breadwinner. During his hiatus from work, he occupies his time with odd projects around the home. At one point, Teri Garr’s boss (played by Martin Mull) comes over to the house to pick her up for a business trip, and in a classic demonstration of male egos in contest, Michael’s character, holding a chainsaw in one hand, tells Martin that he is “re-wiring the house,” which apparently is his way of expressing his masculinity (even though he ostensibly is just watching the kids). Mull’s character queries, “220?” (referring to a common voltage standard used for high-power appliances), to which Michael’s character replies, “220, 221, whatever it takes!” Clearly Michael’s character had no clue what the question even meant (there is no such thing as “221”), but in his desire to appear knowledgeable, he responded in a fashion that would sound good to the uninformed.

Why do I offer this excerpt from this classic movie here on Nlyte’s blog? Let me explain. I was at dinner a few days ago with an industry analyst, and we were discussing the DCIM marketplace. While we spent nearly an hour volleying observations about the maturation of end-users’ DCIM needs, one of the ancillary topics this analyst raised was that a great many of the 50 or so vendors who self-declare their participation in the DCIM segment still confuse end users with their over-zealous desire to say “YES.” These DCIM vendors want to say yes to every question, regardless of what it is, even when they don’t really understand it. Most concerning, the questions answered with this “yes” are most often the ones that deal with integration with external systems, the very core of DCIM success and part of the long-term strategy needed to optimize a data center.

Why is this so troubling? In anyone’s book this should be a huge RED FLAG and a really bad practice, since it sets the stage for failed DCIM projects and mis-set expectations. Effective integrations must be well architected and flexible. Complex systems change all the time; whether it is an ERP, Service Desk, CMDB or DCIM offering, enterprise-scale software versions change regularly. Without a formal, well-architected approach to integration with crisp interface demarcations, large amounts of hardwired spaghetti code get generated, which ultimately becomes the Achilles’ heel of the end-user’s progress. Most of us have lived through one of the horror stories of complex ERP integrations from years ago, which effectively stranded their buyers on older versions of those enterprise systems due to brute-force, hardcoded integrations. What typically happens is that the end user is faced with the high cost and complexity of constantly re-writing these hardwired integrations each time a target system’s version changes. The result? Many of these customers simply opt to keep using their older systems, missing all of the innovation and new capabilities that modern buyers are enjoying.

That brings me to Nlyte’s newest announcement, made this week, about the latest release of our integration framework. In the process of working with hundreds of end-user customers over nearly 10 years, we have become very good at eliminating the programming typically required to connect systems. The latest version of the framework, shipping as part of Nlyte today, allows our family of external-system connectors to be entirely programming-free, and thus to continue delivering their intended value regardless of software version changes on either side of the connector. The new integration framework forms the basis for our available connectors and allows us to introduce new, high-quality connectors for other third-party systems quite rapidly. Customers that license any of our connectors can be assured that the solution they purchase today will continue to work tomorrow. They are designed to do so.
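To illustrate the configuration-over-code idea in the abstract, here is a minimal Python sketch in which the field mapping lives in data rather than in hardwired logic, so a schema change on either side means editing the mapping, not rewriting the integration. This is a generic pattern for illustration, not Nlyte’s actual framework.

    # Generic mapping-as-configuration pattern; not Nlyte's real framework.
    # If a target system's schema changes, only FIELD_MAP changes; no code rewrite.
    FIELD_MAP = {
        # source (DCIM) field -> target (ITSM) field
        "serial_number": "asset_tag",
        "rack_location": "u_location",
        "model_name":    "model",
    }

    def translate(record, field_map):
        """Project a source record onto the target schema via configuration."""
        return {target: record[source]
                for source, target in field_map.items()
                if source in record}

    dcim_record = {"serial_number": "SN-1001", "rack_location": "R12/U17",
                   "model_name": "ExampleServer 9000"}
    print(translate(dcim_record, FIELD_MAP))
    # {'asset_tag': 'SN-1001', 'u_location': 'R12/U17', 'model': 'ExampleServer 9000'}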

“220, 221… Whatever it Takes!” We know what it takes, so ask us to prove it to you today! You can find us at trade shows in Orlando and Las Vegas this week, and in Santa Clara next week. (See our events calendar)


This year is shaping up to be one of the most transformative years in IT’s history. Sure, I know you’ve heard it all before. In fact, it seems everyone always says “this year is going to be different.” You may have heard it before, but most likely NOT from ALL of the vendor players at the same time. Ask your VP of IT or CIO, and I am sure they will agree that THIS YEAR REALLY IS DIFFERENT, since the fundamental basis of IT is being changed in all respects right under their feet! There are simply so many moving parts technically: free-air cooling, ARM chips, BYOD, cloud, modular, wireless, all coupled with a tremendous amount of quantifiable business goal-setting being added on top.

I was presenting a session two weeks ago in Washington, D.C. to a large federal audience. My thesis for the discussion was that efficiency must be tangible to be recognizable. To become more efficient at doing something, a unit of measure for that ‘something’ must be identified, and I suggested that the ONLY units of measure that mattered in 2014 were those values that touched their customers or constituents. Every organization has a unit of work that benefits its customers and can be measured. Adopting this thought process, each IT organization must first think long and hard about what its unit of ‘value’ is and what work is needed to deliver that unit of value. In essence, what is the bounded ‘transaction’ it is chartered to deliver that has direct value to ITS constituents or customers? Many people in the audience hadn’t thought about a ‘transaction’ as something they could articulate, but this was exactly my main point and really the task at hand. At the IRS, for instance, the unit of work could be processing a 1040 tax form. At the Census Bureau, it could be counting and verifying an American citizen. In the commercial world, it could be the cost of an email message or of selling an item in a storefront.

Every organization has a unit of delivered work, and in the BIG picture, reducing the cost for IT to handle that unit of work (at the same quality and reliability) is a good thing. It’s really simple math, and yet it’s not just about buying lower-cost servers or building a data center in Oregon where power is cheaper. It’s about the entire burden that the IT structure adds to the cost of doing work. It’s technology, people, process, culture and a litany of other factors that can be quantified, rolled up and then calculated to determine whether the whole thing is getting more efficient on a per-unit-of-work basis.

Gartner I&O Summit

Change I&O Today to be Indispensable Tomorrow

Gartner is holding a three-day I&O event that calls attention to this transformation, already deeply underway, in key areas such as creating value, IT culture change, metrics, prioritization and investment strategies, and improving I&O maturity. From Gartner’s write-up on the upcoming I&O conference being held mid-June in Florida: “Successful digital enterprises cannot be built upon a brittle Infrastructure and Operations (I&O) foundation – now is the time for I&O leaders to reject outmoded operating models, antiquated structures and technologies, and declining business relevance, and assume a key position of influence over the digital future of your enterprise.” I couldn’t have said it better myself. There are simply so many ways to make real progress in reducing the cost to deliver services to the business, and most of them are complementary. Most will deliver real, quantifiable value, but many DO REQUIRE CHANGE. New ways of doing things, new technologies, new goals.

For those of you keeping a pulse on the business of IT, this is just another aspect of IT Service Management. Service Management is the ability to deliver services at the right level and, adding modern economics to the traditional definition, at the right cost per unit. “Services” can be anything that is quantifiable. Service Management, viewed from the top down, takes into account all of the direct and indirect costs. For example, to deliver a “transaction,” a certain amount of server equipment is needed, but just as important is the bandwidth to move the transaction. Power is needed for the servers, switches and storage. What about the costs to cool those devices, and the real estate cost per square foot? How about the costs of warranty and repair? Perhaps the overhead associated with billing and collections, remediation and administration. And don’t forget the costs associated with resiliency and/or redundancy.
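Here is a minimal worked example of that top-down math in Python, rolling up the direct and indirect costs named above and dividing by the units of work delivered. Every figure is an invented assumption for illustration.

    # Illustrative only: all figures are assumptions, not measurements.
    monthly_costs = {
        "servers_and_storage":   20000.0,
        "bandwidth":              3000.0,
        "power":                  4000.0,
        "cooling":                2500.0,
        "real_estate":            3500.0,
        "warranty_and_repair":    1000.0,
        "billing_and_admin":      2000.0,
        "resiliency_redundancy":  4000.0,
    }
    transactions_per_month = 1_200_000  # the chosen "unit of work"

    cost_per_transaction = sum(monthly_costs.values()) / transactions_per_month
    print(f"Fully-burdened cost per transaction: ${cost_per_transaction:.4f}")
    # -> Fully-burdened cost per transaction: $0.0333

Efficiency then has a concrete definition: the operation is improving only if this per-transaction number goes down while quality and reliability hold steady.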

Did I forget anything? You bet! There is a fundamental need to keep the entire IT structure ON THE CURVES of technology. What we used to call “Tech Refresh” becomes critically important today, since staying on ALL OF THE CURVES reduces the cost of processing each of those transactions. That is one of the most overlooked opportunities for the NEW IT management chain to realize: just how important it is to keep the whole infrastructure as close as possible to the modern production curve. The highest value of a server is in the first 3-4 years of its life, and everything after that period COSTS MONEY. The same is true of a switch or storage array. So while all the technology moving parts present themselves individually, the challenge is how to create the processes to take advantage of them all in a production world. That, my friends, is the REAL task at hand: designing and implementing the processes needed to stay on the curve. (Shameless plug: call Nlyte. That’s what WE DO for our customers.)

This year really *IS* different, and I expect to hear a wealth of information about these topics in Florida. If you are going to be at the show, stop by the Nlyte booth and say Hi!

