“220, 221, Whatever it Takes!”

On June 11, 2014, in Uncategorized, by Mark Harris
DCIM: Whatever it Takes!

DCIM integration with ITSM systems is critically important and NOT an adventure to be executed with brute force.

“220, 221… Whatever it takes!” is a famous line from one of my favorite classic movies, “Mr. Mom,” starring Michael Keaton and Teri Garr. In the movie, Michael’s character finds himself between jobs and his wife becomes the breadwinner. During his hiatus from work, he occupies his time with odd projects around the home. At one point, Teri Garr’s boss (played by Martin Mull) comes over to the house to pick her up for a business trip, and in a classic demonstration of male egos in contest, Michael’s character, holding a chainsaw in one hand, tells Martin’s character that he is “re-wiring the house,” which is apparently his way of expressing his masculinity (even though he ostensibly is just watching the kids). Mull’s character queries, “220?” (referring to a common voltage standard used for high-power appliances), to which Michael’s character replies, “220, 221, whatever it takes!” Clearly Michael’s character had no clue what the question even meant (there is no such thing as “221”), but in his desire to appear knowledgeable, he responded in a fashion that would have sounded good to the uninformed.

Why do I offer this excerpt from a classic movie here on Nlyte’s blog? Let me explain. I was at dinner a few days ago with an industry analyst, and we were discussing the DCIM marketplace. While we spent nearly an hour volleying observations about the maturation of end-users’ DCIM needs, one of the ancillary topics this analyst raised was that a great many of the 50 or so vendors who self-declare their participation in the DCIM segment still confuse end users with their over-zealous desire to say “YES.” These DCIM vendors want to say “yes” to every question, regardless of what it is, even when they don’t really understand it. Most concerning, the questions answered with this reflexive “yes” are most often the ones that deal with integration with external systems, the very core of DCIM success and part of the long-term strategy needed to optimize a data center.

Why is this so troubling? In anyone’s book this should be a huge RED FLAG and a really bad practice, since it sets the stage for failed DCIM projects and mis-set expectations. Effective integrations must be well architected and flexible. Complex systems change all the time: whether it is an ERP, Service Desk, CMDB or DCIM offering, enterprise-scale software versions change regularly. Without a formal, well-architected approach to integration with crisp interface demarcations, large amounts of hardwired spaghetti code will be generated, which ultimately becomes the Achilles’ heel of the end-user’s progress. We have all probably lived through one or more horror stories with complex ERP integrations from years ago that effectively stranded their buyers on older versions of these enterprise systems due to the use of brute-force, hardcoded integrations. What typically happens is that the end user is faced with the high cost and complexity of constantly re-writing these hardwired integrations each time a target system’s version changes. The result? Many of these customers simply opt to keep using their older systems, missing all of the innovation and new capabilities that modern buyers are enjoying.
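To make the idea of a crisp interface demarcation concrete, here is a minimal sketch in Python. It is purely illustrative: the names (TicketConnector, ServiceDeskV5Connector, open_change_request) are hypothetical and do not describe Nlyte’s actual framework or any particular service desk API. The point is simply that the DCIM side is written against a stable interface, so a version change on the far side touches only one adapter instead of every piece of calling code.

```python
from abc import ABC, abstractmethod


class TicketConnector(ABC):
    """Hypothetical demarcation point: DCIM-side code talks only to this
    interface, never directly to a specific service-desk product or version."""

    @abstractmethod
    def open_change_request(self, summary: str, asset_id: str) -> str:
        """Create a change request and return its external ticket ID."""


class ServiceDeskV5Connector(TicketConnector):
    """One concrete adapter per target system and version; moving to a new
    version means replacing this class, not rewriting the calling code."""

    def open_change_request(self, summary: str, asset_id: str) -> str:
        # A real adapter would call the target system's API here.
        return f"CHG-{hash((summary, asset_id)) % 100_000:05d}"


def decommission_server(connector: TicketConnector, asset_id: str) -> str:
    # The workflow logic is written against the interface only.
    return connector.open_change_request(f"Decommission {asset_id}", asset_id)


print(decommission_server(ServiceDeskV5Connector(), "SRV-0042"))
```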

That brings me to the announcement Nlyte made this week about the newest release of our integration framework. In the process of working with hundreds of end-user customers over nearly 10 years, we have become very good at eliminating the programming typically required to connect systems. The latest version of the framework, shipping as part of Nlyte today, allows our family of external system connectors to be entirely programming-free and thus continue to deliver their intended value regardless of the changes being seen in software versions on either side of the connector. The new integration framework forms the basis for our available connectors and allows us to introduce new high-quality connectors for other third-party systems quite rapidly. Customers that license any of our connectors can be assured that the solution they purchased today continues to work tomorrow. They are designed to do so.

“220, 221… Whatever it Takes!” We know what it takes, so ask us to prove it to you today! You can find us at trade shows in Orlando and Las Vegas this week, and in Santa Clara next week. (See our events calendar)


This year is shaping up to be one of the most transformative years in IT’s history. Sure, I know you’ve heard it all before. In fact, it seems everyone always says “this year is going to be different.” You may have heard it before, but most likely NOT from ALL of the vendor players at the same time. Ask your VP of IT or CIO and I am sure they will agree: THIS YEAR REALLY IS DIFFERENT, since the fundamental basis of IT is being changed in all respects right under their feet! There are simply so many moving parts technically: free-air cooling, ARM chips, BYOD, Cloud, Modular, Wireless, all coupled with a tremendous amount of quantifiable business goal-setting being added on top.

I was presenting a session two weeks ago in Washington D.C. to a large federal audience. My thesis for the discussion was that efficiency must be tangible to be recognizable. To become more efficient at doing something, a unit of measure of that “something” must be identified, and I suggested that the ONLY units of measure that really mattered in 2014 were those that touched their customers or constituents. Every organization has a unit of work that benefits its customers and can be measured. Adopting this thought process, each IT organization must first think long and hard about what its unit of “value” is and what work is needed to deliver that unit of value. In essence, what is the bounded “transaction” they are chartered to deliver that has direct value to THEIR constituents or customers? Many people in the audience hadn’t thought about a “transaction” as something they could articulate, but that was exactly my main point, and really the task at hand. At the IRS, for instance, the unit of work could be processing a 1040 tax form. At the Census Bureau, it could be counting and verifying an American citizen. In the commercial world, it could be the cost of an email message or of selling an item in a storefront.

Every organization has a unit of delivered work, and in the BIG picture, reducing the cost for IT to handle that unit of work (at the same quality and reliability) is a good thing. It’s really simple math, and yet it’s not just about buying lower-cost servers or building a data center in Oregon where power is lower priced. It’s about the entire burden that the IT structure adds to the cost to do work. It’s technology, people, process, culture and a litany of other factors that can be quantified, rolled up and then calculated to determine whether the whole thing is getting more efficient on a per-unit-of-work basis.
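For a sense of what that simple math looks like, here is a minimal sketch. The spending and volume figures are entirely hypothetical, invented only for illustration: the fully loaded IT burden (technology, people, process, facilities) divided by the units of work delivered gives a cost per unit that can be tracked year over year.

```python
# Hypothetical figures for illustration only.
yearly = {
    2013: {"it_burden_usd": 4_800_000, "units_of_work": 12_000_000},
    2014: {"it_burden_usd": 5_100_000, "units_of_work": 15_000_000},
}

for year, numbers in yearly.items():
    cost_per_unit = numbers["it_burden_usd"] / numbers["units_of_work"]
    print(f"{year}: ${cost_per_unit:.3f} per unit of work")
```

In this made-up example the absolute IT spend rises, yet the cost per unit of work falls from $0.400 to $0.340, which is exactly the kind of efficiency the per-unit view is meant to expose.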

Gartner I&O Summit

Change I&O Today to be Indispensable Tomorrow

Gartner is holding a 3-day I&O event which calls attention to this transformation already deeply underway, in key areas such as Creating Value, IT culture change, Metrics, Prioritization and Investment strategies, and Improving I&O Maturity. From Gartner’s write-up on the upcoming I&O conference being held mid-June in Florida: “Successful digital enterprises cannot be built upon a brittle Infrastructure and Operations (I&O) foundation – now is the time for I&O leaders to reject outmoded operating models, antiquated structures and technologies, and declining business relevance, and assume a key position of influence over the digital future of your enterprise.” I couldn’t have said it any better myself. There are simply so many ways to make real progress in reducing the cost to deliver services to the business, and most of them are complementary. Most of them will deliver real quantifiable value, but many of them DO REQUIRE CHANGE. New ways of doing things, new technologies, new goals.

For those of you keeping a pulse on the business of IT, this is just another aspect of IT Service Management. Service Management is the ability to deliver services at the right level and, adding modern economics to the traditional definition, to deliver those services at the right cost per unit. “Services” can be anything that is quantifiable. Service Management, viewed from the top down, takes into account all of the direct and indirect costs. For example, to deliver a “transaction,” a certain amount of server equipment is needed, but just as important is the bandwidth to move the transaction. Power is needed to run the servers, switches and storage. What about the costs to cool those devices and the real estate cost per square foot? How about the costs of warranty and repair? Perhaps the overhead associated with billing and collections, remediation and administration. And don’t forget the costs associated with resiliency and/or redundancy.
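As a purely illustrative sketch of that top-down rollup, the snippet below itemizes the direct and indirect cost categories just listed and folds them into a single fully loaded cost per transaction. Every figure, including the transaction volume, is made up for the example.

```python
# Illustrative only: monthly figures invented to show the rollup.
direct_costs = {
    "servers_and_storage": 60_000,
    "network_bandwidth": 12_000,
    "power": 18_000,
}
indirect_costs = {
    "cooling": 9_000,
    "real_estate": 15_000,
    "warranty_and_repair": 4_000,
    "billing_collections_admin": 7_000,
    "resiliency_and_redundancy": 11_000,
}

monthly_burden = sum(direct_costs.values()) + sum(indirect_costs.values())
transactions_per_month = 40_000_000  # hypothetical volume
print(f"Fully loaded cost per transaction: "
      f"${monthly_burden / transactions_per_month:.5f}")
```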

Did I forget anything? You bet! There is a fundamental need to keep the entire IT structure ON THE CURVES of technology. What we used to call “Tech Refresh” becomes critically important today, since staying on ALL OF THE CURVES reduces the cost of processing each of those transactions. That is one of the most overlooked opportunities for the NEW IT management chain to realize… just how important it is to keep the whole infrastructure as close as possible to the modern production curve. The highest value of a server is in the first 3-4 years of its life, and everything after that period COSTS MONEY. The same is true of a switch or storage array. So while all the technology moving parts present themselves individually, the challenge is: how do you create the processes to take advantage of them all in a production world? That, my friends, is the REAL task at hand: designing and implementing the processes needed to stay on the curve. (Shameless plug: Call Nlyte. That’s what WE DO for our customers.)
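To make the stay-on-the-curve point concrete, here is a small, hypothetical sketch that flags any asset older than a four-year refresh window, the age range cited above. The inventory records and field names are invented for illustration and are not drawn from any real DCIM system.

```python
from datetime import date

REFRESH_WINDOW_YEARS = 4  # assumption based on the 3-4 year figure above

# Made-up inventory records for illustration.
inventory = [
    {"asset": "web-srv-01", "installed": date(2009, 5, 1)},
    {"asset": "db-srv-07", "installed": date(2012, 11, 15)},
    {"asset": "san-array-02", "installed": date(2013, 8, 30)},
]

today = date(2014, 6, 1)
for item in inventory:
    age_years = (today - item["installed"]).days / 365.25
    if age_years > REFRESH_WINDOW_YEARS:
        print(f"{item['asset']} is {age_years:.1f} years old: refresh candidate")
```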

This year really *IS* different, and I expect to hear a wealth of information about these topics in Florida. If you are going to be at the show, stop by the Nlyte booth and say Hi!


The world of IT is changing rapidly, and nearly every conceivable IT offering is now being introduced by various vendors “AS A SERVICE.” Do a quick Google search, or talk to your main IT vendors, and you’ll see Desktop as a Service, Internet as a Service, Storage as a Service, Email as a Service, Platform as a Service, Network as a Service, Data Center as a Service and the granddaddy of them all, Software as a Service (which I would humbly offer is where “X as a Service” started). In fact, one of the first wildly popular SaaS offerings was Salesforce.com, which proved that users of enterprise-class solutions would be happy to have certain types of applications priced per month and per user and delivered instantly.

Nlyte’s DCIMaaS On-Demand Delivers!

DCIM is no different, and the strongest of the DCIM players, like Nlyte, have realized this and introduced “DCIMaaS” offerings. It just makes sense. Any well-built web-based application that needs to be used by large numbers of users scattered across many locations is a perfect candidate to be offered as a service. DCIM as a Service from Nlyte was introduced last fall and is gaining tremendous attention from users and analysts alike.

What are the benefits of purchasing DCIM as a Service? The primary benefit of DCIMaaS is the ability to manage costs in a predictable fashion. Nlyte’s On-Demand offering can be instantly deployed at whatever scale is needed and by users at any location within the organization. Nlyte regularly finds dozens of users in several locations benefiting from our software, and those users are supporting 4 or more large data centers, along with a fair number of smaller or co-lo centers. DCIMaaS changes the economics of DCIM since there is a relatively small initial investment, and in general it moves all costs from a capital expense line item to an operations expense line item (eliminating the need to depreciate investments over time). Whereas larger customers may struggle when faced with capital purchases of $250K or more for on-premise DCIM and all of the planning and additional management required, they may find it much easier to justify a monthly expense of, say, $10K or so, knowing that they do not have to build or maintain the new application and that all backend management is handled for them.

DCIMaaS also provides a means to get full-featured, high-value capabilities without the traditional deep commitments required for high-cost purchases. Historically, many customers have been forced to resolve this issue by purchasing much more limited, lower-cost solutions or, even worse, using generic brute-force tools like AutoCAD, Excel or Visio to handle this critical need. And then there is the ongoing maintenance of yet another software package. DCIMaaS from Nlyte assures that software upgrades are handled transparently on the backend, and users will always have the latest and greatest version of the software. And as if all of these reasons weren’t enough, there is no step function to deal with; as the user population and scale of the solution grow, the end-user of a SaaS solution like Nlyte’s On-Demand continues to be serviced without adding costly infrastructure or additional IT staff.
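For a rough sense of the capital-versus-operating trade-off described above, here is a back-of-the-envelope sketch using the $250K and $10K-per-month figures from the paragraph. The five-year horizon, the assumed 20% annual maintenance fee, and the exclusion of hardware, staffing and upgrade costs on the on-premise side are all assumptions made purely for illustration, not actual Nlyte pricing.

```python
# Back-of-the-envelope comparison, illustrative only (not actual pricing).
capex_license = 250_000                    # up-front on-premise license
annual_maintenance = 0.20 * capex_license  # assumed maintenance fee
saas_monthly = 10_000                      # DCIMaaS as a flat operating expense

for year in range(1, 6):
    on_prem_cumulative = capex_license + annual_maintenance * year
    saas_cumulative = saas_monthly * 12 * year
    print(f"Year {year}: on-premise ${on_prem_cumulative:,.0f} "
          f"vs DCIMaaS ${saas_cumulative:,.0f} cumulative")
```

The takeaway is less about which line is lower after five years than that the service model requires no up-front capital and scales with usage, which is the cost-predictability argument made above.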

Whereas the initial “as a Service” frenzy started 15 years ago with applications like Salesforce.com, that was only the beginning of this common-sense approach to IT, and today entire IT departments are being purchased as a service. In some cases it’s the magical lure of virtually unlimited, robust storage that is handled “by someone else.” In some cases it’s the immediate availability of business services without the need to become experts overnight. In nearly all cases, IT looks at the service-oriented means of delivering value as an approach to reduce cost and complexity while increasing reliability and overall performance.

Nlyte introduced our On-Demand offering last fall and immediately found the level of interest to be high. DCIM is a relatively new category of solution, and the enterprises of the world now look at Nlyte’s On-Demand SaaS as a means to “get there” quickly and efficiently without sacrificing anything. Most importantly, it’s really up to the end-user how they would like our solution: On-Premise or On-Demand. Same great software, same great value!

Now if we could just figure out how to get “Starbucks As A Service”, I would be a happy camper…


Workflow is NOT a 4-letter word…

On March 19, 2014, in Uncategorized, by Mark Harris

Mention the term WORKFLOW to many long-term data center operators, managers and technicians and you’ll see a change in their facial expression. Perhaps a little fear, maybe some pain and just a hint of embarrassment. The look will be subtle, and you may not even notice it at first, but it’s there. You’ll hear a polite pause, perhaps a smile, and over the course of the next 60 seconds, you’ll feel the conversation being steered (actually tugged) awkwardly towards ANY other topic. Why? It’s part of the dirty little secret that has been kept by many IT organizations for years: data centers are built in a highly complex fashion, in both hardware and software, and are perfected over long periods of time. Today, many data centers are so complex that once the data center is up and running and its applications are functional, most users do NOT want to do anything that could disrupt that harmony. In essence, CHANGE becomes a 4-letter word.

DCIM Workflow is Critical

So the horror seen in their eyes comes from the fact that connecting “change” to “workflow” is just a single transformation. Workflow goes hand in hand with discipline, planning, and repeatable, defensible best practices. Whereas five years ago there were rewards for midnight cowboys who drove down to the data center in the middle of the night and did whatever was needed to restore services, those types of events are looked at with great scrutiny today. How could it happen? Why did it happen? What was done to restore operations? Where is the documentation of what change was made? What else was impacted? Did the midnight changes solve all the symptoms, or did they also create new ones? All very valid questions, and part of the rationale for workflow.

Data Centers can be optimized by streamlining the processes associated with change in the context of all of the other changes being made. Workflow is the technology that manages this change, and it is the critical toolset missing from the vast majority of enterprise data centers today. Workflows enable changes to be planned and executed more accurately and in less time. Workflows allow tasks to be repeated again and again by any organization or individual. Workflows provide the foundation for change and can enforce good behavior, all while providing continuous “As-Built” documentation for every aspect of the data center. With this current knowledge, impactful decisions can be made at any point in time.

So back to that ‘look’ on the faces of many data center professionals when asked about “workflow”. It’s the look that says “I know that workflow will help everyone in my company, and the use of workflows decreases costs and shows a level of maturity and high level of discipline, BUT… I JUST HAVEN’T GOTTEN TO IT YET because there are simply so many day to day diversions competing for my limited time and resources”. They know it’s the right thing to do, to get in front of the chaos. They know that using workflows will make everyone’s job easier, reduce costs, decrease unplanned downtime, etc. All of the goodness.

So if you are a Data Center professional considering workflow, you should ask yourself, “If NOT now, when?” Can you really afford to delay implementing DCIM with Workflow? Workflow is not going away, and it forms the cornerstone of best practices. It’s good business. Andy Lawrence at The 451 Group says, “it is difficult to achieve the most advanced levels of datacenter effectiveness without extensive use of DCIM” with workflow.

And remember, there are LOTS of 4-letter words that can be used to describe WORKFLOW, starting with “GOOD”!


Committed or Curious?

On February 24, 2014, in Uncategorized, by Mark Harris
Data Center Best Practices

I was in Washington DC last week meeting with quite a few government agencies to discuss all the ways Nlyte can be used to enable data center consolidation, capacity planning and optimization. There was tremendous interest in looking forward with regard to data processing in the federal government, and each agency is handling its planning a bit differently, but they all agreed that the existing data center structures they currently operate are fairly inefficient. Gear is dated, processes are inconsistent, and timeframes to execute even the simplest of tasks are enormous.

Although a wide variety of data center types were represented in those discussions, the vast majority of the people I spoke with seemed to have a genuine interest in the Nlyte approach to optimization, and specifically in proactive asset lifecycle management. They recognized how important it was to know which assets they had, where they were, how they were being used and when they should be retired. They also needed processes to manage all that change. Fundamentally, they would love to have the same granular level of control that their agencies have in more common business management practices, such as budgeting and policy enforcement.

However, given that they collectively seemed to have such a high level of interest in taking big, business-savvy steps forward through the use of “DCIM,” it was a bit surprising to me that several of them also seemed confused about how to execute proactive strategic initiatives while having to deal continuously with reactive daily tactics. If I had to sum up the tone of those discussions, they all shared a common theme: the relative difficulty (and hence the importance) of the process re-engineering required to make any data center (of any type) operate more efficiently. In fact, several folks said, “I have been curious about DCIM for a long time, but this sounds like I will have to retrain and redesign the people and processes that I use today in my data centers to take advantage of DCIM.” Politely, I confirmed that they were correct… and that I was thrilled that they zeroed in so quickly on the BIG opportunity in their data center optimization challenge. I acknowledged that they could continue to make any number of tactical moves, like raising the temperature of the data center, and save a few dollars along the way, but that the BIG optimization available to them required a commitment to process efficiency and workflow management.

Data Center optimization is all about managing change more effectively, and it is the direct result of identifying inefficient behaviors and then correcting them. Nlyte’s asset lifecycle management solution is the most capable offering in this space to help manage that change, and it provides a means to model good behavior and then enforce it. In a nutshell, well-designed processes to deploy new gear, remediate existing gear, and then retire aged gear are what this is all about. Once these new, well-conceived processes are adopted, users find that supporting new applications and planning for their own future becomes significantly easier.

Yes, it does take a certain amount of discipline to reap the benefits possible through DCIM. Times have changed, and today there is significant support throughout the ranks to think differently and think smarter. It’s not just about keeping everything running, it’s about keeping it running at the right cost. The Nlyte solution is one of the easiest ways to think smarter and enable those BIG cost savings once you have decided that it is the right thing to do. People have to be willing to re-think how they solved their data center change problems perhaps 10 or more years ago and instead challenge themselves to solve the data center challenge using modern means and business metrics.

The question people have to ask themselves: Is it better to hold fast to a sinking ship, one that is well recognized to be inefficient, or is it better to take the leap and swim as fast as possible to the rescue ship with modern proven approaches? Methinks there is a short-term versus long-term answer to this question. What is your timeframe?

