Workflow is NOT a 4-letter word…

On March 19, 2014, in Uncategorized, by Mark Harris

Mention the term WORKFLOW to many long-term data center operators, managers and technicians and you’ll see a change in their facial expression. Perhaps a little fear, maybe some pain and just a hint of embarrassment. The look will be subtle, and you may not even notice it at first, but it’s there. You’ll hear a polite pause, perhaps see a smile, and over the course of the next 60 seconds you’ll feel the conversation being steered (actually tugged) awkwardly towards ANY other topic. Why? It’s part of the dirty little secret that many IT organizations have kept for years: data centers are built in a highly complex fashion, in both hardware and software, and are perfected over long periods of time. Today, many data centers are so complex that once the data center is up and running and its contained applications are functional, most users do NOT want to do anything that could disrupt that harmony. In essence, CHANGE becomes a 4-letter word.

DCIM Workflow is Critical

So the horror in their eyes comes from the fact that getting from “change” to “workflow” takes just a single mental step. Workflow goes hand in hand with discipline, planning, and repeatable, defensible best practices. Whereas five years ago there were rewards for midnight cowboys who drove down to the data center in the middle of the night and did whatever was needed to restore services, those events are looked at with great scrutiny today. How could it happen? Why did it happen? What was done to restore operations? Where is the documentation of what change was made? What else was impacted? Did the midnight changes solve all the symptoms, or did they create new ones? All very valid questions, and all part of the reason for workflow.

Data centers can be optimized by streamlining the processes associated with change, in the context of all of the other changes being made. Workflow is the technology that manages this change, and it is the critical toolset missing from the vast majority of enterprise data centers today. Workflows enable changes to be planned and executed more accurately and in less time. Workflows allow tasks to be repeated again and again by any organization or individual. Workflows provide the foundation for change and can enforce good behavior, all while providing continuous “As-Built” documentation for every aspect of the data center. With that current knowledge in hand, informed decisions can be made at any point in time.
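
To make this concrete, here is a minimal sketch (in Python, with purely illustrative names; this is not Nlyte’s actual data model or API) of the core idea: a workflow turns a change into a sequence of owned, auditable steps, and completing each step automatically extends the “As-Built” record.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class WorkflowStep:
    """One planned, accountable step within a data center change."""
    description: str                          # e.g. "Rack server in cabinet A-12, U20"
    owner: str                                # who is responsible for this step
    completed_at: Optional[datetime] = None

@dataclass
class ChangeWorkflow:
    """A planned change executed step by step, leaving an audit trail."""
    ticket_id: str
    steps: list = field(default_factory=list)
    as_built_log: list = field(default_factory=list)

    def complete_step(self, index: int) -> None:
        step = self.steps[index]
        step.completed_at = datetime.now()
        # Every completed step extends the running "As-Built" documentation.
        self.as_built_log.append(
            f"{step.completed_at:%Y-%m-%d %H:%M} {step.owner}: {step.description}"
        )
```

The point is structural: nothing gets done that wasn’t planned, and nothing that gets done goes undocumented.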

So back to that ‘look’ on the faces of many data center professionals when asked about workflow. It’s the look that says, “I know that workflow will help everyone in my company, and that using workflows reduces costs and demonstrates maturity and discipline, BUT… I JUST HAVEN’T GOTTEN TO IT YET, because there are simply so many day-to-day diversions competing for my limited time and resources.” They know it’s the right thing to do, to get in front of the chaos. They know that using workflows will make everyone’s job easier, reduce costs, decrease unplanned downtime, and so on. All of the goodness.

So if you are a data center professional considering workflow, you should ask yourself, “If NOT now, WHEN?” Can you really afford to delay implementing DCIM with workflow? Workflow is not going away, and it forms the cornerstone of best practices. It’s good business. Andy Lawrence at The 451 Group says, “it is difficult to achieve the most advanced levels of datacenter effectiveness without extensive use of DCIM”, and workflow is central to that use.

And remember, there are LOTS of 4-letter words that can be used to describe WORKFLOW, starting with “GOOD”!


Committed or Curious?

On February 24, 2014, in Uncategorized, by Mark Harris

Data Center Best Practices

I was in Washington, DC last week meeting with quite a few government agencies to discuss all the ways Nlyte can be used to enable data center consolidation, capacity planning and optimization. There was tremendous interest in looking forward with regard to data processing in the federal government. Each agency is handling its planning a bit differently, but they all agreed that the existing data center structures they currently operate are fairly inefficient: gear is dated, processes are inconsistent, and the timeframes to execute even the simplest of tasks are enormous.

Although a wide variety of data center types were represented in those discussions, the vast majority of the people I spoke with seemed to have a genuine interest in the Nlyte approach to optimization, and specifically in proactive asset lifecycle management. They recognized how important it was to know which assets they had, where they were, how they were being used and when they should be retired. They also needed processes to manage all that change. Fundamentally, they would love to have the same granular level of control over the data center that their agencies have over more common business management practices, such as budgeting and policy enforcement.

However, given that they collectively seemed to have such a high level of interest in taking big, business-savvy steps forward through the use of “DCIM”, it was a bit surprising to me that several of them also seemed confused about how to execute proactive strategic initiatives while continuously dealing with reactive daily tactics. If I had to sum up the tone of those discussions, they all shared a common theme: the relative difficulty (and hence the importance) of the process re-engineering required to make any data center, of any type, operate more efficiently. In fact, several folks said, “I have been curious about DCIM for a long time, but this sounds like I will have to retrain and redesign the people and processes that I use today in my data centers to take advantage of DCIM.” Politely, I confirmed that they were correct…. and that I was thrilled they zeroed in so quickly on the BIG opportunity in their data center optimization challenge. I acknowledged that they could continue to make any number of tactical moves, like raising the temperature of the data center, and save a few dollars along the way, but that the BIG optimization available to them required a commitment to process efficiency and workflow management.

Data center optimization is all about managing change more effectively, and it is the direct result of identifying inefficient behaviors and then correcting them. Nlyte’s asset lifecycle management solution is the most capable offering in this space to help manage that change, and it provides a means to model good behavior and then enforce it. In a nutshell, well-designed processes to deploy new gear, remediate existing gear, and retire aged gear are what this is all about. Once these new, well-conceived processes are adopted, users find that supporting new applications and planning for their own future becomes significantly easier.

Yes, it does take a certain amount of discipline to reap the benefits possible through DCIM. Times have changed, and today there is significant support throughout the ranks to think differently and think smarter. It’s not just about keeping everything running; it’s about keeping it running at the right cost. The Nlyte solution is one of the easiest ways to think smarter and unlock those BIG cost savings once you have decided it is the right thing to do. People have to be willing to re-think how they solved their data center change problems perhaps ten or more years ago, and instead challenge themselves to solve today’s data center challenge using modern means and business metrics.

The question people have to ask themselves is: Is it better to hold fast to a sinking ship, one that is well recognized to be inefficient, or is it better to take the leap and swim as fast as possible to the rescue ship with its modern, proven approaches? Methinks there is a short-term versus long-term answer to this question. What is your timeframe?


There are a great number of data center computing models in play today. Some companies are building new data centers. Some are consolidating old ones. The use of co-location facilities is on the rise, and so is modular. And don’t forget the Cloud for some applications. At the end of the day, work is being handled by any number of combinations of data centers, each with gear that needs to be actively managed. The choice of computing platform is usually based upon a set of business reasons (such as cost, security and performance). For some reason there is still a bit of confusion about how and why Nlyte’s DCIM offering would be used across all of these styles. Well, let me answer with a definitive “YES”: Nlyte is the perfect complement to ALL of these styles, and to any combination of them. Let me explain…

Nlyte manages the lifecycle of physical assets for the purpose of capacity planning, resource management and cost containment. Our solution allows any type of physical asset to be described in great detail, in real time, and in the context of everything around it, and then allows each asset’s lifecycle to be managed over long periods of time using business process workflows. Nlyte manages change in the physical layer, and it scales to any number of racks in any number of locations. When it comes to managing change, your equipment inside a co-lo or modular structure looks just like an “in-house” data center. Sure, the processes are a bit different, but it can, and SHOULD, be managed proactively.
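
As a rough illustration (the states and transitions below are invented for this sketch, not taken from Nlyte’s product), asset lifecycle management can be pictured as a state machine in which every move an asset makes is validated and recorded, regardless of whether the rack sits in-house, in a co-lo, or in a modular enclosure:

```python
# Allowed lifecycle moves; anything outside this map is rejected and flagged.
ALLOWED_TRANSITIONS = {
    "procured":       {"installed"},
    "installed":      {"live"},
    "live":           {"maintenance", "decommissioned"},
    "maintenance":    {"live", "decommissioned"},
    "decommissioned": {"retired"},
}

class Asset:
    def __init__(self, asset_id: str, location: str):
        self.asset_id = asset_id
        self.location = location   # in-house, co-lo, or modular: managed the same way
        self.state = "procured"
        self.history = []          # the asset's own "As-Built" trail

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED_TRANSITIONS.get(self.state, set()):
            raise ValueError(
                f"{self.asset_id}: {self.state} -> {new_state} is not an allowed move")
        self.history.append(f"{self.state} -> {new_state}")
        self.state = new_state
```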

So in terms of data centers, every location counts as one. Have a traditional data center downstairs? It’s a data center. Have you deployed racks inside a Co-Lo? It’s a data center. Have some gear installed inside modular enclosures in your regions? Yup, they count too. You end up with a number of “data centers”, regardless of how they were constructed or financed. Nlyte is the right answer to manage ALL of your assets regardless of where they are installed.

The Nlyte solution fits anywhere there are racks of gear, and it is usually acquired by the owners of that gear. In the case of in-house, co-lo and modular deployments, the end-users are the owners of the gear. Sure, it’s installed in various types of structures and in various locations, but the hardware assets are still owned by the end-user. This defines the applicability of Nlyte. Nlyte works with hundreds of customers today who have a mix of in-house, co-lo and modular. For us it’s just standard business.

Perhaps the confusing part is the Cloud. Again, think about who owns the gear. In the case of private clouds, the end-user still owns the gear, and hence Nlyte is still a great fit and a perfect complement to in-house data centers. In the case of public clouds, Nlyte is still a perfect fit; the only difference is that the Cloud providers themselves own the physical assets. So Nlyte is talking directly with many of those public Cloud providers to show how Nlyte can manage THEIR physical assets. They have the same issues that end-users do, just at a much larger scale. In fact, they are even more sensitive to asset lifecycles, because the economics of that gear are directly related to the profit a Cloud provider can expect. In the public Cloud business world, provisioning, decommissioning and refreshing gear quickly is the name of the game.

Regardless of computing style, Nlyte’s advanced asset lifecycle management solution fits the bill!


The 5 W’s of DCIM

On January 30, 2014, in DCIM Strategy, by Matt Bushell

To some people, DCIM is not so mysterious: they’ve been following the industry, perhaps have some form of it, or participate in it (most likely as a vendor). This post explores the Who, What, When, Where and Why of DCIM, to demonstrate a “what is possible” point of view.

WHO – as in, “Who is DCIM for?”  This is my favorite part, because DCIM touches so many people: IT (right up to the CIO), Infrastructure & Operations, Finance (for sure), Data Center Management (obviously), and, if an organization is large enough, an office of Corporate Social Responsibility (think “Energy Conservation”).

WHAT – as in, “What can a good DCIM system do?”  This is perhaps the broadest-ranging question, and it is often open to interpretation. Certainly physical infrastructure (lifecycle) asset management – who owns what, for how long, where it is, and what is running on it (that’s pretty much the 5 W’s right there). Capacity planning – what-if scenarios for consolidation, migration, and IT project roll-outs. Monitoring – tracking the power, temperature and cooling of your data centers and their assets.
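
For instance, the capacity planning piece boils down to what-if arithmetic like the following (a hypothetical sketch; the function and the numbers are invented for illustration, not drawn from any DCIM product): does a planned roll-out fit the space and power a rack has left?

```python
def fits_in_rack(free_u: int, power_headroom_w: float,
                 servers: int, u_per_server: int, watts_per_server: float) -> bool:
    """What-if check: does a planned deployment fit a rack's space and power budget?"""
    return (servers * u_per_server <= free_u
            and servers * watts_per_server <= power_headroom_w)

# 12 x 2U servers drawing ~450 W each, into a rack with 24U and 5 kW of headroom:
# the space fits exactly, but 5,400 W exceeds the power budget.
print(fits_in_rack(free_u=24, power_headroom_w=5000,
                   servers=12, u_per_server=2, watts_per_server=450))  # False
```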

WHEN – as in, “When should I be looking at DCIM?”  Oftentimes a large event triggers this thinking: a migration, a consolidation, a new data center. All valid reasons. But because any data center worth its salt is constantly moving out older, under-performing assets and moving in replacements, the answer should be “as soon as possible.”  More on this in the “Why” section.

WHERE – as in, “Where can I, or should I, deploy DCIM?” With, ahem, certain software-as-a-service offerings, DCIM no longer has to span a whole data center to deliver ROI, or require a minimum number of racks to get going. So the answer is simply, “anywhere and everywhere you have data center physical infrastructure, you should have a DCIM system.”

And lastly, perhaps most importantly…

WHY – as in, “Why do I really need a DCIM system?”  You may be thinking that your existing way of managing works just fine, or that it is so hard to justify new software in your company that you’ll never be able to procure it. The truth is, DCIM has a strong ROI. Nlyte clients see 50% savings in migrations and consolidations, postpone data center build-outs by years, and realize huge power savings by getting old equipment out of service. So there’s a lot being left on the table by NOT having a DCIM system – give it a try!
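
As a back-of-the-envelope example (only the 50% migration-savings rate comes from this post; every other number below is hypothetical), the ROI arithmetic looks like this:

```python
# Hypothetical inputs; only the 50% migration-savings rate comes from the post.
migration_budget = 400_000        # planned migration/consolidation spend ($)
migration_savings = 0.50 * migration_budget

idle_servers_retired = 100        # old, under-used gear found and decommissioned
watts_per_idle_server = 300
kwh_price = 0.10                  # $/kWh
hours_per_year = 8760
power_savings = (idle_servers_retired * watts_per_idle_server / 1000
                 * hours_per_year * kwh_price)

print(f"Migration savings:    ${migration_savings:,.0f}")    # $200,000
print(f"Annual power savings: ${power_savings:,.0f}")         # $26,280
```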



The Industry’s Proven DCIM delivered as a Service

On January 20, 2014, in Announcements, by Matt Bushell

You can say that Nlyte arguably started DCIM, rolling out our first product in 2004. But you *may* not know that we also run the world’s largest single-instance DCIM deployments in terms of the number of racks and assets under management. Well, now you can also say that Nlyte has that same amazing DCIM suite available as a Service!  Last month, on December 9th, at the Gartner Data Center Conference in Las Vegas, Nlyte announced Nlyte On-Demand, so companies of all sizes can acquire DCIM rapidly and without major up-front costs.  I dare say that if there had been a “Best of Show” award, Nlyte On-Demand would have won it hands-down. Seriously, being able to immediately use the results of 300 person-years of engineering dedicated solely to optimizing asset lifecycles in the data center is incredible.

The timing was prescient: at the Gartner show we got all kinds of inquiries about what DCIM can do, and we were proudly able to discuss both our on-premise version and our just-announced On-Demand version. Same great taste, just different purchase models based upon a user’s needs. Whether it was a higher-education institution, a division of a large company or a small-to-medium company, they all felt part of the DCIM community, thanks in part to just knowing that there is an appropriate offering out there for them. And for potential buyers of our new On-Demand service offering, it won’t be daunting to get the subscription-based expense approved. And like any good software-as-a-service, Nlyte On-Demand offers a hands-on trial so you can test the product directly! They say the proof is in the pudding, so by all means, come try On-Demand. It tastes great!

Don’t forget to tune in to our upcoming webinar on Wednesday the 22nd, where we will go over all of the virtues of On-Demand and discuss many of the successes our customers are telling us about daily.

