With the advent of the Nlyte 7 launch, Nlyte’s CEO, Doug Sabella, was interviewed by the Huffington Post to discuss key considerations for managing assets in the data center and to provide insight into how the new Nlyte 7 platform can maximize the financial benefits from the optimized utilization of power, space and assets.
The discussion kicked off with reference to an anti-data-center series that The New York Times ran last fall. The series, “Power, Pollution and the Internet,” essentially called out data centers as bad for the environment and the cause of “brown clouds.”
At Nlyte, we believe the problem is not that data centers are inherently bad for the environment, but that most rely on old technology and dated or analog inventory-tracking methods, and that many companies over-rely on Power Usage Effectiveness (PUE) as their main point of energy measurement. Relying on PUE alone can inadvertently cause companies to use more energy and increase data center costs, because this measurement does not factor in all of the layers that now exist in the data center.
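To illustrate why PUE alone can mislead: PUE is defined as total facility energy divided by IT equipment energy, so anything that raises IT load – including servers doing no useful work – makes the ratio look better. A minimal sketch, with made-up numbers:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# A facility can "improve" its PUE while wasting more energy overall.
# Idle servers raise the IT load (the denominator), lowering PUE even
# though no additional useful work is being done.
before = pue(total_facility_kwh=1500, it_equipment_kwh=1000)  # 1.5
after = pue(total_facility_kwh=1700, it_equipment_kwh=1250)   # 1.36 -- "better" PUE, more energy
```

The second facility reports a better PUE while consuming 200 MWh more, which is exactly the blind spot described above.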
To really maximize efficiencies in the data center, companies must be able to assess inefficiencies across all physical, virtual and IT logical layers.
The Nlyte software platform addresses each of the aforementioned layers, enabling customers to plan, track, and maintain all of the mission-critical aspects of a data center. This includes management of all resources (power, cooling, network, and more), combined with the three ‘S’s of space, servers, and storage.
Nlyte recently rolled out Nlyte 7, the industry’s first data center infrastructure management (DCIM) solution to seamlessly integrate with all of an organization’s business IT management fabric. To better align with our customers’ business interests, we now offer the following features:
- Central Data Repository – Provides contextual relationships between all enterprise data center attributes, giving IT comprehensive views and information on their data center.
- Business Intelligence – Nlyte’s industry-unique BI engine provides dashboard and reporting information for trending and what-if scenario planning, with a rich set of included dashboards and reports plus the flexibility for the user to define and create their own.
- Physical and Logical Row Viewing – Nlyte users can now define and build logical row views of physical and logical equipment for detailed analysis of multiple cabinets side by side. User selections can be based on business requirements, whether by geography, line of business, or any other parameter that might be needed. This feature also provides multiple cabinet views of grouped pods or user-defined areas.
- Cabinet Device Overlay Reports – Extends resource visualization from floor plans to rack elevations, identifying potential hot spots, resource consumption, equipment types, organizational ownership, etc.
- Data Model Integrity – Keeps data models accurate through automated reconciliation, using Nlyte Reconciliation in conjunction with BMC’s ADDM or other discovery products.
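The reconciliation idea in that last feature can be sketched in a few lines: compare what the repository says should exist against what a discovery tool actually found, and surface the differences. This is a hypothetical illustration – the asset model and field names are invented, and the real Nlyte/ADDM integration works at a different level of detail:

```python
def reconcile(repository, discovered):
    """Compare a DCIM repository against discovery-tool output.

    Both arguments are dicts keyed by asset serial number, mapping to
    attribute dicts. Purely illustrative data model.
    """
    repo_ids, found_ids = set(repository), set(discovered)
    return {
        "missing_from_repo": found_ids - repo_ids,  # discovered but never recorded
        "missing_on_floor": repo_ids - found_ids,   # recorded but not seen by discovery
        "changed": {sn for sn in repo_ids & found_ids
                    if repository[sn] != discovered[sn]},
    }

repo = {"SN1": {"model": "R740"}, "SN2": {"model": "R640"}}
seen = {"SN2": {"model": "R640"}, "SN3": {"model": "R750"}}
diff = reconcile(repo, seen)
# diff["missing_from_repo"] == {"SN3"}; diff["missing_on_floor"] == {"SN1"}
```

Automating this comparison continuously is what keeps the data model from drifting away from the physical floor.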
Click here to read the entire article.
If you’re anything like me, whether it is your work email or your personal email account, you are constantly receiving emails, reading or scanning most of them, and then simply leaving them in your inbox. To stay. Never to be deleted. You might put them into a mail folder and call that “managing your email.” In my case, my personal Yahoo! account is free, and I’m nowhere near my space limit there. My work account is limited, and since my IT group doesn’t support archiving, I’ve been unable to delete enough emails to fall below my 600 MB limit, and have just been granted 300 MB more space. But that’s email – pretty low impact.
So what’s the corollary to a data center, you ask? Well, think about it: your IT and business groups are constantly submitting requests for new applications, and your finance department is counting on depreciated assets to fall off the books, with Capex already in place for new infrastructure to be rolled in. You’re so busy walking the floor, or checking your spreadsheets or in-house solution (or worse, an inexpensive DCIM solution with no controls) to keep up with demand, that you don’t effectively clean out your inbox: decommissioned servers stay put and become what we in the industry call “zombie” servers. They are dead but not properly buried, so they still consume energy – in the case of real zombies, human brains; in the case of servers, about 60% of peak power draw – not to mention critical space and network connections, too. So you have trouble placing the new assets AND still have the old ones in place AND you’re consuming precious data center resources, which, believe you me, are quite a bit more costly than expanding an email account!
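A quick back-of-envelope calculation shows what that 60% figure means in practice. The wattage and electricity rate below are assumptions chosen for illustration, not figures from any specific data center:

```python
# Assumed: a server with a 500 W peak draw, idling as a "zombie" at
# ~60% of peak, with electricity priced at $0.10 per kWh.
peak_watts = 500
zombie_watts = 0.60 * peak_watts               # ~300 W doing no useful work
kwh_per_year = zombie_watts * 24 * 365 / 1000  # watt-hours -> kWh over a year
cost_per_year = kwh_per_year * 0.10
# Roughly 2,628 kWh and about $263 per year, per zombie -- before counting
# the cooling overhead, rack space, and network ports it also ties up.
```

Multiply that by dozens or hundreds of undead servers and the comparison to a bloated inbox stops being funny.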
But wait, there’s more (bad news). If you treat your data center like an inbox, you are just receiving things whenever other people want to send them, with no awareness of when something is about to arrive. So you might get flooded with requests, with no rhyme or reason, and as a data center manager be unprepared for – and thus seemingly unresponsive to – such an influx. There’s no SLA for an email response (instantaneous?), but for a server installation there are huge repercussions to which a business is constantly exposed. Will that application have available hardware? Do I have the power/space/cooling for that hardware? When can I count on the business savings of that new app? Am I leveraging the full depreciation cycles of my hardware, or are things just sitting on my receiving dock, unbeknownst to me?
So you can clearly see that the stakes in your data center are quite a bit higher than those you face with your email inbox. What process exists to decommission items on time? What process exists for me to receive something, with full notifications? How can I let someone know when I can receive and act on something? Then ask yourself: why, again, is your organization still treating its data center like an email inbox?
Every day I talk to company after company about DCIM topics. Even those of you who have been closely following the DCIM marketplace understand that it is a rapidly changing and progressing new category of management software, and that the discussions you may have had just 60 or 90 days ago may have changed. New discussions may have acquired a new flavor, new constituents, or even a re-prioritized set of needs.
I find two things in common today, regardless of who I am talking to. First and foremost, most IT organizations NOW understand where they REALLY are with regard to physical infrastructure management solutions. They know the types of documentation, spreadsheets, and other materials available to describe their data center. They also know there are huge gaps between what they actually know about their physical infrastructure and what they need to know to make progress in optimizing it. Luckily, they are also becoming familiar with many of the concepts of DCIM and understand that DCIM starts with an accurate, spatially-aware representation of all of their computing assets in highly granular fashion – and it doesn’t stop with the pretty pictures that represent that model! It’s about long-term asset change management, business analytics, intelligent placement scenarios, and overall data center resource prediction.
Secondly, the key players on their teams are getting fairly articulate about these longer-term business management goals for the data center, and have recently gained a large amount of executive-level oversight. Whereas the data center used to be a service-delivery-oriented ‘black box,’ it has become a highly visible and strategic corporate asset. Combining this knowledge, they understand that DCIM is the means to reach their next level of proactive management. DCIM is a strategy, not a tactic.
Now, we all know that the shortest path between any two points is a straight line, but in the case of DCIM it is not so apparent how to draw that line. There are simply too many vendors with self-proclaimed DCIM offerings pulling customers left and right. Luckily for us all, the natural selection process is working. Some of these ‘DCIM’ startups are vanishing. Some are re-tooling and re-focusing their products and messages toward more tangible value. Some are simply getting back to their knitting and stating more clearly what they actually DO, without artificially connecting their value to DCIM.
In the end, the DCIM madness is beginning to subside. Sure, there is still a lot of noise, but its volume is becoming bearable. Do your part: ask your DCIM vendor candidates the HARD questions, ask to SEE what they do, challenge their answers, and be very specific. Here at Nlyte, we are proud of the last 10 years of development in our DCIM-specific solution. We even hold a patent on the intelligent placement of assets based on data center conditions. Ask NLYTE to demonstrate what we say!
As an industry, we are almost out of the rapids… I can see smooth sailing ahead!
This past week, The New York Times published an article discussing the impact of data centers on the planet. It correctly pointed out that converging digital-age progress (like smartphones and tablets) begets more digital media users, which requires more information accessibility and more data center capacity. It goes on to say that this additional capacity consumes more energy, land, and water, and puts larger amounts of carbon into the air.
Perhaps missed, though, is the BIG point: the challenge we are facing is NOT the ABSOLUTE AMOUNT of energy or other resources being consumed; the real opportunity is efficiency on a per-transaction basis. The Green Grid started the efficiency measurement game five years ago with PUE, but hasn’t stopped there. It has since published both carbon and water metrics (CUE and WUE) and is in the final stages of releasing the DCeP metric, which will focus on transactional overhead – the cost to process data, not just to run equipment blindly. Users in the digital age consume information, not equipment. How that information is provided to users is part of the process of innovation at the fingertips of IT professionals today. Per-unit-of-work efficiency is really the topic that matters.
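The per-unit-of-work idea behind DCeP is simple to state: useful work produced divided by total energy consumed, where “useful work” is whatever unit the business cares about (transactions, queries, page views). A sketch with illustrative numbers:

```python
def dcep(useful_work_units, total_energy_kwh):
    """Data Center energy Productivity: useful work per unit of energy.

    The definition of "useful work" is left to the operator --
    transactions served, queries answered, and so on.
    """
    return useful_work_units / total_energy_kwh

# Two sites can have identical facility-level efficiency (PUE) yet
# differ five-fold in how much work each kilowatt-hour actually buys:
site_a = dcep(useful_work_units=10_000_000, total_energy_kwh=5_000)  # 2000 tx/kWh
site_b = dcep(useful_work_units=2_000_000, total_energy_kwh=5_000)   # 400 tx/kWh
```

This is exactly the distinction the article misses: site B isn’t “greener” just because its total draw matches site A’s.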
Nlyte has ALWAYS believed that efficiency in the data center is about business processes applied to resource planning over long periods of time. Nlyte’s DCIM software allows full lifecycle management of resources, all the way from applications and virtualization down through equipment hardware and power chains. It’s all about proactive resource management, and Nlyte has focused on this capacity management task since day one. The New York Times article successfully raises awareness of the impact of the converging digital era on the planet, and hopefully helps accelerate the adoption of mission-critical, bet-your-business-style ERP applications for this critical resource management task. DCIM is the ERP for the data center! That’s what we do (and we have hundreds of the biggest names on this planet using our solutions every day to do it!)
In light of the Amazon cloud outages last month, one instantly has to think about what Amazon, or really any cloud provider, should be doing to mitigate such risks. In general, cloud service providers tend to focus on pure IT management and virtualization. But what about the actual infrastructure itself: how is it being managed, and in the event of a catastrophic failure, how should it be brought back online in the most expeditious manner? An area where cloud service providers may very well be at risk is the management of their infrastructure – that is, do they have proper processes, procedures, and methodologies in place?
In the most recent Amazon outage, according to the Wall Street Journal, “Generators kicked in but failed to stabilize the load. Power went off to part of the data center. Then a software bug delayed recovery.” No doubt a Data Center Infrastructure Management (DCIM) solution could have helped mitigate this. DCIM can track the power chain and can also remind IT and Facilities teams to test the simulation of a generator failure beforehand, under more controlled conditions when traffic is known to be low, thus identifying and eliminating the risk ahead of time. Redundancy may be mandated by the company, but only good process management, the kind of management that a DCIM solution can provide (among other capabilities), can enforce it.
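Tracking the power chain, as described above, is what lets a DCIM tool flag single points of failure before a generator test ever runs. A minimal sketch of the idea – the rack names and the flat feed model are invented for illustration, not Nlyte’s actual schema:

```python
# Hypothetical power-chain model: each rack maps to the list of
# upstream power sources feeding it.
power_feeds = {
    "rack-101": ["UPS-A", "UPS-B"],  # dual-corded: survives the loss of one UPS
    "rack-102": ["UPS-A"],           # single-corded: one failure takes it down
}

def single_points_of_failure(feeds):
    """Return racks whose entire load depends on a single upstream source."""
    return [rack for rack, sources in feeds.items() if len(set(sources)) < 2]

at_risk = single_points_of_failure(power_feeds)  # ["rack-102"]
```

Running a check like this continuously, and scheduling controlled failover tests for anything it flags, is the kind of process enforcement the paragraph above is pointing at.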
There has been too much focus lately on DCIM as a monitoring mechanism only. The problem is that real-time monitoring comes too late to solve problems like the one that happened at Amazon. For DCIM to provide real value, it has to be more about establishing good processes and procedures that help run an efficient, lower-cost, and lower-risk data center by avoiding problems before they happen.