VMware’s annual extravaganza, VMworld, was held a couple of weeks ago in San Francisco and has grown to more than 20,000 attendees. This is a big deal for the industry, and VMware has a right to be proud of what it has done. Ten years ago the company seeded the industry with commercially viable dynamic capacity, and those approaches have since been adopted, to varying degrees, by enterprises worldwide.
VMworld attendees ranged from CIOs and CFOs to system administrators and programmers. This diversity of job function is no surprise, since the topics under discussion were just as broad: business and financial planning for IT, disaster recovery planning, optimization strategies for dynamic capacity, and everything in between. The conference also sported a huge trade-show floor, chock full of vendors who had tuned their products and messages to embrace the world of virtualization. Not just virtualization of servers, but virtualization of storage and networks, and even the hybridization of computing to include the cloud. The whole event had a core theme of modular capacity and how to add capacity in realistic building-block increments. Some 250 vendors packed the trade-show floor and spoke directly to the audience’s capacity needs. It was impressive.
Amongst all of the exciting vendor demonstrations and discussions, one vendor stood out by carrying a message not previously seen at a conference like this. It was (wait for it)… Nlyte Software. Nlyte was telling a story about the coordination and placement of virtualized load upon physical resources. Nlyte was the ONLY vendor on the show floor telling this often-overlooked story, and judging by the traffic at their booth, it was very well received. As a pioneer in the DCIM category, they are masters at managing the physical layout and connectivity of everything in the data center. They manage the lifecycles of devices from the time they come into service all the way through decommissioning. Nlyte’s story as it applies to the virtualized world is straightforward and not contrived: as workloads are moved around dynamically, it is critical to coordinate that movement with the availability of the underlying resources needed to power and cool those devices. The technical reason this is so critical centers on the power-consumption curves of devices as work is applied to them. In simple terms, the more work being performed, the more power consumed and the more heat generated. Allow enough of this automated workload movement to concentrate in the same part of a data center and it is very reasonable to expect catastrophic processing failures due to inadequate power or limited cooling capacity. This is where Nlyte comes in. Nlyte coordinates the dynamic placement of virtualized Guests with the underlying resources needed to power and cool their Hosts, and does so in real time.
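The power-versus-work relationship described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration — the names, linear power model, and budget thresholds are all assumptions for clarity, not Nlyte’s actual implementation or API — of the kind of check that placement coordination implies: before a Guest migrates, verify that the destination rack still has power and cooling headroom for the Host’s projected draw.

```python
# Hypothetical sketch: gate a workload migration on the destination rack's
# remaining power and cooling capacity. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Rack:
    power_budget_w: float      # usable power capacity for this rack
    cooling_budget_w: float    # heat the row's cooling can remove
    power_draw_w: float = 0.0  # current aggregate draw

def host_power_w(idle_w: float, peak_w: float, utilization: float) -> float:
    """Approximate a host's draw with a linear power curve:
    more work performed -> more power consumed (and more heat generated)."""
    return idle_w + (peak_w - idle_w) * utilization

def can_place(rack: Rack, added_w: float, headroom: float = 0.9) -> bool:
    """Allow the migration only if the rack stays inside both its
    power and cooling budgets, with a safety margin."""
    projected = rack.power_draw_w + added_w
    return (projected <= rack.power_budget_w * headroom
            and projected <= rack.cooling_budget_w * headroom)

rack = Rack(power_budget_w=10_000, cooling_budget_w=9_500, power_draw_w=8_000)

# Extra draw if a host goes from 30% to 80% utilization after taking on load:
delta = (host_power_w(idle_w=150, peak_w=450, utilization=0.8)
         - host_power_w(idle_w=150, peak_w=450, utilization=0.3))

print(can_place(rack, delta))   # True: this increment fits the budgets
print(can_place(rack, 1_500))   # False: a larger spike would exceed them
```

The point of the sketch is the failure mode the article describes: a scheduler that optimizes only CPU and memory would happily accept both placements, while a check that models power and cooling rejects the second one.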
Virtualization is clearly not only here to stay, but has now been applied to storage and networks as well. All of this abstraction is focused on decoupling the hardware from the software, which is a great thing for application developers and service delivery, but it creates a critical requirement to actively manage the physical layer as a resource of the processing itself, much like CPU cycles, memory or disk space. Gone are the good ol’ days when data centers were over-built, over-provisioned and over-cooled. In those days, everything physical seemed ‘infinite’ in nature, so no action was needed.
Today, those resources carry non-trivial costs, so data center ‘right-sizing’ is standard practice. As data centers are right-sized, resources must be coordinated with workloads. Abstraction is a great thing, as long as the fundamental foundations are not forgotten.