As IT shops deploy an increasing number of virtual servers, the challenge of managing both those virtual servers and the physical servers alongside them grows. Here, we explore solutions with Jerry Carter, CTO of Likewise Software, who explains the role of a well-conceived server management strategy in cloud computing, how to avoid virtual server sprawl, and tools for managing physical and virtual servers:
Q. To have a successful cloud strategy, how important is it that an IT shop manages both its physical and virtual servers and the associated storage systems well?
Jerry Carter: It’s critical in order to scale the IT infrastructure to be able to meet the business needs of the company. The concept of virtualizing the machine is about moving up the stack. You virtualize the machine and then you start running multiple VMs on one piece of hardware. Then, you virtualize the storage, but you must isolate the management of the storage resources from the application that is using that storage. If you can’t abstract those resources from the application, then you end up managing pockets of data. When you are moving from physical environments to virtual ones, you must have a solid data management strategy in mind; otherwise, your problem gets worse and management costs rise. You might end up cutting power consumption and gaining space through consolidation, but you might increase the number of images you have to manage.
Q. How big of a problem is managing just the storage aspect of this?
J.C.: At VMworld in August, a speaker asked, ‘When you have performance problems with your VM, how many would say that over 75 percent of the time it’s a problem involving storage?’ A huge number of people raised their hands. If you just ignore the whole storage capacity and provisioning problem, then how can you manage policy to ensure you are securely protecting the data that represents the business side of the company? You must be able to apply consistent policy across the entire system; otherwise, you are managing independent pockets of storage.
Q. In thinking about managing physical and virtual servers, should IT shops start with private clouds before attempting public clouds?
J.C.: The reason people start with private clouds has to do with their level of confidence. They have to see a solution that provides a level of security for information going outside their network; not every critical application knows how to talk to the cloud. So a private cloud strategy gives you the ability to gateway between the protocols those applications can actually understand and the back-end cloud storage APIs. For instance, look at the announcement Microsoft made involving the improvements to its Server Message Block (SMB) 2.2 for Windows Server 8. Historically, server-based applications like SQL Server and IIS have required block storage that is mounted through iSCSI for the local apps to work. So what Microsoft has done is position SMB in the cloud as a competitor to block storage for those server-based applications.
Q. Is it important that Windows Server and Microsoft’s cloud strategy succeed in broadening the appeal of managing physical and virtual servers?
J.C.: If you look at most of the application workloads on virtualized infrastructures, something like 90 percent of them are running on VMware servers carrying Microsoft workloads. VMware is trying to virtualize the operating system and the hypervisor. Microsoft’s position is to enable those application workloads because those are the things really driving business value. I think it is very important that Windows Server 8 succeeds, but I think the more critical aspect is its support for SMB 2.2.
Q. What is the state of software tools for managing physical and virtual servers right now?
J.C.: I think the maturity and growth of virtual machine management have been tremendous. When you look at things like vSphere and vCenter from VMware that allow you to manage the hypervisor and the individual VMs, deploy them rapidly and then spin them up or down on an as-needed basis, it is impressive. But the problem that remains is in treating VMs as independent entities. What business users really care about is what is inside the VM, but the hypervisor doesn’t really deploy policy to the guest. There are some clever people doing interesting things, but generally it’s still a broad set of technologies for dealing with heterogeneous networks. I don’t think [management tools] have advanced as fast as the VM management infrastructure has.
Q. With so many more virtual servers being implemented than physical servers, how do you manage virtual server sprawl?
J.C.: First, people have to realize that it is a problem. Not only do most people have large numbers of VMs deployed that are no longer in use, but they don’t have a handle on what is actually inside of those VMs. There could be data that was created inside a particular VM for development purposes. But you have all this other data used to build a critical document just sitting out on a VM. I think the security implications are huge. If you are using individual VMs for storage, then you can have huge amounts of business-critical, sensitive data sitting on these underutilized VMs. If you aren’t doing something to manage the security, then you are vulnerable to data leakage. Some managers think, ‘All my users are storing data on the central file server, so this isn’t a problem.’ But inadvertently, users copy data locally for temporary work. The best way to address it is to have a plan in place prior to allocating new VMs, where you can apply a consistent authentication and security policy. That way, users know what they are allowed to do within a particular device.
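The "plan in place prior to allocating new VMs" that Carter describes can be sketched as a simple pre-provisioning policy gate. This is a hypothetical illustration, not a real product API; the field names and the specific rules (named owner, encrypted storage, directory-group scoping) are assumptions chosen to reflect the concerns raised above:

```python
# Hypothetical sketch of a pre-provisioning policy gate. The policy fields
# and rules below are illustrative assumptions, not any vendor's real API.

from dataclasses import dataclass, field


@dataclass
class VmRequest:
    name: str
    owner: str                      # should map to a central identity, not a local account
    encrypted_storage: bool = False
    allowed_groups: list = field(default_factory=list)


def validate_request(req: VmRequest) -> list:
    """Apply one consistent policy to every request, so each new VM
    starts out compliant instead of being cleaned up after the fact."""
    violations = []
    if not req.owner:
        violations.append("VM must have a named owner for accountability")
    if not req.encrypted_storage:
        violations.append("storage must be encrypted to limit data leakage")
    if not req.allowed_groups:
        violations.append("access must be scoped to at least one directory group")
    return violations
```

Running every allocation through one gate like this is what makes the authentication and security policy consistent: a request such as `VmRequest(name="app-01", owner="alice", encrypted_storage=True, allowed_groups=["eng"])` passes, while an anonymous, unencrypted VM is rejected before it can become sprawl.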
Q. What adjustments must IT make for things like data protection as they rapidly provision more physical and virtual servers?
J.C.: Data protection can’t exist without data centralization. You can’t manage individual pockets of storage or individual VMs themselves. Another issue IT has is that users don’t start removing unnecessary files from disks until the disks start to get full. But with storage capacities ever increasing, disks never fill up, so there is never any reason to go back in and clean them up. So you end up with the same kind of problem that virtual machine sprawl causes.
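Because disks no longer fill up on their own, the cleanup Carter mentions has to be triggered deliberately. A minimal sketch, assuming a simple age threshold is an acceptable first-pass definition of "unnecessary" (the 180-day cutoff is an arbitrary illustration):

```python
# Minimal sketch of a stale-data sweep: flag files untouched for N days so
# large disks still get reviewed even though they never fill up.
# The 180-day default is an assumption for illustration only.

import os
import time


def stale_files(root: str, max_age_days: int = 180) -> list:
    """Return paths under `root` whose modification time is older
    than `max_age_days` days."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale
```

In practice the output of a sweep like this would feed a review or archiving workflow rather than automatic deletion, since age alone can't distinguish abandoned scratch data from rarely touched but business-critical files.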
Q. And does unstructured data rear its ugly head in this scenario too?
J.C.: Yes. I think the biggest problem is the amount of unstructured data that exists out there; people don’t really have a handle on that. About 75 percent of data out there is unstructured. The question I always pose to users is: Given all of the data you have in your network, what is the one thing you most want to know about it? The answer is usually, ‘Well, I’d like to know I can correlate the locality of my data to prove to someone I am meeting their SLAs,’ or, ‘I want to know when people are accessing data outside their normal usage pattern.’ They need to understand data about their existing data and what access people actually have.
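The second answer Carter quotes, spotting access outside a normal usage pattern, can be sketched very simply: learn each user's typical access hours from historical logs, then flag events at hours that user has never been seen at. The log format and the hour-of-day feature are assumptions; a real system would use richer signals (location, volume, file sensitivity):

```python
# Rough sketch of "access outside the normal usage pattern": build a
# per-user profile of observed access hours, then flag events that fall
# outside it. Input format and the hour-only feature are assumptions.

from collections import defaultdict


def learn_hours(history):
    """history: iterable of (user, hour) tuples from past access logs.
    Returns a dict mapping each user to the set of hours seen."""
    seen = defaultdict(set)
    for user, hour in history:
        seen[user].add(hour)
    return seen


def unusual_accesses(profile, events):
    """Flag (user, hour) events at hours never seen for that user;
    unknown users have an empty profile, so all their events are flagged."""
    return [(u, h) for u, h in events if h not in profile.get(u, set())]
```

For example, a user who has only ever accessed data during business hours would be flagged for a 3 a.m. read, which is exactly the kind of "data about the data" Carter says IT shops lack.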
Q. What is a common mistake IT people make as they grow their physical and virtual server infrastructure up and out?
J.C.: Just as it’s impossible to manage individual pockets of applications, it is impossible to manage individual containers of storage within a single, large storage infrastructure. You must have something that addresses the overall problem. This is what people fail to understand when they move from a smaller infrastructure to a massive one. With this machine and storage sprawl, any cracks in the existing management techniques, software or policies become chasms as the size of the problem increases from something small to very large.
Q. Is IT looking more seriously at open-source solutions for managing physical and virtual servers?
J.C.: Open source will continue to play a predominant role in a lot of IT shops. But one of the real challenges is the amount of investment made in developing expertise on individual solutions. Another is finding people with expertise when you have turnover. There is a lot of great open-source technology available, but it is not always in product form. People become experts in the technology, but it can be risky to rely on technology expertise rather than product expertise, which can be replicated. The question is: Can it address the whole problem, or is fixing individual pain points a better way to go? I think the jury is still out on that.