Aberdeen Group Analyst Offers Tips on Protecting Virtualized Environments

There’s a lot riding on server virtualization, and the risk of disruption only increases as IT shops deploy more virtualized applications on fewer physical machines. The loss of a single box can bring down multiple applications, so organizations that hope to preserve application availability need to protect the environments in which those apps run. Dick Csaplar, senior research analyst for storage and virtualization at Aberdeen Group, has been looking into these issues. He recently discussed steps enterprises can take to address downtime concerns.

Q: What got you interested in the topic of protecting virtualized environments?

Dick Csaplar: Last year, we found through our survey process that we had passed the milestone where more than 50 percent of applications are virtualized. You now have to start thinking of virtualized applications as the rule, not the exception. With virtualization, you have greater server consolidation and density, so a single server’s worth of downtime impacts more applications than in the past.

The other thing that was of interest: I was at Digital Equipment Corp. back in the day, and the PDP-8 was kind of the first corporate-affordable minicomputer. That led to the growth of the corporate data center concept even for midsized corporations, and the concept of downtime was born at that same moment. Today, more business processes are computer-based, so downtime costs companies more than ever. Protecting against computer downtime has been, and will remain, a major focus of IT for the foreseeable future. Things happen, and you have to be prepared.

Q: Are there steps organizations should take as they go about protecting virtualized settings?

D.C.: The first thing you have to think about: it’s not one-size-fits-all. You don’t just say, “I’m going to put all applications on a fault-tolerant server to get maximum uptime.” Quite frankly, that’s expensive and unnecessary. You want to tier your applications: which ones really require high levels of uptime, and which ones, if they are down for half a day, are not going to kill the organization.

The highest-level tier is the absolutely mission-critical applications, such as email, databases and disaster-recovery applications. It is worth investing in a high level of uptime there because when the email system goes down, for example, people stop working. But with test and development environments, even if you lose them, there’s no data in there that is really corporate-critical. Most organizations (about 60 percent, our research shows) don’t bother to protect their test and dev applications.

And there’s a middle tier of apps where you’ve got to do the math: Is downtime protection worth the investment?
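
To make the tiering idea concrete, here is a minimal sketch in Python; the applications, tier assignments and recovery-time targets are illustrative assumptions, not figures from Aberdeen’s research.

    # Illustrative tiering sketch: the apps, tiers and recovery-time objectives (RTOs)
    # below are assumptions for the example, not figures from the interview.
    APP_TIERS = {
        "email":         {"tier": 1, "rto_hours": 0.5},  # mission-critical
        "erp-database":  {"tier": 1, "rto_hours": 0.5},
        "intranet-wiki": {"tier": 2, "rto_hours": 8},    # middle tier: do the cost math
        "test-dev":      {"tier": 3, "rto_hours": 72},   # often left unprotected
    }

    PROTECTION_BY_TIER = {
        1: "fault-tolerant server or replication",
        2: "clustering or image-based backup, if the math works",
        3: "basic backup, or none at all",
    }

    for app, info in sorted(APP_TIERS.items(), key=lambda kv: kv[1]["tier"]):
        print(f"{app}: tier {info['tier']}, RTO {info['rto_hours']}h "
              f"-> {PROTECTION_BY_TIER[info['tier']]}")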

Secondly, you need to have an idea of the cost of downtime. That sets the right level of investment for your disaster recovery and backup strategy. If the cost of downtime is measured in hundreds of dollars, obviously you can’t justify spending tens of thousands of dollars to keep applications up. If it is measured in tens of thousands of dollars, you should invest a relatively large amount of money in your downtime protection.

The cost of downtime varies by application. Ballpark it. Get it to an order of magnitude. Get a sense of what those things cost, and that will guide you to the appropriate level of protection. You are right-sizing your solution.
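
To show what that ballpark math might look like, here is a back-of-the-envelope sketch; the dollar figures and hours are hypothetical, not numbers from the interview.

    # Hypothetical right-sizing arithmetic: all figures are made-up examples.
    cost_per_downtime_hour = 20_000      # what an hour of outage costs this application
    expected_outage_hours_per_year = 6   # estimated exposure without extra protection
    annual_downtime_cost = cost_per_downtime_hour * expected_outage_hours_per_year

    annual_protection_cost = 50_000      # the clustering or fault-tolerant option considered

    print(f"Expected annual downtime cost: ${annual_downtime_cost:,}")
    print(f"Annual cost of protection:     ${annual_protection_cost:,}")
    if annual_downtime_cost > annual_protection_cost:
        print("Protection pays for itself; invest at this tier.")
    else:
        print("Hard to justify; a cheaper level of protection fits this tier.")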

Q: What are the technology alternatives?

D.C.: On the software side, the hypervisors themselves have embedded high-availability technology. It monitors applications and, if it detects that an app is not performing, it will restart the application on a new server. It’s very cheap. But you do lose the data in transit; any data in that application is gone.
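
Conceptually, that built-in high availability is a monitor-and-restart loop. The sketch below is schematic only; check_heartbeat and restart_on_new_server stand in for what the hypervisor does internally, and it shows why in-flight data is lost: the application is simply restarted elsewhere, not resumed.

    import time

    def check_heartbeat(vm: str) -> bool:
        """Stand-in for the hypervisor's health check of a guest or application."""
        return True  # placeholder: a real check watches heartbeats or guest-agent pings

    def restart_on_new_server(vm: str) -> None:
        """Stand-in for HA failover: boot the VM fresh on a surviving host.
        Anything that lived only in the failed VM's memory is lost."""
        print(f"restarting {vm} on another host")

    def ha_monitor(vms: list[str], interval_s: float = 10.0) -> None:
        while True:
            for vm in vms:
                if not check_heartbeat(vm):
                    restart_on_new_server(vm)
            time.sleep(interval_s)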

Next, you have software clusters. You have to pay for those, but they provide better protection in that the data gets replicated to other locations. And then there is the whole area of fault-tolerant hardware: fault-tolerant servers.

Q: Can enterprises reuse their existing application protection technology in a virtualized environment, or do they have to use technology geared toward virtualization?

D.C.: That depends on the level of protection you want and what you currently have. One of the best technologies for application protection is image-based backup. It takes a picture of the entire stack — the hypervisor, the application and the data. That image can be restarted on any new server.

Image-based backup tends to work better in highly virtualized environments. That doesn’t mean that the more traditional backup and recovery tools don’t work. They can, but you have to specifically test them out.
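
As a rough sketch of the idea, capturing an image means saving the VM’s definition and its disk (guest OS, application and data) together so the whole stack can be brought up on another host. The paths and the libvirt/KVM-style virsh commands here are assumptions for illustration, not the specific products Aberdeen covered.

    import shutil
    import subprocess
    from pathlib import Path

    # Assumed names and paths for the sketch; real tools and environments differ.
    VM_NAME = "mail-server"
    DISK_IMAGE = Path("/var/lib/libvirt/images/mail-server.qcow2")
    BACKUP_DIR = Path("/backups/mail-server")

    def image_backup() -> None:
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        # Save the VM's definition (CPU, memory, devices)...
        xml = subprocess.run(["virsh", "dumpxml", VM_NAME],
                             capture_output=True, text=True, check=True).stdout
        (BACKUP_DIR / f"{VM_NAME}.xml").write_text(xml)
        # ...and a copy of its disk image. In practice the VM is snapshotted or
        # quiesced first so the copy is consistent.
        shutil.copy2(DISK_IMAGE, BACKUP_DIR / DISK_IMAGE.name)

    # Restore is the reverse: define the VM from the saved XML on any compatible
    # host ("virsh define") and point it at the copied disk image.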

And that would be another thing to consider: having a formal testing policy. About half of the organizations we’ve surveyed don’t have a regular testing program for backup and recovery. They back up all of this stuff, but they don’t know whether it would work in an emergency. There has to be a formal testing process, at least quarterly.
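
A formal test doesn’t need to be elaborate. The sketch below is illustrative only; restore_latest_backup and run_smoke_test are hypothetical stand-ins for whatever restore tooling and health checks an organization already has.

    import datetime
    import logging

    logging.basicConfig(level=logging.INFO)

    def restore_latest_backup(app: str, target_host: str) -> None:
        """Hypothetical stand-in: restore the newest backup of `app` to an isolated test host."""
        ...

    def run_smoke_test(app: str, target_host: str) -> bool:
        """Hypothetical stand-in: confirm the restored app starts and serves real data."""
        return True

    def quarterly_restore_test(apps: list[str], target_host: str = "dr-test-host") -> None:
        for app in apps:
            restore_latest_backup(app, target_host)
            ok = run_smoke_test(app, target_host)
            logging.info("%s | restore test for %s: %s",
                         datetime.date.today(), app, "PASS" if ok else "FAIL")

    quarterly_restore_test(["email", "erp-database"])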

Q: Any other thoughts on protecting virtualized environments?

D.C.: We are talking more about business processes here than we are talking about tools. We’re talking about best practices for keeping apps up and running, and most of them have to do with good data protection processes.

A lot more needs to be done than just throwing technology at it. You have to do your homework and you really have to know your business.

Photo: @iStockphoto.com/Kohlerphoto

by John Moore