Changing the Ways Data Centers Are Built

Enterprise IT departments have spent the past several years flocking to the cloud largely for the flexibility it provides, in terms of both technology and economics.

In the case of public clouds, where a third party owns and operates the IT infrastructure, that flexibility includes shifting CapEx and OpEx to the cloud provider and gaining access to cutting-edge IT products the enterprise couldn't afford on its own.

With private clouds, the enterprise owns the data center(s) and other infrastructure for reasons that typically include concerns about security and performance risks, which can arise when a third party hosts everything. Although the enterprise bears the CapEx and OpEx of owning the private cloud infrastructure, it retains the flexibility to quickly and cost-effectively shift IT resources across different business units to meet each one’s needs. For example, instead of providing the accounts receivable department with dedicated IT resources that lie mostly idle outside of the monthly billing cycle, that equipment can be shifted to other business units during peak periods. That increases ROI.
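
To see why pooling raises ROI, consider a rough back-of-the-envelope comparison. All of the figures below (server counts, peak lengths, and the assumption that business units peak at different times) are hypothetical, chosen only to make the arithmetic concrete:

```python
# Back-of-the-envelope comparison: dedicated vs. pooled servers.
# All figures are hypothetical, for illustration only.

DEDICATED_SERVERS = 4        # servers reserved per business unit
BUSINESS_UNITS = 3
PEAK_DAYS_PER_MONTH = 5      # e.g., the monthly billing cycle
DAYS_PER_MONTH = 30

# Dedicated model: each unit's servers sit idle outside its own peak.
dedicated_total = DEDICATED_SERVERS * BUSINESS_UNITS
dedicated_utilization = PEAK_DAYS_PER_MONTH / DAYS_PER_MONTH  # ~17%

# Pooled (private cloud) model: units with staggered peaks share
# one pool sized for a single peak load.
pooled_total = DEDICATED_SERVERS
pooled_utilization = (BUSINESS_UNITS * PEAK_DAYS_PER_MONTH) / DAYS_PER_MONTH  # ~50%

print(f"Dedicated: {dedicated_total} servers at {dedicated_utilization:.0%} utilization")
print(f"Pooled:    {pooled_total} servers at {pooled_utilization:.0%} utilization")
```

The same hardware serves three times as many peak workloads, which is where the ROI improvement comes from.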

A related trend is the rise of data center appliances: servers preloaded with applications that meet an enterprise's unique business requirements. With data center appliances, either the hardware vendor or the enterprise can manage that infrastructure. That business model frees the enterprise IT staff for other activities, or, in the case of small businesses, eliminates the need for an in-house IT staff altogether.

The rise of clouds and appliances is reshaping the data center. Here’s how.

Colocating for Access to a Community
When an automaker builds a factory in a state where it has no facilities, its parts suppliers typically add their own plants literally next door. The logic: It’s cheaper and easier to meet just-in-time supply-chain requirements when you’re across the street than across the country.

Data center operators and their customers have started borrowing that strategy. For example, financial services companies are increasingly colocating in the same data centers. That approach avoids the latency that would occur if they were in separate facilities scattered around a country or the world.

Although that may not sound like a major benefit, bear in mind that financial services is an industry where a network bottleneck can cost millions if it means a stock trade is executed after the price has increased. Data center colocation also saves money by minimizing the amount of data shipped over wide-area connections, and staying local means fewer security risks.
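
A quick propagation-delay calculation shows the scale of the difference. The sketch below assumes light travels through fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum) and ignores switching and queuing delays; the distances are illustrative:

```python
# Rough propagation-delay comparison: in-building cross-connect vs. WAN link.
# Assumes signals travel through fiber at about 200,000 km/s; distances are
# illustrative and switching/queuing delays are ignored.

FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fiber path."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1_000

print(f"Cross-connect (0.1 km): {round_trip_ms(0.1):.4f} ms")          # ~0.001 ms
print(f"New York-Chicago (~1,150 km): {round_trip_ms(1150):.1f} ms")   # ~11.5 ms
```

Physics alone puts the cross-country path four orders of magnitude behind the in-building cross-connect, before any network equipment adds its own delay.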

Content creators and distributors are looking for similar opportunities, including sharing data centers with Internet exchanges to avoid the cost and latency of shipping their bits over wide-area connections. "They're colocating their infrastructure," says Ali Moinuddin, director of marketing at Interxion, a data center operator whose 1,200 customers include the world's five largest content delivery networks.

Whatever the industry, the business drivers are similar: “They want to reside in the same data center so they can cross-connect within the data center and share data and services — without having to leave the data center,” says Moinuddin.

Virtualization: 5 Times the Performance, 66 Percent Savings
The downside to colocation is that organizations can wind up with their IT assets and operations concentrated in one or just a few sites. That design runs counter to the post-Sept. 11 strategy of spreading IT over a wide geographic area to minimize the impact of a major disaster.

CIOs and IT managers also have to ensure that their colocation and geographic-dispersion strategies don't violate any laws. For example, some governments restrict certain types of data from being stored or transmitted outside their borders.

To avoid running afoul of such laws, enterprises should have SLAs that dictate exactly how and where their cloud-based operations can be switched if a data center goes down. “That’s something that has to be very carefully managed,” says Aileen Smith, senior vice president of collaboration at the TM Forum, an industry organization for IT service providers.
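
In practice, such an SLA boils down to a placement rule the orchestration layer can enforce. Here's a minimal Python sketch of that idea; the workload names, site codes and policy structure are all hypothetical:

```python
# Minimal sketch of a data-residency check for failover targets.
# Workload names, site codes and the policy structure are hypothetical;
# a real SLA would encode these rules in far more detail.

RESIDENCY_POLICY = {
    "customer_records": {"DE", "FR", "NL"},                 # EU-only data, for example
    "public_web_assets": {"DE", "FR", "NL", "US", "SG"},    # no restriction in practice
}

def allowed_failover_sites(workload: str, candidate_sites: dict[str, str]) -> list[str]:
    """Return the candidate sites whose country satisfies the workload's policy."""
    permitted = RESIDENCY_POLICY[workload]
    return [site for site, country in candidate_sites.items() if country in permitted]

sites = {"fra-1": "DE", "ams-2": "NL", "ash-1": "US"}
print(allowed_failover_sites("customer_records", sites))    # ['fra-1', 'ams-2']
```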

Redundancy and resiliency are also driving the trend toward virtualization, in which Fibre Channel storage, I/O and a host of other functions are decoupled from the underlying hardware and delivered as pooled, cloud-based services. This strategy can be more cost-effective than the two historical options: building a multimillion-dollar data center packed with state-of-the-art hardware and software designed to minimize failure; or maintaining an identical backup site that's used only in emergencies, meaning those expensive assets sit idle most of the time instead of driving revenue.

Instead, virtualization spreads server and application resources over multiple data centers. One way it reduces capital expenses is by allowing the enterprise or its data center provider to use less expensive hardware and software. This strategy doesn't compromise resiliency: if one or more parts of a data center go down, operations can switch to another site for the duration. At the same time, there's no backup data center stocked with nonperforming assets.
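
In its simplest form, that switchover amounts to a health check plus a site-selection rule. The Python sketch below illustrates the idea; the hostnames are hypothetical, and a production system would use far richer health signals than a single TCP connection:

```python
# Simplified sketch of the failover logic described above: if a site stops
# responding, shift work to the next healthy site. Hostnames are hypothetical.

import socket

def site_is_up(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """Treat a successful TCP connection as a (crude) health signal."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ordered by preference: the normal primary comes first.
SITES = ["dc1.example.com", "dc2.example.com", "dc3.example.com"]

def pick_active_site(sites: list[str]) -> str | None:
    """Return the first healthy site, preferring the usual primary."""
    for site in sites:
        if site_is_up(site):
            return site
    return None  # every site is down; alert operations

print(pick_active_site(SITES))
```

A real deployment would combine this with the residency check above, so that failover never lands a workload in a site its SLA forbids.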

How much can enterprises reasonably expect to save from virtualization? F5 Networks estimates virtualization can yield five times the performance at one-third the cost. "If I can put 10 of those low-cost servers in a virtualized resource pool, I've got five to 10 times the power of the most powerful midrange system at a third of the cost," says Erik Giesa, vice president of product management and marketing for F5 Networks. "By virtualizing my servers, I not only realize a tremendous cost savings, but I have a much better architecture for availability and ongoing maintenance. If I need to bring one server down, it doesn't impact the others, and I can gracefully add in and take out systems to support my underlying architecture."
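
Only the ratio, five to 10 times the power at a third of the cost, comes from F5; the dollar and performance figures in the following sketch are assumptions added purely to make the arithmetic visible:

```python
# The consolidation math behind the F5 estimate, with illustrative numbers.
# Only the "5-10x the power at a third of the cost" ratio comes from F5;
# the prices and performance figures below are assumptions.

MIDRANGE_COST = 150_000    # hypothetical midrange system price
MIDRANGE_PERF = 1.0        # normalize its performance to 1.0

COMMODITY_COST = 5_000     # hypothetical low-cost server price
COMMODITY_PERF = 0.75      # fraction of the midrange system's performance
POOL_SIZE = 10

pool_cost = POOL_SIZE * COMMODITY_COST    # $50,000, one-third the cost
pool_perf = POOL_SIZE * COMMODITY_PERF    # 7.5x the performance

print(f"Pool cost: ${pool_cost:,} ({pool_cost / MIDRANGE_COST:.0%} of midrange)")
print(f"Pool performance: {pool_perf / MIDRANGE_PERF:.1f}x the midrange system")
```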

It’s not your father’s data center anymore.


by Tim Kridel