Cloud and virtualization are taxing data centers

The data center is transforming — modernizing to meet business demand as technologies such as software-defined architecture, cloud and virtualization take hold. This modernization is also being driven by CIOs and IT executives taking a hard look at their computing needs and asking whether they want to own and/or operate data centers any longer, industry experts say.

It’s a big issue. According to new research by Synergy Research Group, spending on enterprise data center equipment is static while spending on service provider data centers is booming. And Gartner predicts that by 2020, the programmatic capabilities of a software-defined data center, particularly its application programming interfaces and command-line interfaces, will be a requirement for 75% of Global 2000 organizations seeking to adopt modern IT approaches such as DevOps.

As the workloads companies are trying to support become increasingly linked to analytics and other data-intensive apps, those with data centers face a huge dilemma, observes Rick Villars, IDC’s vice president of data center and cloud. Customers are struggling over whether it is better to buy servers and storage and run them in their own data center, or whether it makes more sense to buy infrastructure in a data center run by a vendor that can optimize the apps for the workloads they need.

Of the data center square footage in use today, 77% is owned and operated by enterprises for their own use, while 23% is owned by service providers selling the space to other companies, says Villars. In contrast, 10 years ago closer to 90% of that space was enterprise-owned, he says, and by 2020 “it will probably be closer to 50-50.”

More and more companies are recognizing that they don’t want to become service providers and they don’t want to build a new data center, he says, noting that IDC frequently hears from clients that “this is not where our strategic investments should be spent.”

At the same time, “most companies are saying … we want to take advantage of converged infrastructures and solid state storage and virtualization to run those existing apps for a lower cost … and do more with less,” he says. Some of the newer technologies are more advanced than the data centers they are housed in, Villars adds. “We call that data center obsolescence.”

Pay for what you use

In what may be a sign of the times, Aligned Data Centers (ADC) has started marketing a “pay-for-use” data center. It’s a usage-based pricing model designed for enterprises, service providers and government agencies looking to control data center costs while gaining faster time to market. ADC is betting that this approach will be popular with customers that have become accustomed to a cloud model of buying on demand and paying only for what they use, with the ability to increase or decrease usage based on their needs.

Typically, data center customers are required to sign long-term contracts for power they may or may not use, since data centers have static power requirements and fixed densities. Forecasting future power demand is difficult because few companies can predict what their IT gear will look like in five to 10 years. They may need more or less, depending on factors such as user demand, cloud adoption and innovation within the IT stack.

ADC’s model gives customers the ability to adjust capacity based on their business needs, says Mark Bauer, managing director for the data center solutions team at tenant broker Jones Lang LaSalle. He says this approach appeals to customers he is working with.

Customers generally “like to hit 80% of usage” but most hit only 40% to 50% in their current data centers, Bauer says. Companies are telling him, “We’re looking for flexibility, we’re looking to get a commitment for power, but we want a commitment from the operator to have that power available to us with the ability to take it down or up as needed,” he says. Most traditional data centers can’t accommodate that.

Even companies that opt to build their own facility acknowledge, “we’re having to build out more than we need and this model offers us something new,” Bauer says.

The pay-as-you-go data center offering came about because customers “have become accustomed to the cloud model of buying on-demand and paying for what you use,” says Jason Ferrara, chief marketing officer at ADC. “What we’re talking about here is deploying capacity incrementally — in much smaller blocks based on customers’ requirements.”

In the traditional colocation model, customers are locked into long-term leasing contracts with their data center providers, typically running anywhere from seven to 15 years. This means customers need to predict their IT demand as many years in advance as their contract lasts, explains Ferrara. Predicting IT loads is hard given the constantly changing technology landscape.

“This is the major reason for the industry’s waste problem, including comatose servers, because many companies over-predict how much infrastructure they are going to need,” he says.

ADC’s pay-for-use data center is essentially consumption-based pricing for colocation. It can reduce the up-front commitment for power and space by up to 70% by not locking customers into a fixed ramp schedule and charging only for what they use, Ferrara says. The model lets customers secure the capacity they need for future growth, add new capacity quickly and pay based on the power they are actually using.

The model helps reduce stranded capacity and lowers customers’ data center costs, Ferrara claims. ADC also provides additional savings by operating its data centers at a relatively low power usage effectiveness (PUE) ratio (1.15 guaranteed), which Ferrara says translates into a lower power bill for customers each month.
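For context, PUE is the ratio of a facility’s total power draw to the power delivered to IT equipment, so a lower number means less overhead for cooling and power distribution. Here is a minimal sketch of the arithmetic, assuming a hypothetical 300 kW IT load and an assumed comparison facility at a PUE of 1.6; only the 1.15 figure comes from ADC.

```python
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw, since PUE = total facility power / IT equipment power."""
    return it_load_kw * pue

# Hypothetical 300 kW IT load; compares ADC's guaranteed 1.15 PUE against an
# assumed comparison facility running at 1.6 (not a figure from the article).
it_load_kw = 300.0
for pue in (1.15, 1.6):
    total = facility_power_kw(it_load_kw, pue)
    print(f"PUE {pue}: {total:.0f} kW total draw, "
          f"{total - it_load_kw:.0f} kW of cooling/distribution overhead")
```

At that assumed load, the difference between the two ratios works out to roughly 135 kW of overhead that never reaches IT gear but still shows up on the power bill.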

ADC customers are asked to commit to 300 kilowatts, which Ferrara says is one-third less capacity than a traditional data center requires. ADC reserves the capacity the client needs on day one and then bills the customer based on how much it uses, says Ferrara. The amount of power and space a client is allocated is based on its current and future data center requirements; every customer is different. ADC’s data centers support up to 25 kilowatts per rack, so clients can grow within their existing space without having to add more. If a company’s demand then grows to 600 kilowatts, that is all it will pay for, he says. Customers are required to commit to a three-year contract.
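To make the contrast with a fixed lease concrete, here is a minimal sketch of consumption-based billing. The per-kilowatt rate, the metered usage figures and the 900 kW fixed-lease comparison are all hypothetical, and the assumption that the 300 kW commitment acts as a billing floor is an illustration, not something ADC has confirmed.

```python
# Hypothetical per-kilowatt monthly rate; ADC's actual pricing is not given in the article.
RATE_PER_KW_MONTH = 150.0
COMMITTED_KW = 300.0  # the day-one commitment described by Ferrara

def pay_for_use_bill(metered_kw: float) -> float:
    """Consumption-based bill, assuming the 300 kW commitment acts as a billing floor."""
    return max(metered_kw, COMMITTED_KW) * RATE_PER_KW_MONTH

def fixed_lease_bill(leased_kw: float) -> float:
    """Traditional colocation: pay for the full leased capacity whether it is used or not."""
    return leased_kw * RATE_PER_KW_MONTH

# A customer whose actual draw grows from 300 kW to 600 kW, compared with a
# hypothetical fixed lease sized at 900 kW to cover the same projected peak.
for month, metered_kw in enumerate((300.0, 450.0, 600.0), start=1):
    print(f"Month {month}: pay-for-use ${pay_for_use_bill(metered_kw):,.0f} "
          f"vs. fixed lease ${fixed_lease_bill(900.0):,.0f}")
```

Under those invented numbers, the pay-for-use customer’s bill rises only as its draw rises, while the fixed-lease customer pays for the full 900 kW from day one.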

ADC recently opened a 30-megawatt data center in Plano, Texas, that uses 85% less water than traditional data centers of the same size and capacity, according to the company, and has broken ground on a 65-megawatt data center in Phoenix, Ariz.

Owning versus outsourcing

Nightingale, a cloud-based electronic health record provider, sells two products, both applications for storing patient data and handling billing and scheduling. The first is a production system based on Windows Server, MS SQL and VMware that runs in a Toronto-based data center. The second is a second-generation, redesigned virtualized system based on Linux, Java and PostgreSQL and hosted by data center operator CenturyLink. Nightingale owns the hardware in the Toronto data center; CenturyLink owns and provides the virtualized infrastructure for the second system, says Ijaaz Ullah, vice president of IT and privacy officer at Nightingale. He declined to name the Toronto data center.

“We have a few hundred thousand dollars of hardware” sitting in the Toronto data center; the bigger concern is “in a few years we have to replace” the hardware, he says, adding that Nightingale’s IT staff is responsible for all maintenance and operations. “When we did the second product we took that out of the equation and moved to a hardware provider, which is CenturyLink.”

He says he likes that CenturyLink gives them as many virtual machines as they need when they have to scale up capacity. Eventually, the first system will be replaced by the newer product “and then we can sunset the old one. That hardware has already been replaced once and we’ll come to a crossroads in three to five years when the hardware will have to be replaced again.”

When that happens, Ullah says they will move the second-generation system to a cloud provider “so we don’t have to worry about that hardware again. The cloud hosted model is much better suited” for their business needs, he says.

“There’s a big move from capex to opex and we don’t have to make these massive investments in hardware,” Ullah says. “So it’s an easier sell to finance. As we grow, I can tell you that each additional user we add will cost us $6 per month.”

In the future, he says Nightingale will probably outsource even more of its IT needs, such as desktop and phone support. “We’re in the cloud business and if we didn’t trust in cloud services how could we expect our customers to do the same?”

Managing the transition

Even with all the movement to cloud and colocation, some enterprises still opt to build new data centers, explains Dan Harrington, research director of enterprise data centers at 451 Research.

Recently, Harrington conducted research asking clients what they would do if they ran out of capacity tomorrow. The number one response, he says, was that they would consolidate their IT infrastructure and use more virtualization to take advantage of the space they already have. The number two response was to use more cloud and hosted software as a service and platform as a service, he says. Number three was to use colocation, and lastly, about 20% of respondents said they would build a new data center.

Finance and healthcare-oriented companies are more likely to build their own data centers due to regulatory and compliance concerns, Harrington notes.

451 Research also asked clients about their workloads and found that today only around 8% of workloads are deployed in the cloud. Asked about their expectations three years out, respondents said they expect 21% of apps to reside in the cloud, 14% in colocation facilities and 65% in their own data centers because of pre-existing investments, according to Harrington.

“The reality is most enterprises have a whole bunch of data centers spread out around the world … so there are a lot of preexisting investments.” Yet, he says the research firm finds that a lot of those investments are not being utilized as much as they could be.

Harrington says virtualization and multicore processors have made existing IT infrastructure so much more efficient that “you’re able to do so much more with these servers and storage than in the past,” so there isn’t a big need for extra capacity. “I think a lot of people miss that when they’re trying to push cloud on everyone,” he says. “If I’m an enterprise with all these servers that are so efficient and have consolidated equipment down to a couple of racks and I deploy what I need there and have a half-empty data center, I’m not going to not use that space.”

But small- and medium-sized companies are not very likely to build a data center, Harrington adds, since it’s not their core competency. The same goes for enterprises that need computing capacity in other parts of the world; they will look at colocation, buying and maintaining their own infrastructure in those facilities so they don’t have to build their own data centers.

Colocation, cloud picking up steam

In the future, when it comes to utilizing data centers, “You’re either outsourcing to colocation or you’re outsourcing to cloud,” maintains Harrington. “It’s hard to say” if colo is bigger than cloud currently, or which model will be more popular going forward. With the number of apps being deployed, he suspects that cloud is growing faster than colocation right now, but colo is also on the rise.

If a company decides it doesn’t want to own its own data center any longer, the question becomes how to best manage the timing of the transition, says Villars. Transitions are usually tied to the age and status of a company’s current data center. “If you built one five years ago the finance department is still amortizing it. So there will be financial reasons you can’t” make the transition.

He advises companies thinking about outsourcing to a data center provider to first do an analysis of where they stand from a capital standpoint. The second step is to “rationalize your existing systems before you move,” meaning figure out why you have the number of servers and configurations you do, and significantly reduce the physical number of servers.

Customers may have as many as 30 to 50 different memory and storage configurations, and some servers may have multiple processors.
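As a rough illustration of what that server rationalization implies in practice, the sketch below groups a hypothetical inventory by hardware profile to show how many distinct configurations are actually in use. The field names and sample records are invented for the example; real data would come from a CMDB or asset export.

```python
from collections import Counter

# Hypothetical inventory records; in practice these would come from a CMDB or asset export.
servers = [
    {"cpus": 2, "memory_gb": 64, "storage_tb": 2},
    {"cpus": 2, "memory_gb": 64, "storage_tb": 2},
    {"cpus": 4, "memory_gb": 128, "storage_tb": 4},
    {"cpus": 1, "memory_gb": 32, "storage_tb": 1},
]

# Count how many servers share each CPU/memory/storage profile to spot consolidation candidates.
profiles = Counter((s["cpus"], s["memory_gb"], s["storage_tb"]) for s in servers)

print(f"{len(profiles)} distinct configurations across {len(servers)} servers")
for (cpus, mem_gb, storage_tb), count in profiles.most_common():
    print(f"  {count} x {cpus} CPU / {mem_gb} GB RAM / {storage_tb} TB storage")
```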

The next “rationalization” is to look at your application portfolio. Companies might have multiple databases, so consolidate your data sets and reduce the number of apps doing the same job so you don’t have as much to patch and maintain. A key best practice before you make the move, he says: “Clean up your environment and limit the scope — then you can shift to a third-party data center faster.”

A lot of companies use the move as an excuse to do system rationalization, he says, but trying to do both at the same time can lead to significant downtime. “Just moving to a new data center isn’t going to make it better; you have to get rid of the inefficiencies first,” Villars says.

The transition from your own data center to a service provider will also be much less painful if you’ve already done the upfront work of virtualization and building cloud services catalogs, he says. That also makes for a much easier migration to cloud.

Outsourcing to a data center provider has been pretty straightforward for Nightingale’s Ullah. “Everything is about using the best tools for the job and the best company to provide the services for you,” he says. “If we have an issue or need help we call the provider. That allows us to focus solely on our product, which we do best,” versus having to deal with “all the extra noise.”