Owning your own data center is more secure, cheaper and more controllable



After 48 years in the same building, VSE Corp. moved about a year and a half ago into a completely modern data center and recycled the old IT equipment into a warm disaster recovery site.
The Alexandria, Va.-based technical services company transitioned from a mix of physical and virtual servers to nearly 100% virtualization and became LEED Gold certified. By thoroughly vetting hardware and competing vendors against each other, VSE kept data center costs and performance on track.
Here, David Chivers, vice president and chief information officer at VSE, discusses why the company built its own data center in the age of cloud computing, the technologies that helped it gain efficiencies and the IT platform trends it chose to ignore.
Can you describe what your infrastructure is like now?
David Chivers: In the old building, the infrastructure grew over time, from mainframe to a mix of physical and virtual servers. Now, with our modernization, we have 365 servers and only about 10 physical servers left; eight of those support VSE's backup and recovery system. We use Cisco [Unified Computing System] blades and Dell servers, with [Hewlett Packard] and Solaris still in place for specialty applications that need to run on those systems.
We also upgraded to an EMC VNX 7500, bringing our existing VNX 5500 to the DR site. And we use a Cisco backbone and VMware vSphere 5.1 for virtualization.
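For readers who want to produce that kind of inventory count themselves, a short script against vCenter is enough. The sketch below assumes the open source pyVmomi SDK and a reachable vCenter; the hostname and credentials are placeholders, not details of VSE's environment.

# Rough vSphere inventory count using the pyVmomi SDK (placeholder host and
# credentials): tallies virtual machines versus physical ESXi hosts.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # demo only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="auditor", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

print(f"Virtual machines: {len(vms.view)}")
print(f"ESXi hosts: {len(hosts.view)}")

Disconnect(si)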
David Chivers, VP and CIO, VSE Corp.
PHOTO COURTESY OF VSE CORP.

We spent about two years studying the hardware available on the market, including those that would give us the Gold LEED certification. This involved reaching out to suppliers and signing nondisclosure agreements so we could discuss what they had in the pipeline.
One example is the APC in-rack cooling we're using. It is a six-inch extension from the rack that forms our 'cold aisle' without actually cooling a whole aisle. The new data center is three times bigger than the old one, and we pay about a third of our old electrical bill to power it. It was easy with the new technologies available.
Why did you decide to build a new data center, rather than move to a colocation provider or put your workloads on the cloud?
Chivers: I have about 1PB of data to host, which would be expensive on the cloud. At the time of the build, we also had client-server database applications that would not work in a cloud setup.
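A back-of-envelope calculation gives a rough sense of that cost; the per-gigabyte rate below is an assumed placeholder, not any provider's actual pricing.

# Back-of-envelope cloud storage cost for roughly 1 PB of data.
# The rate is an assumed placeholder; real pricing varies by provider,
# storage tier and egress charges.
capacity_gb = 1_000_000            # ~1 PB expressed in gigabytes
rate_per_gb_month = 0.025          # assumed $/GB per month
monthly_cost = capacity_gb * rate_per_gb_month
print(f"~${monthly_cost:,.0f} per month, ~${monthly_cost * 12:,.0f} per year, before egress")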
Data integrity is also important. We're a public company and [Sarbanes-Oxley Act] regulated, and we need our data secure. Not many cloud hosting companies can handle [Federal Information Security Management Act] compliance either, so the options would be limited. At the end of the day, we really don't want to put data out [on the cloud].
Our goal was modern IT and good DR. From a cost, security and performance standpoint, an in-house data center made the most sense.
Software-defined data center is not on our three- to five-year roadmap today, but it's an option that's on the table. We don't want to be the early adopters and find that not everything we want to do is supported. With a mix of vendors involved in software-defined data center, you could end up with the software companies blaming your hardware for poor performance, and pushing upgrades.
What are your biggest ongoing data center costs?
Chivers: Application maintenance and licensing. Technology -- hardware -- gets faster over the years and also gets cheaper. A blade server today costs less than it did four years ago. But server-based apps have costs that go up and up. Microsoft licensing costs go up every year, for example. Years ago, the app costs were probably 10 times less than today. And it's very difficult to drop one application and switch to another.
Application licensing is hard to budget for as well, because licenses are so complicated. Hardware, infrastructure and even labor costs are simpler and more manageable.
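A toy model shows why those two cost curves diverge in a budget; the starting prices and growth rates below are illustrative assumptions, not VSE's figures.

# Illustrative only: an application license that rises ~8% a year versus a
# blade server whose replacement price falls ~10% a year (assumed figures).
license_cost = 50_000.0   # assumed year-0 licensing spend
blade_cost = 20_000.0     # assumed year-0 blade server price
for year in range(1, 6):
    license_cost *= 1.08
    blade_cost *= 0.90
    print(f"Year {year}: licensing ${license_cost:,.0f} vs. blade ${blade_cost:,.0f}")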
Can you tell me about your purchasing decisions? Did you consider all-in-one converged infrastructure products?
VSE's new building in Alexandria.
PHOTO COURTESY OF VSE CORP.
Chivers: We didn't look at converged infrastructure boxes. One reason is that every purchase we make has to be 'competed,' since we work as a federal contractor. But cost isn't the only factor; you have to do the analysis.
For example, we were running out of storage and room for more hard drives on the old storage area network. We competed it out between Dell, with their Compellent SANs, and EMC, with their VNX SANs. Dell offered a lower-cost package, with a free SAN for the DR site and a discounted array at the production site, but the effort and risk of migrating our data out of the EMC VNX and onto the Dell Compellent system was not worth the small difference in price.
The same problems arise with package deals like converged infrastructure -- how will it affect what is already in my data center? If we were starting fresh, with nothing, then CI would be an option.
We competed every element of the data center, but kept in mind how it all had to work together. Converting virtual machines across platforms can spike our costs and complexity because blade servers talk to the network differently. We chose Cisco servers, and needed Cisco's engineers to help get everything up and running as it is supposed to.
Are there areas where you can cut costs in operations?
The setup in one of VSE's racks at the on-premises data center.
PHOTO COURTESY OF VSE CORP.
Chivers: With virtualization, we have lowered our CPU and memory requirements. I had tons of storage before but couldn't use it well. The EMC VNX array has three types of storage tiers, so we can support today's workload and tomorrow's.
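The principle behind that kind of tiering is straightforward, even though the array automates it. The sketch below is a generic illustration of heat-based placement with assumed thresholds, not EMC's actual tiering algorithm.

# Generic heat-based tier placement (not EMC's algorithm): hot data lands on
# flash, warm data on SAS, cold data on NL-SAS. Thresholds are assumptions.
def place_tier(iops_per_gb: float) -> str:
    if iops_per_gb >= 1.0:
        return "flash"
    if iops_per_gb >= 0.1:
        return "SAS"
    return "NL-SAS"

workloads = {"ERP database": 2.5, "file shares": 0.3, "project archive": 0.01}
for name, heat in workloads.items():
    print(f"{name}: {place_tier(heat)} tier")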
We also chose Evolven IT Ops for configuration control, using it at first just for auditing and then taking advantage of the analytics capabilities, cutting down on root cause analysis time. It gives me visibility into the entire infrastructure. My engineers for VMware, for storage, etc., will use the analytics tools provided by those vendors, but this way I can look at everything on one pane. My engineers and I can work together to look at different levels.
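The auditing idea behind a tool like that can be shown in a few lines; the snippet below is a generic configuration-drift check with made-up settings, not Evolven's product or API.

# Generic configuration-drift check (illustrative settings, not Evolven's API):
# compare each server's captured configuration against a baseline and flag
# anything that deviates.
baseline = {"ntp_server": "time.corp.local", "tls_min_version": "1.2", "swap_gb": 8}

servers = {
    "app01": {"ntp_server": "time.corp.local", "tls_min_version": "1.2", "swap_gb": 8},
    "app02": {"ntp_server": "time.corp.local", "tls_min_version": "1.0", "swap_gb": 16},
}

for host, config in servers.items():
    for key, value in config.items():
        if baseline.get(key) != value:
            print(f"{host}: {key} expected {baseline.get(key)!r}, found {value!r}")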