
Friday, November 14, 2014

Consider expanding your data center.


Capacity planning needs to provide answers to two questions: What are you going to need to buy in the coming year? And when are you going to need to buy it?
To answer those questions, you need to know the following information:
  • Current usage: Which components can influence service capacity? How much of each do you use at the moment?
  • Normal growth: What is the expected growth rate of the service, without the influence of any specific business or marketing events? Sometimes this is called organic growth.
  • Planned growth: Which business or marketing events are planned, when will they occur, and what is the anticipated growth due to each of these events?
  • Headroom: Which kind of short-term usage spikes does your service encounter? Are there any particular events in the coming year, such as the Olympics or an election, that are expected to cause a usage spike? How much spare capacity do you need to handle these spikes gracefully? Headroom is usually specified as a percentage of current capacity.
  • Timetable: For each component, what is the lead time from ordering to delivery, and from delivery until it is in service? Are there specific constraints for bringing new capacity into service, such as change windows?
From that information, you can calculate the amount of capacity you expect to need for each resource by the end of the following year with a simple formula:
Future Resources = Current Usage × (1 + Normal Growth + Planned Growth) + Headroom
You can then calculate for each resource the additional capacity that you need to purchase:
Additional Resources = Future Resources − Current Resources
Perform this calculation for each resource, whether or not you think you will need more capacity. It is okay to reach the conclusion that you don't need any more network bandwidth in the coming year. It is not okay to be taken by surprise and run out of network bandwidth because you didn't consider it in your capacity planning. For shared resources, the data from many teams will need to be combined to determine whether more capacity is needed.
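As a rough illustration, here is a minimal Python sketch of the calculation for a single resource. Every number in it (usage, capacity, growth rates, headroom percentage) is hypothetical, and headroom is taken as a percentage of current capacity, as described above.

  # Capacity planning sketch for one resource; all numbers are hypothetical.
  current_usage = 400.0      # TB of storage actually used today
  current_capacity = 520.0   # TB installed today
  normal_growth = 0.25       # 25% organic growth expected next year
  planned_growth = 0.10      # 10% extra from planned launches/campaigns
  headroom_pct = 0.20        # keep 20% of current capacity spare for spikes

  future_resources = (current_usage * (1 + normal_growth + planned_growth)
                      + headroom_pct * current_capacity)
  additional_resources = max(0.0, future_resources - current_capacity)

  print(f"Future need: {future_resources:.0f} TB")            # 644 TB
  print(f"Additional to buy: {additional_resources:.0f} TB")  # 124 TB

Running the same calculation per resource (CPU, RAM, bandwidth, rack space, licenses) gives the shopping list for the coming year.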

Current usage

Before you can consider buying additional equipment, you need to understand what you currently have available and how much of it you are using. Before you can assess what you have, you need a complete list of all the things that are required to provide the service. If you forget something, it won't be included in your capacity planning, and you may run out of that one thing later, and as a result be unable to grow the service as quickly as you need.

What to track

If you are providing Internet-based services, the two most obvious things needed are some machines to provide the service and a connection to the Internet. Some machines may be generic machines that are later customized to perform given tasks, whereas others may be specialized appliances.
Going deeper into these items, machines have CPUs, caches, RAM, storage and network. Connecting to the Internet requires a local network, routers, switches and a connection to at least one ISP. Going deeper still, network cards, routers, switches, cables and storage devices all have bandwidth limitations. Some appliances may have higher-end network cards that need special cabling and interfaces on the network gear. All networked devices need IP addresses. These are all resources that need to be tracked.
Taking one step back, all devices run some sort of operating system, and some run additional software. The operating systems and software may require licenses and maintenance contracts. Data and configuration information on the devices may need backing up to yet more systems. Stepping even farther back, machines need to be installed in a data center that meets their power and environment needs. The number and type of racks in the datacenter, the power and cooling capacity and the available floor space all need to be tracked. Data centers may provide additional per-machine services, such as console service. For companies that have multiple datacenters and points of presence, there may be links between those sites that also have capacity limits. These are all additional resources to track.
Outside vendors may provide some services. The contracts covering those services specify cost or capacity limits. To make sure that you have covered every possible aspect, talk to people in every department, and find out what they do and how it relates to the service. For everything that relates to the services, you need to understand what the limits are, how you can track them and how you can measure how much of the available capacity is used.

How much do you have?

There is no substitute for a good up-to-date inventory database for keeping track of your assets. The inventory database should be kept up to date by making it a core component in the ordering, provisioning and decommissioning processes. An up-to-date inventory system gives you the data you need to find out how much of each resource you have. It should also be used to track the software license and maintenance contract inventory, and the contracted amount of resources that are available from third parties.
Using a limited number of standard machine configurations and having a set of standard appliances, storage systems, routers and switches makes it easier to map the number of devices to the lower-level resources, such as CPU and RAM, that they provide.

How much are you using now?

Identify the limiting resources for each service. Your monitoring system is likely already collecting resource use data for CPU, RAM, storage and bandwidth. Typically it collects this data at a higher frequency than required for capacity planning. A summarization or statistical sample may be sufficient for planning purposes and will generally simplify calculations. Combining this data with the data from the inventory system will show how much spare capacity you currently have.
Tracking everything in the inventory database and using a limited set of standard hardware configurations also makes it easy to specify how much space, power, cooling and other data center resources are used per device. With all of that data entered into the inventory system, you can automatically generate the data-center utilization rate.
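As a small, purely illustrative example of the summarization mentioned above, the sketch below collapses fine-grained monitoring samples (assumed here to be per-minute CPU readings) into daily averages and peaks, which are usually more than enough resolution for capacity planning:

  # Illustrative only: reduce per-minute CPU samples (timestamp, percent used)
  # to a daily average and peak for capacity-planning purposes.
  from collections import defaultdict
  from datetime import datetime
  from statistics import mean

  samples = [                      # in practice: thousands of points from monitoring
      (datetime(2014, 11, 14, 9, 0), 41.0),
      (datetime(2014, 11, 14, 9, 1), 47.5),
      (datetime(2014, 11, 15, 9, 0), 52.0),
  ]

  per_day = defaultdict(list)
  for ts, pct in samples:
      per_day[ts.date()].append(pct)

  for day, values in sorted(per_day.items()):
      print(day, f"avg={mean(values):.1f}%", f"peak={max(values):.1f}%")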

Normal growth

The monitoring system directly provides data on current usage and current capacity. It can also supply the normal growth rate for the preceding years. Look for any noticeable step changes in usage, and see if these correspond to a particular event, such as the roll-out of a new product or a special marketing drive. If the offset due to that event persists for the rest of the year, calculate the change and subtract it from subsequent data to avoid including this event-driven change in the normal growth calculation. Plot the data from as many years as possible on a graph, to determine if the normal growth rate is linear or follows some other trend.
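To make that concrete, here is a small sketch with invented monthly figures: it removes a known event-driven step change and then fits a straight line to estimate the organic growth rate. A real analysis would use more history and check whether a linear fit is actually appropriate.

  # Estimate organic (normal) growth from monthly usage data, after removing
  # a known step change caused by a product launch. Numbers are invented.
  usage = [100, 104, 109, 113, 118, 152, 157, 161, 166, 171, 175, 180]
  launch_month, launch_offset = 5, 30   # event in month 5 added ~30 units permanently

  adjusted = [u - launch_offset if i >= launch_month else u
              for i, u in enumerate(usage)]

  # Least-squares slope of the adjusted series = organic growth per month.
  n = len(adjusted)
  xs = range(n)
  x_mean, y_mean = sum(xs) / n, sum(adjusted) / n
  slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, adjusted))
           / sum((x - x_mean) ** 2 for x in xs))
  print(f"Organic growth ~ {slope:.1f} units/month, "
        f"~ {12 * slope / usage[0]:.0%} of starting usage per year")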

Planned growth

The second step is estimating additional growth due to marketing and business events, such as new product launches or new features. For example, the marketing department may be planning a major campaign in May that it predicts will increase the customer base by 20 to 25 percent. Or perhaps a new product is scheduled to launch in August that relies on three existing services and is expected to increase the load on each of those by 10 percent at launch, increasing to 30 percent by the end of the year. Use the data from any changes detected in the first step to validate the assumptions about expected growth.

Headroom

Headroom is the amount of excess capacity that is considered routine. Any service will have usage spikes or edge conditions that require extended resource usage occasionally. To prevent these edge conditions from triggering outages, spare resources must be routinely available. How much headroom is needed for any given service is a business decision. Since excess capacity is largely unused capacity, by its very nature it represents potentially wasted investment. Thus a financially responsible company wants to balance the potential for service interruption with the desire to conserve financial resources.
Your monitoring data should be picking up these resource spikes and providing hard statistical data on when, where and how often they occur. Data on outages and postmortem reports are also key in determining reasonable headroom.
Another component in determining how much headroom is needed is the amount of time it takes to have additional resources deployed into production from the moment that someone realizes that additional resources are required. If it takes three months to make new resources available, then you need to have more headroom available than if it takes two weeks or one month. At a minimum, you need sufficient headroom to allow for the expected growth during that time period.
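A back-of-the-envelope way to express that minimum, using hypothetical numbers:

  # Minimum headroom to cover expected growth during the resource lead time.
  # Both inputs are hypothetical.
  annual_growth = 0.40       # 40% expected growth over the year
  lead_time_months = 3       # from "we need more" to "it is in production"

  min_headroom = annual_growth * (lead_time_months / 12)
  print(f"Minimum headroom: {min_headroom:.0%}")   # -> 10%

Anything beyond that minimum is what buys you protection against spikes, not just against slow procurement.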

Resiliency

Reliable services also need additional capacity to meet their SLAs. The additional capacity allows for some components to fail, without the end users experiencing an outage or service degradation. The additional capacity needs to be in a different failure domain; otherwise, a single outage could take down both the primary machines and the spare capacity that should be available to take over the load.
Failure domains should also be considered at a larger scale, typically at the data-center level. For example, facility-wide maintenance work on the power systems requires the entire building to be shut down. If an entire data center is offline, the service must be able to run smoothly from the other data centers with no capacity problems. Spreading the service capacity across many failure domains reduces the additional capacity required to meet the resiliency requirements, and is the most cost-effective way to provide this extra capacity. For example, if a service runs in one data center, a second data center is required to provide the additional capacity, roughly 50 percent of the total deployed footprint. If a service runs in nine data centers, only a tenth is required; that configuration adds only about 10 percent.
The gold standard is to provide enough capacity for two data centers to be down at the same time. This permits one to be down for planned maintenance while the organization remains prepared for another data center going down unexpectedly.
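To illustrate the arithmetic, the sketch below expresses the spare capacity as a fraction of the capacity needed to carry the load, for the "two data centers down at once" standard described above. The data-center counts are examples only.

  # Spare capacity required so the service survives m_spare data centers being
  # down while n_active data centers carry the normal load (illustrative).
  def redundancy_overhead(n_active: int, m_spare: int) -> float:
      return m_spare / n_active

  for n_active in (1, 4, 9):
      extra = redundancy_overhead(n_active, m_spare=2)
      print(f"{n_active} active + 2 spare sites -> {extra:.0%} extra capacity")
  # 1 -> 200%, 4 -> 50%, 9 -> ~22%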

Timetable

Most companies plan their budgets annually, with expenditures split into quarters. Based on your expected normal growth and planned growth bursts, you can map out when you need the resources to be available. Working backward from that date, you need to figure out how long it takes from "go" until the resources are available.
How long does it take for purchase orders to be approved and sent to the vendor? How long does it take from receipt of a purchase order until the vendor has delivered the goods? How long does it take from delivery until the resources are available? Are there specific tests that need to be performed before the equipment can be installed? Are there specific change windows that you need to aim for to turn on the extra capacity? Once the additional capacity is turned on, how long does it take to reconfigure the services to make use of it? Using this information, you can provide an expenditures timetable.
Physical services generally have a longer lead time than virtual services. Part of the popularity of IaaS and PaaS offerings such as Amazon's EC2 and Elastic Storage is that newly requested resources have virtually instant delivery time.
It is always cost-effective to reduce resource delivery time, because shorter delivery times mean paying for less excess capacity to cover the wait. This is a place where automation that prepares newly acquired resources for use has immediate value.

Advanced capacity planning

Large, high-growth environments such as popular Internet services require a different approach to capacity planning. Standard enterprise-style capacity planning techniques are often insufficient. The customer base may change rapidly in ways that are hard to predict, requiring deeper and more frequent statistical analysis of the service monitoring data to detect significant changes in usage trends more quickly. This kind of capacity planning requires deeper technical knowledge. Capacity planners will need to be familiar with concepts such as QPS, active users, engagement, primary resources, capacity limit and core drivers.
This excerpt is from the book The Practice of Cloud System Administration: Designing and Operating Large Distributed Systems Vol 2 by Thomas A. Limoncelli, Strata R. Chalup and Christina J. Hogan, published by Pearson/Addison-Wesley Professional. Reprinted with permission of the authors and publisher.


Thursday, November 13, 2014

VMWORLD - CRACKING OPEN THE DATA CENTER


Peter Judge went free range and hatched this report.
12 November 2014 by Peter Judge - DCD
Gelsinger: "That boiled egg is like a data center"
VMware has its annual get-together in August in San Francisco, but a few weeks later the stories get extended and embellished at the European edition of VMworld in Barcelona.
This year was no exception with a quiverful of slogans, and a barrage of announcements, all of which added up to an assertion that VMware would like to take charge of pretty much everything from the hardware in your data centers upwards.
The big slogan “Brave New World of IT” brought an unintentional dystopian reference to Aldous Huxley’s book, but the dark side was hard to find in an event which promised  “No Limits”.
 
Cloud provider 
Top European news was a data center in Germany for the vCloud Air hybrid cloud offering. It was significant because users are increasingly demanding “data sovereignty”, the right to know their data is local and - hopefully - safe from snooping by the NSA. 
Is a German data center enough? Of course not. The logical conclusion is a need for data centers in every country - and probably for all the major cities within most countries. VMware’s head of cloud Bill Fathers was clear that VMware isn’t going to manage that - but assured Datacenter Dynamics that vCloud users are all right, because the exact same offering is delivered by partners within each country. 
 
Good egg
Pat Gelsinger is one of tech’s most approachable and friendly CEOs, but he always looks ill-at-ease on stage, even with a keynote encrusted with carefully crafted corporate soundbites. This time round, the memorable bit was the boiled egg. 
As well as “No Limits” and the  “Brave New World of IT”, Gelsinger promised “The Power of &”, which means users don’t have  to make choices (between, say, public and private cloud).
So far so conventional. Then Gelsinger launched sideways. “I’m a farm boy,” he drawled, and told us how he likes a soft-boiled egg for breakfast. And you know what? That boiled egg is like a data center.
“A data center is hard and crunchy on the outside, but soft and gooey on the inside,” said Gelsinger. It has a highly secure outer shell, but the servers inside are soft. If attackers can hit one virtual machine (VM), they can get through the whole data center.
To counter that, VMware wants to hard-boil the inside of the data center, placing a kind of firewall on each of the VMs. This will apply policies on traffic between VMs, and should limit the impact of any attack inside the data center, he said.
VMware designs your hardware
EVO: RAIL caused a stir in August. It’s a 2U hardware module for data centers, including four servers, each with solid state and hard drive storage, along with 10G network connections and Intel Xeon processors.

In Barcelona, HP joined the bunch of vendors signed up for EVO: RAIL, and repeated the big message. This is all about the hardware - but all the hardware is the same. Whoever you buy it from, you get the same design of commodity kit, loaded with the same VMware stack.
VMware promises it’s so easy you can have VMs up and running in 15 minutes - and to prove it, invited VMworld visitors to try for a record time on EVO: RAIL hardware. 
There are plenty of hyper-converged infrastructure (HCI) players offering similar kit and most claim to be unfazed by EVO: RAIL. Nutanix told us the arrival of the VMware spec just endorses and legitimizes the concept.
For the big hardware players, it’s a different story. They all say they can differentiate their EVO: RAIL hardware with a layer of their own management or application software, but they’d really rather make their own.
HP backed its VMware-badged EVO: RAIL product with the same box, under its own ConvergedSystems badge, promising the HP version would be better, more flexible, and could be set up quicker.
Why are they going this way? Because hardware is a commodity and software will eat it. EVO: RAIL - if it works - could standardize data center hardware the same way Microsoft’s Windows Phone and Google’s Android have standardized the hardware they run on - by setting a series of specifications for compliant devices.
As with commodity phones, commodity data center hardware will - we are assured - provide users with a consistent experience, and make it easy for software vendors to issue regular patches for their devices keeping them more stable and secure.
Taking it to the RACK
VMware admits it can’t push this all the way to large data centers: its EVO: RACK specification - still a tech preview - promises to deliver “zero to app” in two hours for whole racks in big data sites.

For this, however, VMware is having to open up a bit more. The EVO: RACK specification - or large parts of it - will be opened up to the Open Compute project, the web-scale open source movement for efficient hardware launched by Facebook.
Flexing desktops
At the desktop, VMware announced VMware Horizon Flex, a desktop virtualization product that combines local virtualization (using Fusion on the Mac and Player on the PC) with central control.

This is VMware’s bid to support Bring Your Own Laptop, allowing IT managers to install and control their own secure VMs on machines belonging to end users.
For those who want to absorb the desktop into their data centers, the company also rebranded its desktop as a service product as Horizon Air.
It has to be said that VMware went a bit overboard on the rebranding, and we’re struggling to keep up. It now has a new umbrella term for all its management stuff: vRealize. This brand  includes vCloud Automation Center, vCenter Orchestrator, vCenter Log Insight and IT Business Management Suite - we hope that is clear.
OpenStack and Docker - am I bothered?
Much has been made recently of the supposed threat to VMware from OpenStack, the open source cloud platform which has grown massively of late. If users can build and manage multi-vendor virtualized environments using OpenStack, the argument goes, why would you pay for a proprietary version from VMware?
VMware infrastructure boss Raghu Raghuram took that head on, repeating VMware’s support for the VMware Integrated OpenStack (VIOS) which runs on vCenter. 
Far from finding OpenStack a threat, Raghuram made a bald assertion: VMware is actually  the best way to run OpenStack, because of the extra features and mature management.
And then there is Docker, the system which uses minimal containers to separate apps, without having to dish out all the features and overhead of a whole virtual machine. 
Containers are the royal road to let customers converge their internal IT in DevOps, we’ve been told, because Operations can provide them as a service, for Development to work in, and eventually deliver apps in, without tiresome recompilation and rebuilding.
If the cool kids are doing things in containers instead of VMs, isn’t that bad news for VMware? Not at all said Raghuram, in an echo of his OpenStack comments. 
Docker isn’t a threat to VMware, he said, VMware’s VMs are the best place to run Docker’s containers. The overhead of the virtual machine is slight, and the benefits of security and management are worth it, he assured the 9000 strong crowd in Barcelona. 
If VMworld has a dystopian side, it’s VMware’s urge to control, and something about Raghuram’s support for the OpenStack and container standards reminded this author that Microsoft had a strategy to support standards yet still make it hard for customers to leave.
Like Microsoft, will VMware Embrace and Extend the standards?

WebNMS Automates End-to-End, Multi-Vendor Network Services with Newly Released Symphony Orchestration Platform


The Industry’s First Orchestration Platform That Combines Workflow Automation with Unified Network Management and Service Assurance for Carrier Ethernet, MPLS, SDN and NFV Networks
PLEASANTON, Calif. -- WebNMS, a leading provider of network management and IoT solutions, today announced the launch of the Symphony Orchestration Platform, a unified software suite for automation of service provider network operations. Symphony enables carriers and other service providers to efficiently transform existing static service offerings into dynamic services and introduce new SDN and NFV-based services as they mature.
To meet the growing demand for dynamic network services, providers need solutions to transform their existing network and service management processes into orchestration systems that automate existing operations and rapidly incorporate new, on-demand services as the market evolves. Until now, service provider network management silos have inhibited effective orchestration. Therefore, unified network management is a prerequisite to workflow automation. But to realize the efficiency benefits of orchestration, providers also need a flexible, customizable framework to capture their unique workflows as automated business logic.
An industry first, Symphony solves both problems by combining unified network management with a workflow automation framework for dynamic service processing, provisioning and assurance across multi-vendor networks. The resulting orchestration solution enables automation of multiple existing services, such as Carrier Ethernet and MPLS, and integration of emerging services based on SDN and NFV.
Symphony eliminates operational silos by unifying all management functions across end-to-end, multi-vendor and multi-layer networks. Based on the industry-leading WebNMS Framework, the unified Symphony Orchestration Platform enables efficient, automated provisioning of multiple existing services. The initial release includes Metro Ethernet Forum (MEF) Carrier Ethernet and MPLS service solutions, and WebNMS plans to extend its solution library for other well-defined provider services. Symphony also includes a library of management protocols and multi-vendor equipment adapters to simplify integration with existing OSS/BSS systems and network equipment. REST APIs enable integration with SDN controllers and cloud orchestration applications for emerging SDN and NFV services. This unified network provisioning solution enables the second key component to orchestration — workflow automation.
As part of Symphony, WebNMS has introduced the Composer Workflow Framework that gives providers the capabilities to capture, design and automate unique workflows to orchestrate network operations. Composer helps operators free themselves from routine administrative tasks, including accelerating service fulfillment, assurance and maintenance through automation.
“WebNMS service provider customers need practical solutions that centralize operational control of multi-vendor, multi-layer networks,” said Prabhu Ramachandran, director of WebNMS. “With Symphony, we are building on our scalable and reliable WebNMS Framework — allowing our customers to efficiently describe and automate their unique workflows with simple and familiar tools and APIs.”
Symphony also integrates powerful visualization and big data analytics tools. The visualization tools enable providers to create workflow and network monitoring dashboards, 3D navigation troubleshooting views as well as service-aware customer portals with real-time performance monitoring and on-demand service requests. With an integrated Hadoop framework option, Symphony enables analytic workflow creation directly in the Composer framework. In this manner, network operators can extract valuable information from real-time workflow, service and network data to help optimize their businesses.
WebNMS is also integrating Symphony with solutions from key ecosystem partners, including network element, test equipment, OSS/BSS and SDN controller vendors. With Symphony, WebNMS extends its 15 years of solution leadership for over 400 service provider and system vendor customers.
Symphony Provider Orchestration Platform Key Features
  • Composer Workflow Framework automates network operations.
  • Unified network management architecture eliminates silos for end-to-end provisioning of existing SDN and NFV services.
    • Multi-vendor architecture provides independent solution with standards-based, extensible information modeling for open, third-party integration.
    • Integration with multi-layer element management system (EMS) reliably administers a wide range of both transport and packet network elements.
  • Turn-key solution library for standardized Carrier Ethernet (MEF) and MPLS network services, including automated service assurance per customer SLA.
  • Open, cloud-ready interfaces including RESTful APIs.
  • Intuitive portals give critical insight for network operations, help desk and customers.
  • Integrated big data repository option and integrated analytic workflow automation.
Availability
The WebNMS Symphony Orchestration Platform is available immediately for businesses. For more information and to request a demo, visit http://www.webnms.com/webnms/symphony-orchestration-platform.html.
For more information about WebNMS, please visit http://www.webnms.com.
About WebNMS
WebNMS, the telecom software division of Zoho Corporation, specializes in platforms for network management, element management, service orchestration and workflow orchestration. WebNMS builds these solutions on flexible, extensible frameworks for network service providers, managed service providers and network solution vendors. With more than 25,000 deployments across the globe, the flagship WebNMS Framework is the most preferred and reliable multi-vendor management solution in the market today. For more information about WebNMS, please visit http://www.webnms.com.
WebNMS is a trademark of Zoho Corporation. All other brand names and product names are trademarks or registered trademarks of their respective companies.
Tags: WebNMS, Zoho, Symphony Orchestration Platform, WebNMS Framework, Composer Workflow Framework, Carrier Ethernet, MPLS, SDN, NFV, unified network management, workflow automation, Metro Ethernet Forum, big data analytics, OSS/BSS, service management, service automation, element management, network management, cloud

Cloud storage appliances make backup & recovery easy



Cloud storage appliances: Backup and recovery made simple

Summary: Why should you integrate cloud backup appliances into your IT environment? Because you've made this decision before.
Cloud. Cloud. Yay cloud!
If you're not an IT decision maker, I did not write this article for you. Go away. Your mother is calling you and wants you to clean up your room in the basement.
OK, now that we're left with just the adults in the joint, let me put this in very simple terms that I am sure any stressed out, overworked CIO or CTO can understand: Your storage is very expensive.
Like many organizations, you are probably always on the verge of having to buy another frame, another chassis, and trays of drives because you've got VM and filer sprawl. And the guy or gal who has the authority to sign the purchase orders to get you those new frames, chassis, network infrastructure, et cetera, likes to say no a lot.
They do this because they love to make you miserable. They enjoy it. They have a big giant rubber stamp embossed with "Denied" on it in a 1920s-style font with a pad of red ink next to them, and they relish every moment to use it when one of those POs comes across their desk.
Sound familiar? Do I get it? Are you still with me? Good.
If you can't get new storage frames, then you have to by definition free up that storage. Chances are you've got a lot of infrequently used files, but maybe because of regulatory reasons or other business drivers, you have to retain that information. So where to put it?
Where to put. It.
So in the olden days, you had to solve this problem with things like physical boxes of printed paper documents and DLT tapes, and because you didn't have enough physical real estate to store the stuff and that real estate was expensive, you shipped it offsite. In armored trucks, in many cases.
Now, back in those days of yore, the 1990s, you used services like Iron Mountain to cart truckloads of that stuff out your door. And I am sure there were many conversations at the time about the pros and cons of doing that.
Certainly, one-off retrieval of documents and tapes wasn't cheap when it had to occur, and there were some trust issues about the transport of those documents and tapes offsite, but, overall, it was a net win for your company and a good idea, and you were probably wondering why after all was said and done, you did not do it sooner.
Cloud-based storage is the same deal. You use it to move all sorts of infrequently used stuff offsite, in a secure fashion, so you can free up space on that storage that's a pain in the ass and expensive to buy.
That's certainly the primary use case, but there are others, which I will get into momentarily.
However, unlike Iron Mountain or similar services, it's not expensive to retrieve that infrequently used stuff, and it also happens extremely quickly. It's also more secure than that armored truck.
No, really, it is. When stored in the cloud, be it Amazon's, Microsoft's, Google's, or anyone else's, these "Cloud Storage Gateways", as they are called, transport your data using military-spec network encryption protocols and then store it in an encrypted file format that is machine unreadable should anyone actually invade the target datacenter, which by the way is geo-redundant if you want to pay for that premium.
Armored trucks can be broken into, and there were a number of instances during the early 2000s where major financial and government institutions simply lost DLT tapes in transit and had major public fiascos.
Yes, I'm sure the NSA can tap your MPLS and OC lines, but, honestly, they have better things to do with their time.
So first of all, cloud storage is cheap. How cheap? Take a look at the Amazon S3 and Microsoft Azure price lists, for starters. It's way, way cheaper than your frames.
Now, you're probably thinking that you gotta use a whole lot of programmatic API junk to integrate this stuff with your line-of-business apps. Nope.
So all of these Cloud Storage services have APIs, but you can literally just drop one of these gateway appliances into a rack, or even run one as a virtual machine, and point your servers at it using an iSCSI connection over your IP network and let it do all that API stuff.
Your servers just see the gateway as just another LUN. A block storage device like all the others you have, just like on your SAN or your NAS filer.
There are many companies that make these gateway devices.
The vendors that make these gateways or have the functionality included in their storage systems include Amazon, Microsoft, CTERA, Riverbed, EMC, IBM, F5, TwinStrata, Barracuda, Nasuni, and Panzura. I've linked to all of these so you can examine their offerings closely.
Obviously, Amazon and Microsoft have products that are optimized for their own clouds. Amazon's is provided as a free VM that runs on your on-premises VMware ESX or Microsoft Hyper-V systems, and Microsoft's StorSimple is three configurations of physical appliance containing a mix of SSD and SAS disk.
All of these solutions, including the cloud-agnostic ones listed above, can be used not only to cache and front-end your on-premises data and transparently offload and retrieve the infrequently accessed stuff to and from cloud storage, but they can also be used for disaster recovery scenarios.
Many of these appliances have snapshotting capability and essentially act as virtual tape libraries.
If your datacenter has a catastrophic failure, you can use another appliance/gateway at another location to remotely restore that data to a set of servers from that cloud storage.
This is also the part of the article where I tell you that I work for a company that owns a cloud and makes said gateway devices (Microsoft/StorSimple).
But you knew that already, so I'm not going to recommend anything in particular, but I will tell you what questions to ask your vendor so you get the functionality that you want. Here's a whole bunch:
  • What's the capacity/scale of the solution; i.e., how much can be cached or stored locally on a per-volume (LUN) basis, and what is the maximum number of volumes that you can store per VM?
  • Can you do local snapshots? Can you do cloud-based snapshots?
  • Can you do incremental snapshots with storage optimization?
  • Is the restore process WAN optimized?
  • Do you provide application consistency for your data protection (i.e., VSS integration for enterprise services and databases)?
  • Do you de-duplicate the primary storage and the snapshots?
  • How do you do data encryption to and from the cloud provider?
  • Do you supply a high-availability architecture for your gateway device?
  • Do you support multipath I/O (MPIO)?
  • Does your appliance support non-destructive upgrades?
  • Do you have an SLA for local storage performance on the appliance?
  • Is the gateway plug and play and self-contained?
  • Is the gateway certified to run as a VM on my hypervisor of choice (VMware, Microsoft Hyper-V, KVM, Xen, Unix)?
Are you planning on bringing cloud-integrated storage using a gateway appliance into your IT environment? Talk back and let me know.

About 

Jason Perlow, Sr. Technology Editor at ZDNet, is a technologist with over two decades of experience integrating large heterogeneous multi-vendor computing environments in Fortune 500 companies. Jason is currently a Partner Technology Strategist with Microsoft Corp. His expressed views do not necessarily represent those of his employer.