How New Technology Can Boost DR and Business Continuity
Let’s talk a little shop today. One of the hottest conversations business managers are having is how their organization can use the data center as a key element of its disaster recovery and business continuity strategy. Cloud computing, data replication and virtualization all play a major role in the disaster recovery and business continuity (DRBC) discussion. Still, the evolution of the data center and the arrival of new types of resources require administrators to revisit their strategies and see where they can improve further.
Although business continuity and disaster recovery can overlap, they are really distinct IT objectives. With that in mind, the conversation about data center DR strategies has evolved considerably over the past few years. Where it was once reserved for big shops or those with a lot of dollars to spend, modern IT infrastructure lets a much broader range of companies do a lot more for a lot less. Smaller organizations are now leveraging private and public cloud environments for their DR needs. In fact, this influx of new business is part of the reason many data center providers are seeing a boom in service requests.
What are the real driving factors here?
- Global traffic management (GTM) and global server load balancing (GSLB). The truth is simple: without these types of modern global traffic controllers, it would be a lot more difficult to replicate and distribute data. Just take a look at what technologies like F5, NetScaler and Silver Peak are doing. They are creating a new logical layer for globally distributed traffic management. Not only are these technologies optimizing the flow of traffic, they are controlling where users go and what resources they can utilize. The digitization of global business now requires administrators to have intelligent devices helping optimize and load-balance traffic all over the world. With virtualization, both physical and virtual appliances are capable of spanning global resources and communicating with each other in real time. From there, they can route users to the appropriate data center as a regular function of policy, or even route entire groups to other live data centers in case of an emergency (a minimal sketch of this kind of health-based routing follows this list).
- Software-defined technologies. Working with software-based and virtual technologies certainly makes life easier. What we’re able to do now with a logical network controller in terms of creating thousands of virtual connections is pretty amazing. Furthermore, the ability to control traffic and QoS, and even deploy virtual security services, makes these new types of technologies very valuable. Remember, the conversation isn’t just around SDN. Software-defined technologies also incorporate security, storage and other key data center components. We are creating logical layers that allow for improved communication between hardware components and global resources. This is the concept of virtualizing servers and services. Interlinking nodes without having to deploy additional hardware is a big reason cloud computing and the boom in data center resources have become so prevalent (see the toy controller sketch after this list).
- High-density computing. Shared environments and multi-tenancy are becoming regular platforms within the modern data center. After all, why not? The ability to consolidate and logically place numerous users on one shared system is pretty efficient. Plus, administrators are able to use highly intelligent blade systems to quickly provision new workloads and rapidly repurpose entire chassis in case of an emergency. Furthermore, converged infrastructure is seeing even more advancement as more SSD and flash technologies become incorporated directly into the chassis. Imagine having the capability to deliver millions of IOPS and hundreds of terabytes of flash storage to key workloads distributed over your corporate cloud. Replicating this type of system requires fewer resources because the work centers on server profiles rather than physical boxes. This isn’t just a highly effective use of server technology; it’s the use of advanced service profiles that virtualize the server’s identity at the hardware layer (see the profile sketch after this list).
- More bandwidth. More fiber, better local and external connectivity, and greatly improved network capabilities are allowing the data center to deliver massive amounts of data at lightning speeds. Already, we are seeing Google Fiber delivering unprecedented speeds to homes for very low prices. I, for one, can’t wait for that service to come to my city. The point is that bandwidth is becoming more available. Edge systems are capable of handling more traffic and very rich content. This increase in WAN-based resources is a direct reason so many organizations are moving part of their infrastructure into the cloud. Hybrid systems allow for fast data replication and the ability to stay up if your primary data center goes down (the quick replication math after this list shows why link speed matters). Furthermore, when you couple the three drivers above with this bandwidth, you get an environment that can replicate quickly and stay extremely agile.
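To make the GSLB idea concrete, here is a minimal Python sketch of health-based global routing: answer each request with the first healthy data center in a preference list, and fail over automatically when the primary stops responding. The site names, addresses and /health endpoint are all hypothetical; real GTM appliances layer geography, load and session persistence on top of this basic decision.

```python
# Minimal GSLB-style failover sketch: pick the first healthy data center
# from a preference list. Site names, IPs, and the /health endpoint are
# hypothetical examples, not a real appliance API.
import urllib.request

SITES = {
    "us-east": "203.0.113.10",   # primary data center (example address)
    "eu-west": "198.51.100.20",  # secondary / failover data center
}

def is_healthy(ip: str, timeout: float = 2.0) -> bool:
    """Probe a site's health endpoint; any error marks the site down."""
    try:
        with urllib.request.urlopen(f"http://{ip}/health", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        return False

def resolve(preferred_order=("us-east", "eu-west")) -> str:
    """Return the first healthy site's IP, mimicking a GTM policy decision."""
    for site in preferred_order:
        ip = SITES[site]
        if is_healthy(ip):
            return ip
    raise RuntimeError("no healthy data center available")
```

A real deployment would run these probes continuously and publish the answer through DNS with a short TTL, which is exactly what lets entire user groups be redirected to a live data center during an emergency.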
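The software-defined point is easier to see as data structures. The sketch below is a toy model, not a real SDN API: a logical controller holds flow rules as plain records, so creating a virtual connection between a primary and a DR site is a table update rather than new hardware, and security policy rides along in the same rule.

```python
# Toy model of a software-defined controller: flow rules live as data in a
# logical layer, so "wiring" two nodes together is a table update, not new
# hardware. Rule fields and node names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    src: str        # source node or tenant network
    dst: str        # destination node
    qos: str        # e.g. "gold" for latency-sensitive replication traffic
    allow: bool     # security policy can ride in the same rule

class Controller:
    def __init__(self):
        self.rules: list[FlowRule] = []

    def connect(self, src: str, dst: str, qos: str = "silver") -> None:
        """Create a virtual connection by installing a rule, no cabling needed."""
        self.rules.append(FlowRule(src, dst, qos, allow=True))

    def permitted(self, src: str, dst: str) -> bool:
        """Default-deny: traffic flows only where a rule says it can."""
        return any(r.src == src and r.dst == dst and r.allow for r in self.rules)

ctl = Controller()
ctl.connect("dc-primary", "dc-dr-site", qos="gold")  # replication path
print(ctl.permitted("dc-primary", "dc-dr-site"))     # True
```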
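For the high-density computing point, the essence of a service profile is that a server's identity is data. This hypothetical sketch (the field names are illustrative; real blade systems carry far more state than this) shows why repurposing an entire chassis for DR can be as simple as reassigning profiles.

```python
# Sketch of the "service profile" idea: a server's identity (network, boot
# target, storage identity) is data that can be re-applied to a different
# blade. Fields are illustrative, not a vendor schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceProfile:
    name: str
    vlan: int
    boot_lun: str   # SAN LUN the blade boots from
    wwpn: str       # virtualized storage identity travels with the profile

@dataclass
class Blade:
    slot: int
    profile: Optional[ServiceProfile] = None

def fail_over(chassis: list[Blade], dr_profiles: list[ServiceProfile]) -> None:
    """Repurpose an entire chassis by swapping every blade's profile."""
    for blade, profile in zip(chassis, dr_profiles):
        blade.profile = profile  # the blade now boots as the DR workload
```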
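Finally, some back-of-the-envelope math for the bandwidth point. The daily change rate and link speeds below are made-up examples, but they show why WAN capacity directly sets how aggressive a replication schedule, and therefore a recovery point, can be.

```python
# Rough check of why WAN bandwidth drives DR design: how long does it take
# to replicate a day's changed data to a hybrid-cloud site? The 500 GB
# change rate and 70% usable-throughput figure are illustrative assumptions.

def replication_hours(changed_gb: float, link_mbps: float,
                      efficiency: float = 0.7) -> float:
    """Hours to ship `changed_gb` over a link at ~70% usable throughput."""
    usable_mbps = link_mbps * efficiency
    seconds = (changed_gb * 8 * 1024) / usable_mbps  # GB -> megabits
    return seconds / 3600

daily_change_gb = 500
for mbps in (100, 1000, 10000):  # fast Ethernet, gigabit, 10 GbE WAN links
    print(f"{mbps:>5} Mb/s -> {replication_hours(daily_change_gb, mbps):5.1f} h")
```

At 100 Mb/s, the example day's changes take roughly 16 hours to ship; at 10 Gb/s, the same replication finishes in about ten minutes. That is the difference between a nightly sync and near-continuous protection.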
Are there other advancements that have helped organizations achieve greater levels of redundancy? Of course there are. Everything from the software layer to new hardware advancements helps organizations better utilize resources. The bottom line is this: if you haven’t explored a cloud option for DRBC, maybe it’s time. The modern data center has become home to a lot of really advanced technologies and service delivery models, all working together to deliver more services, more resources and a lot more agility for your organization.