The art of cloud balancing: Interconnect data centers to be one cloud

Cloud providers often face competing demands in the pursuit of profitability: They must develop the right services and pricing for a given market, and they must optimize the infrastructure for those services without going broke. One particularly complicated facet of this challenge is cost-effectively load balancing, or "cloud balancing," traffic across a cloud composed of multiple interconnected data centers.
The multi-data center model carries significant benefits for cloud providers, particularly for performance and uptime. Spreading resources across multiple data centers lets cloud providers target specific geographies with lower network connection costs and latency, while improving reliability through redundancy and cloud balancing. At the same time, however, distributing resources can erode the economies of scale that make cloud computing's capital and operating expense model attractive. The good news is that there are measures cloud providers can take to mitigate these risks.
WANs drive cloud balancing strategy
Most cloud operators would prefer to have redundant facilities from which to provide their services, and network operators that host their own internal features or applications in a cloud have the same desire. The question is how to make a distributed, multi-data center cloud look like a single resource pool -- available for all applications and services -- and the answer obviously must start with wide area network (WAN) connectivity.
There are two dimensions to cloud balancing multiple cloud data centers:
  • There must be a WAN connection between the data centers, and this connection must have sufficient capacity to maintain Quality of Service (QoS).
  • The WAN connection must link the data centers in a way that makes them addressable to each other -- in the same way that a local resource would be addressable within a single data center. (The sketch after this list shows how both dimensions might factor into a placement decision.)
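To make this concrete, here is a minimal Python sketch of how a cloud balancer might weigh both dimensions when placing a workload. The data center attributes, names and thresholds are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

# Hypothetical model of a pooled, multi-data-center cloud; all names
# and numbers below are illustrative assumptions.

@dataclass
class DataCenter:
    name: str
    free_capacity: float      # fraction of local compute/storage still available
    wan_headroom_mbps: float  # spare capacity on the inter-data-center trunk
    latency_ms: float         # measured latency to the requesting application

def place_workload(pool, needed_capacity, min_headroom_mbps):
    """Pick a data center from the shared pool.

    A site qualifies only if it has local capacity AND enough WAN headroom
    to preserve QoS on the interconnect; among qualifiers, the
    lowest-latency site wins.
    """
    candidates = [dc for dc in pool
                  if dc.free_capacity >= needed_capacity
                  and dc.wan_headroom_mbps >= min_headroom_mbps]
    if not candidates:
        raise RuntimeError("no data center meets capacity and QoS limits")
    return min(candidates, key=lambda dc: dc.latency_ms)

pool = [DataCenter("metro-east", 0.40, 600.0, 12.0),
        DataCenter("metro-west", 0.15, 900.0, 8.0)]
print(place_workload(pool, needed_capacity=0.25, min_headroom_mbps=500.0).name)
# -> metro-east: metro-west is closer but lacks free capacity
```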
A single approach can likely address both objectives. In most cases, the ideal data center interconnection medium is a Carrier Ethernet path linking Ethernet-based data center networks. The devil, as usual, is in the details.
The capacity needed to connect data centers will depend on the kind of traffic that is likely to transit the connection. The big challenge would be defining how applications and application components running in one data center access storage devices in another. For multiple data centers to appear as a uniform resource pool, that capability must be provided.
What connectivity is best for cloud balancing multiple data centers?
In a performance management sense, database access can be divided into two levels: logical and virtual/physical. Logical access means that a database server is sent a high-level request -- something close to the user or application, like a SQL query -- and the results are returned when the operation is complete. In virtual/physical access, storage device commands are sent to read and write disk devices as needed.
Obviously, using virtual/physical access over any network connection will create significantly more traffic than logical access, thus requiring fast trunks between data centers. In many cases, logical access can be supported using a much slower connection.
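A back-of-envelope calculation makes the gap concrete. Every figure below is an assumed, illustrative value rather than a measurement:

```python
# Traffic generated by one query under each access model; all sizes
# are assumed, illustrative values.

query_bytes    = 500              # SQL text sent to the remote database server
result_bytes   = 200 * 1_024      # ~200 KB of result rows returned

block_size     = 8 * 1_024        # 8 KB pages read at the virtual/physical level
blocks_scanned = 50_000           # pages touched to satisfy the same query

logical_traffic  = query_bytes + result_bytes
physical_traffic = blocks_scanned * block_size

print(f"logical access:  {logical_traffic / 1e6:7.2f} MB on the WAN")
print(f"physical access: {physical_traffic / 1e6:7.2f} MB on the WAN")
# Physical access moves roughly 2,000x more data in this example,
# which is why it demands much faster inter-data-center trunks.
```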
When the cloud operator is also the provider of local metro/WAN services, the inter-data center trunks likely use fiber or dedicated wavelengths, which offer the highest possible bandwidth and the lowest risk of congestion-based interference. Where fiber or wavelength connections are not an option, using 100 Mbps Ethernet or Gigabit Ethernet will be suitable in nearly all cases when database access is via "logical-level" commands.
For virtual/physical storage connectivity, 100 Mbps Ethernet is unlikely to be suitable; either multiple Gigabit Ethernet connections or a 10 Gbps Ethernet connection will likely be needed. Although this may seem expensive, that's not necessarily the case as long as the data centers share the same metro area.
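A quick transfer-time check shows why the faster trunks matter for block-level traffic. The workload size here is an assumed example:

```python
# How long a given burst of storage I/O occupies each class of trunk.
# The 50 GB workload is an assumed, illustrative figure.

workload_gb = 50
link_speeds_mbps = {"100 Mbps Ethernet": 100,
                    "Gigabit Ethernet": 1_000,
                    "10 Gbps Ethernet": 10_000}

for name, mbps in link_speeds_mbps.items():
    seconds = workload_gb * 8_000 / mbps   # GB -> megabits, then divide by rate
    print(f"{name:>18}: {seconds / 60:5.1f} minutes")
# 100 Mbps ties up the link for over an hour; 10 Gbps finishes in under a minute.
```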
To connect data centers outside of a metro area, WAN bandwidth cost will likely make traffic management and cloud balancing critical. As a result, pooling cloud resources across data center boundaries will be most effective if storage traffic across the WAN connection is limited. The best strategy is to steer cloud balancing efforts toward applications that use high-level database services, such as SQL queries, and to offer a cloud-based relational database management system (RDBMS) or similar high-level data service. The more restricted the connections are, the more effort cloud providers must expend on cloud balancing and on limiting cross-data center traffic.
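As a sketch, that steering policy can be expressed as a simple admission check. The app descriptor and its storage_access field are hypothetical:

```python
# Hypothetical admission policy: only applications that reach their data
# through high-level (logical) services are candidates for cross-data-center
# balancing; block-level (virtual/physical) consumers stay near their storage.

def eligible_for_cloud_balancing(app):
    return app["storage_access"] == "logical"   # e.g., SQL against an RDBMS service

apps = [{"name": "reporting",        "storage_access": "logical"},
        {"name": "block-replicator", "storage_access": "physical"}]

for app in apps:
    verdict = ("may migrate across the WAN" if eligible_for_cloud_balancing(app)
               else "pin to home data center")
    print(f"{app['name']}: {verdict}")
```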
Cloud balancing management: Physical and logical
Meanwhile, operations costs can mount when cloud balancing across data centers unless a hierarchical management process is defined.
Physical infrastructure clearly must be managed locally -- where monitoring and servicing are practical -- but the combined resource pool must also be managed logically so that it can be made to function as a unit.
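One way to picture that hierarchy is a two-tier sketch: each site manages its own physical gear, while a global layer sees only the logical pool and delegates hands-on work back to the owning site. All names here are illustrative:

```python
class SiteManager:
    """Local tier: monitors and services physical infrastructure on site."""
    def __init__(self, site, servers):
        self.site, self.servers = site, servers

    def service(self, server_id):
        return f"[{self.site}] dispatch technician to server {server_id}"

class PoolManager:
    """Global tier: treats all sites as one logical resource pool."""
    def __init__(self, sites):
        self.sites = sites

    def total_capacity(self):
        return sum(s.servers for s in self.sites)

    def repair(self, site_name, server_id):
        site = next(s for s in self.sites if s.site == site_name)
        return site.service(server_id)   # physical work is delegated locally

pool = PoolManager([SiteManager("east", 400), SiteManager("west", 250)])
print(pool.total_capacity())    # logical view: one pool of 650 servers
print(pool.repair("west", 17))  # physical view: handled at the owning site
```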
While most vendors have at least some capabilities in this area, there are differences among them that could be important as providers develop their cloud business models. Consequently, cloud providers must evaluate their vendors' ability to manage the connected data centers -- not just connect them -- before they invest.
About the author: Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982. He is the publisher of Netwatcher, a journal addressing advanced telecommunications strategy issues. Check out his blog, Uncommon Wisdom, for the latest in communications business and technology development.