Network architecture and capacity planning for server virtualization
Server virtualization not only challenges network managers with consolidation and latency considerations but also places a heavier burden on the network infrastructure going into each physical server. When a company reduces the number of physical servers, the number of users served by each remaining server can increase proportionately. Network managers need to look at implementing high-availability and data-replication software to minimize the impact a server failure could have on the network.
"The network manager is going to have to talk to folks and work with other IT silos such as the applications team and the server team to figure out the network process, especially for virtualization," said Stephen Elliot, enterprise systems management analyst with IDC. "It is a matter of being flexible and adapting for change that is coming down the pike."
There are certain tactical issues that have to be worked through. First and foremost, you must determine what is a switch and what is a server in a virtualized environment. Virtualization blurs the line between the two, since both end up running on the same physical box. "One has to consider," Elliot said, "how network and data center transformation is putting more pressure on network executives to think through the right network architecture for virtualizing applications and managing traffic and bandwidth."
"There are big questions around traffic management and handling policy," he said. "When we are talking about a virtualized application, many of the concerns we had during client/server don't disappear. On the networking front, these require further development, particularly in the realms of network address translation and how traffic moves through virtual switches."
One strategy is to implement a shared pool of servers for the virtual infrastructure, which allows a virtual server to migrate to any physical server in the pool. Another, significantly more complicated, strategy is to set up geographically dispersed disaster-recovery sites to which the virtual machines (VMs) can migrate in the event of a major disaster. In that case, the company needs a mechanism for moving the virtual images to the second site and a way to route traffic to the new location.
VMware has developed a technology it calls VMotion for moving virtual machines between physical boxes, and other major VM players have developed similar technologies. Unfortunately, these solutions tend to be optimized only for LANs. To stretch the capability over a wider area, network managers would have to extend a VLAN across the WAN.
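A rough, back-of-the-envelope calculation shows why these tools favor the LAN. The sketch below is illustrative Python, not VMware tooling; the VM size, link speeds and efficiency factor are all assumptions chosen for the example:

```python
# Illustrative sketch (not vendor tooling): estimate how long it takes to
# copy a VM's memory image at a given link speed. All figures are assumptions.

def transfer_seconds(memory_gb: float, link_gbps: float,
                     efficiency: float = 0.8) -> float:
    """Time to move memory_gb of VM state over a link_gbps link.

    efficiency is a rough allowance for protocol overhead and
    competing traffic on the same link.
    """
    bits = memory_gb * 8 * 1e9               # memory image size in bits
    usable = link_gbps * 1e9 * efficiency    # usable bits per second
    return bits / usable

# A hypothetical 16 GB VM over a 10 Gbps LAN vs. a 100 Mbps WAN link:
lan = transfer_seconds(16, 10)    # roughly 16 seconds on the LAN
wan = transfer_seconds(16, 0.1)   # roughly 1,600 seconds over the WAN
print(f"LAN: {lan:.0f} s, WAN: {wan:.0f} s")
```

Even before accounting for re-copying dirtied memory pages, the two-orders-of-magnitude gap suggests why a live migration designed for LAN latencies and bandwidth does not stretch gracefully across a wide area.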
Just because the servers appear to have enough CPU horsepower and networking bandwidth to handle the new applications themselves does not necessarily mean they have enough capacity to handle the virtual switches, firewalls and intrusion detection systems that will run on the same hardware.
For example, Chris McDaniel, now Virtualization Solutions Architect at Nimsoft, helped consolidate 200 sparsely loaded Intel servers down to four very large ones when he was an IT manager at The Gap. Although the four consolidated servers theoretically had enough capacity to handle the applications' working load, the team still ran into significant problems.
"When you virtualize the physical switch, it adds a lot of processing overhead to the VM host," McDaniel noted. "The physical server is a finite resource with only so much RAM or CPU capability."
McDaniel recommends no more than eight to 12 VMs per physical host. When multiple VMs try to write to the storage system simultaneously, the contention can create unacceptable delays.
The astute networking professional also needs to weigh conflicting messages from VM vendors (which encourage IT departments to virtualize everything) and physical switch and router hardware vendors (which err on the side of caution -- and increased hardware sales). "You will find conflicting messages from switch and router vendors versus virtualization vendors," said Simon Crosby, CTO of thin client vendor Citrix. "As we go down this path, there are winners and losers. The reason VMs have not landed is that the dominant equipment vendors' future gets curbed in this new environment. This will impact not just the number of servers, but the number of NICs and switches installed."
Separating networking functions
In order to maintain high levels of service within a virtualized environment, network professionals need to think about how to physically separate the different networking functions associated with each physical server. You have to divide the administration, storage, VMotion and production networks across multiple NICs and switches in order to maintain high levels of service on each one.
"If you have the system administrator accessing the host and you are backing up the VMs, you don't want that to interfere with fast access to data stores on the production network," McDaniel said.
The best strategy, he said, is to deploy a team of two 1 Gbps Ethernet cards for each function, plus an additional card for redundancy in case one fails. Without that redundancy, backups or large data writes can slow performance elsewhere on the network, degrading VM applications or generating networking error messages.
McDaniel recommends teaming multiple 1 Gbps cards -- rather than using larger capacity 10 Gbps cards -- in order to provide higher reliability in the event of a NIC or switch failure. A physical or software problem with a single larger card can have a more severe impact on the network than a problem with one of many cards. If one of these paths fails, applications such as Nimsoft's NimBUS management software can find an available switch and route traffic appropriately.
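The NIC budget this scheme implies is easy to total up. The sketch below is illustrative Python; reading "an additional card" as one spare for the whole host (rather than one per function) is an assumption:

```python
# Port count for segregated, teamed networks: two 1 Gbps ports per traffic
# class, plus spare standby ports for failover. Illustrative only; the
# single shared spare is an assumed reading of the recommendation.

FUNCTIONS = ["administration", "storage", "vmotion", "production"]

def nic_budget(functions, ports_per_team=2, spares=1):
    """Return (total 1 Gbps ports needed, aggregate active Gbps)."""
    total_ports = len(functions) * ports_per_team + spares
    active_gbps = len(functions) * ports_per_team * 1   # 1 Gbps per port
    return total_ports, active_gbps

ports, gbps = nic_budget(FUNCTIONS)
# Four functions x two ports + one spare = 9 ports, 8 Gbps active capacity.
print(f"{ports} ports, {gbps} Gbps active")
```

Nine gigabit ports per host is a material cabling and switch-port cost, which is part of why the trade-off against fewer 10 Gbps cards keeps coming up; McDaniel's argument is that the many-small-ports design fails more gracefully.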
George Lawton is a freelance writer, based in San Francisco, who has written more than 2,000 stories for SearchWinDev.com, IEEE Computer, and Wired (among others) over the last 17 years. Before that, he helped build Biosphere II, worked on a cattle ranch in Australia, and helped sail a Chinese junk to Antarctica. You can read more about him at his website, www.glawton.com.