Best Practices In Data Center Space Planning

The increased focus on efficiency within the IT world brings with it higher demands for thoughtful, well-planned spaces to house IT equipment, with consideration given to future needs. As a result, planning around current topologies alone can fall short of future requirements. Efficiency can be realized in a number of ways that affect planning and growth considerations. Most organizations try to predict their future growth needs but are not always able to foresee industry changes that may affect them down the road. Some of the most important factors to consider in data center planning are:
  • Consistent footprints across platforms
  • Utilization/consolidation/virtualization of IT systems
  • Network, power, and cooling topologies and planning
  • Future considerations (growth or contraction implications)
This article examines these facets of data center space planning, surveying the options available today along with the positive and negative attributes of each, and considers the impact of these decisions on performance within a hypothetical space. Increased pressure to focus on efficiency can prompt changes intended to improve a space, and sometimes such decisions have adverse effects on operational and capital costs. But decisions can be made that not only improve efficiency but ultimately also lower both capital investment and operational costs.

Typical Equipment Footprint (TEF)

One of the biggest challenges in effective data center layout planning is understanding the IT equipment proposed for the space. In most of today's data centers a high percentage of the equipment is rack-mounted servers. Most of the racks that hold this equipment are based on a standard width of approximately 19 in. for the support rails that hold the servers or other equipment in place. The rack enclosure then allows additional space for internal wiring, air circulation, and exterior panels and doors, so total rack enclosure widths end up between 24 in. and 30 in. (the wider sizes provide more room for wiring strategies). Rack depths are likewise based on a typical spacing between the front and rear support rails of approximately 29 in.; once again, more room is allowed in the total enclosure for wire management, airflow, and the front and rear doors of the rack. The overall depth of a typical server rack can be anywhere from 36 in. to 48 in., depending on the make and use of the rack.
Although a large amount of the equipment in most data centers is rack mounted, there are still a number of other equipment types in the space that are standalone devices. This equipment can vary depending on the system requirements, manufacturer, and product. Typical standalone equipment in today's data centers includes storage devices (for SAN/NAS-type arrays), mainframes, and tape or virtual tape systems for long-term backups.
Another major IT equipment type typically found in today's data centers is network systems equipment and distribution racks. Although some data centers place the network core systems outside the data center proper, oftentimes these core areas (including primary and edge switching, distribution, appliances, and carrier equipment) are located directly and centrally in the data center. Most of this equipment is rack mounted, although in many cases these racks are open racks without any type of enclosure. These racks are usually two-post frames (with only a single vertical support on each side of the rack) at 19 in. in width.
The different types of equipment present mean that some variation in the footprint layout of IT equipment is common. Both storage and mainframe systems are typically wider than the 24- to 30-in. size and can even reach depths beyond 48 in. These systems typically require special planning as part of the IT layout, not only to accommodate their size but also their airflow patterns, which can vary from the front-to-back airflow typically planned for. While this variation does occur, such equipment usually represents a smaller percentage of the overall equipment in the space. The most common planning footprint for an equipment rack is 24 in. in width by 48 in. in depth, or 8 sq ft, typically factored up slightly to about 8.5 sq ft to account for larger IT equipment. In most planning scenarios these dimensions will net the highest yield of typical layout space and allow for the most flexibility when planning out the data center space. This size also works well with an access floor system, which is typically set to a 24- by 24-in. grid (600 x 600 mm in metric).
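To illustrate how the TEF drives simple layout math, the short sketch below (Python) computes the planning footprint and a rough rack yield for a room. The room size and the gross-to-net circulation factor are illustrative assumptions, not figures from this article.

```python
# Rough TEF yield estimate: a planning sketch, not a substitute for a real layout study.
# The room area and circulation factor below are illustrative assumptions.

TEF_WIDTH_FT = 24 / 12    # 24 in. typical rack width
TEF_DEPTH_FT = 48 / 12    # 48 in. typical rack depth
TEF_AREA_SQFT = TEF_WIDTH_FT * TEF_DEPTH_FT   # 8 sq ft
PLANNING_AREA_SQFT = 8.5                      # factored up for larger IT equipment

# Assumed gross-to-net multiplier covering aisles, clearances, and in-room
# support equipment (hypothetical planning factor).
CIRCULATION_FACTOR = 2.5

def estimated_rack_count(room_area_sqft: float) -> int:
    """Estimate how many TEFs fit in a given gross room area."""
    gross_area_per_rack = PLANNING_AREA_SQFT * CIRCULATION_FACTOR
    return int(room_area_sqft // gross_area_per_rack)

print(f"TEF footprint: {TEF_AREA_SQFT:.0f} sq ft (planned at {PLANNING_AREA_SQFT} sq ft)")
print("Racks in a 5,000-sq-ft room (rough):", estimated_rack_count(5000))
```

In practice the actual yield depends on aisle pitch, support equipment, and egress requirements, so a figure like this is only a first-pass check.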

TEF Densities

When planning a data center it is important to understand the power densities currently being experienced and those anticipated in the future. Power density is a good metric for evaluating both mechanical and electrical distribution solutions. Power densities have increased over time as IT equipment shrinks in footprint; however, they have not increased to the extent once imagined, thanks to increased efficiencies in chip technologies. The best approach to planning a data center layout is to categorize the IT equipment into three power density levels (more than one level may be present at the same time):
  • Low density. This level is more common in older spaces, smaller commercial sites, or sites with limited ability to support higher densities. It typically covers TEF ratings from 1 kW to less than 6 kW.
  • Medium density. This level is very common in today's enterprise, government, and larger commercial sites. Often it is the result of an IT group's conscious decision to avoid high densities because of the changes required to support them (spreading the load rather than stacking the load). This level covers TEF ratings from 6 kW to less than 15 kW.
  • High density. This level is less prevalent and often not maintained throughout a data center. Locations that do experience these densities are typically university or medical research facilities using high-performance computing (HPC), cloud service providers, and wholesale colocation providers. The rating at this level is over 15 kW per TEF.

The levels noted above are often mixed in the same facility: a data center may contain a small number of high-density racks while the overwhelming majority of the space remains low or medium density, as in the categorization sketch below. Most power and cooling solutions will work with both low and medium densities, but more care is required when dealing with high densities.
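A minimal sketch of this categorization, assuming a hypothetical mix of per-TEF loads (the rack values are illustrative, not measured data):

```python
# Categorize per-TEF power draws into the three density levels described above.
# The rack loads below are hypothetical examples, not data from this article.

from collections import Counter

def density_level(kw_per_tef: float) -> str:
    if kw_per_tef < 6:
        return "low"      # roughly 1 kW to <6 kW
    if kw_per_tef < 15:
        return "medium"   # 6 kW to <15 kW
    return "high"         # 15 kW and above

rack_loads_kw = [2.5, 4.0, 5.5, 7.0, 8.5, 9.0, 12.0, 18.0]  # illustrative mix

counts = Counter(density_level(kw) for kw in rack_loads_kw)
print("TEF count by density level:", dict(counts))
print("Total IT load:", sum(rack_loads_kw), "kW")
```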
It should be noted that the densities, or kW/TEF, should be based on actual power consumption, not the nameplate power rating of the IT equipment. Manufacturers of IT equipment publish the maximum power consumption possible for a device. That maximum is almost never reached in typical operation and generally assumes a fully configured, loaded, and utilized device.
In a high percentage of instances the connected load of a piece of equipment or an equipment rack is substantially below the manufacturer's published power rating. The difference between the actual power drawn and the manufacturer's published figure, expressed as a percentage of the published figure, is known as the IT equipment diversity factor. In the past, the equipment diversity factor was sometimes as high as 60% (in other words, the actual draw of typical equipment was only 40% of the manufacturer's published data for the equipment).
Today's IT equipment is more efficient through consolidation of equipment and virtualization of operating systems (locating multiple operating systems and their associated applications on a single physical server, which increases overall utilization of the equipment). This has driven the diversity factor down to the 30% to 40% range. It is important to understand a client's current power loading (uninterruptible power supply [UPS] power loading vs. IT equipment nameplate ratings) to calculate the current diversity factor. This allows for power growth planning that is not overstated and can be matched more closely to actual power needs, which in turn allows the mechanical system capacity and distribution methodologies to be sized appropriately.
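A worked example of this calculation; the UPS load, nameplate total, and growth allowance below are hypothetical numbers chosen for illustration:

```python
# Diversity factor = 1 - (actual measured load / nameplate total), consistent with
# the definition above. All input values here are hypothetical.

nameplate_total_kw = 1000.0   # sum of published nameplate ratings for the IT equipment
measured_ups_load_kw = 650.0  # actual UPS output serving that equipment

diversity_factor = 1 - measured_ups_load_kw / nameplate_total_kw
print(f"Diversity factor: {diversity_factor:.0%}")  # 35%, i.e., draw is 65% of nameplate

# Growth planning against actual draw rather than nameplate:
growth_allowance = 1.30  # assumed 30% IT growth
planning_load_kw = measured_ups_load_kw * growth_allowance
print(f"Planning load with growth: {planning_load_kw:.0f} kW "
      f"(vs. {nameplate_total_kw * growth_allowance:.0f} kW if sized on nameplate)")
```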

Typical Hot/Cold Aisle Arrangement In A Data Center

With this background information we can now examine typical techniques used in data center layout. To maintain an efficient plan that separates supply air to the IT equipment from return air back to the mechanical systems, it is recommended that IT equipment be laid out in a hot aisle/cold aisle arrangement. This allows supply air to be delivered (typically through the access floor) to the cold aisle at the front of the equipment, where intakes are typically located, while exhaust air from the back of the IT equipment flows back to the HVAC system's return in an open-return fashion. At low and even medium TEF densities this arrangement can be effective, but it can be hampered by aspects of the IT equipment and by how well the equipment rows are managed. A number of factors can undermine this basic technique and degrade supply air temperatures to the equipment:
  • Equipment row gaps/missing blanking panels in racks: Hot and cold aisles are most effective when there is consistent separation between the airstreams. Any opportunity for colder supply air to mix with warmer return air will dilute the supply air to the racks (a simple mixing estimate follows this list). Gaps in rows and a lack of blanking panels in the racks themselves allow this mixing to occur.
  • IT equipment airflow deviations from a front-to-back direction: Some IT equipment does not operate with a front supply air intake and rear exhaust. Many storage arrays use a front intake and top exhaust path, and network switches often use a side intake and side exhaust. In either case, mixing of supply air with exhaust air is much more likely to occur.
  • Location of HVAC equipment returns: The placement of HVAC equipment within the data center (typically CRAH or CRAC units) is important for directing the return airflow. If the return airflow path must pass over or through a cold aisle, an opportunity for air mixing is introduced.
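To put rough numbers on the mixing effect described in the first factor above, a simple weighted-average estimate can be used; the supply and return temperatures and the recirculation fraction below are assumed for illustration only:

```python
# Supply-air dilution from hot-air recirculation: a weighted-average sketch.
# Temperatures and the recirculation fraction are assumed values.

supply_temp_f = 65.0     # cold-aisle supply temperature
return_temp_f = 95.0     # hot-aisle exhaust temperature
recirc_fraction = 0.15   # assumed share of rack intake air that is recirculated exhaust

intake_temp_f = (1 - recirc_fraction) * supply_temp_f + recirc_fraction * return_temp_f
print(f"Effective rack intake temperature: {intake_temp_f:.1f} °F")  # ~69.5 °F
```

Even a modest recirculation fraction raises the effective intake temperature several degrees, which is why blanking panels and row continuity matter.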
With these factors understood, basic layout techniques can achieve reasonable results by maintaining hot/cold aisle arrangements in the data center, although certain limitations may not be possible to overcome. In cases where a good basic layout is difficult to achieve (such as with equipment types that are not conducive to front-to-back airflow alone), other techniques can be employed to achieve better airflow management and separation:
Hot aisle/cold aisle containment. In this strategy either the hot aisles or the cold aisles are enclosed to prevent those airstreams from escaping or mixing with the opposite airflow. Containment is typically achieved with a panelized system using a framework and plastic infill panels, or with a heavy vinyl sheet system hung from a horizontal bracket. These systems typically run from the underside of a dropped ceiling to the top of the IT equipment racks; where racks are not present they may run all the way to the access floor.
  • Advantages: Limits potential for airflow mixing; allows controlled, contained airflow within the contained aisle; mechanical units can be located outside the data center (with a plenum or ducted return).
  • Disadvantages: Rack uniformity or in-fill panels required; doors required at the ends of containment aisles; sprinkler system patterning can be affected (fusible links may be required by the AHJ); and the working environment in the containment area is affected (colder or warmer temperatures experienced).

Return air plenum and IT rack chimneys. This is a simpler approach than the containment systems described previously and exhibits many of the same benefits while eliminating some of the drawbacks. It uses a ceiling return air plenum, with chimneys run from the top of each IT equipment rack to the plenum; the HVAC unit returns are also connected to the plenum to draw the exhaust air back to the units. Chimneys are run from every cabinet.
  • Advantages: Limits potential for airflow mixing; allows controlled flow of return air back to HVAC units; eliminates the need for aisle doors; racks can be of varied heights; mechanical units can be located outside the data center.
  • Disadvantages: Equipment must be in an enclosure; sprinkler and lighting patterns can be affected; rear cabinet doors must be solid, which increases exhaust temperatures within the rack.

Close-coupled cooling systems. One of the easiest ways to reduce the potential for air mixing is to bring the supply air system to the IT equipment rack itself. A number of systems can be located directly adjacent to the IT equipment rack (either above or next to it) or can capture the exhaust air as it leaves the rack (as with a rear-door heat exchanger). These options minimize the distance from the intake or exhaust to the cooling system and virtually eliminate the opportunity for mixing. However, they do have some issues relating to their use.
  • Advantages: Limits potential for airflow mixing; provides immediate cooling or heat dissipation at the IT equipment; reduces or eliminates the need for an underfloor plenum.
  • Disadvantages: In-row and on-rack deployments affect rack densities (the number of IT rack spaces per row) and depth (rear-door exchangers increase rack depth); these systems typically require a heat exchange or distribution unit on the data center floor; there is typically no humidity control in these smaller units; and in larger data centers, redundancy/reliability needs may drive up overall implementation costs and erode the TCO advantage.

Air-handling systems with airside economization. This covers a number of system types, from more traditional building air handlers with economizers to enthalpy wheel systems and indirect evaporative cooling systems. All of these systems share some common elements. They reside outside the data center, with a supply and/or return plenum directly adjacent to the data center (above or next to the space). They are generally pre-packaged systems with coordinated controls that are shipped and placed on-site rather than built on-site. Their efficiency is affected by outdoor environmental conditions, and they need to be matched to the climatic tendencies of the site.
  • Advantages: Can limit potential for airflow mixing with proper layout planning; does not require HVAC equipment in the data center; reduces or eliminates the need for an underfloor plenum; in the right climatic conditions these systems can offer the lowest TCO.
  • Disadvantages: These packaged systems have large footprints that require exterior placement and access; they tend to have higher initial costs; certain types can have high water consumption; and redundancy may have to be achieved through direct expansion (compressorized) operation.

Hybrid solutions. The approaches noted above should not be considered mutually exclusive; data centers may have mixed needs or fluctuating densities. Any or all of these layouts may be used in conjunction with one another (some being better suited to work together than others), and each should be considered during the planning stages of any major data center renovation or new construction.

Network, Power, And Cooling Topologies, And Their Effect On Layout Planning

A commonality among the layout techniques described previously is planning for airflow to and from the IT equipment racks. Without it, data centers suffer constant issues with general overheating and hot spots. Considerations relating to the network, power distribution, and cooling solutions must also be explored during planning in order to achieve the best possible outcome and proper operations.
Network/communications. Network and other communications wiring and equipment can greatly affect the data center layout and, ultimately, airflow to and from the IT equipment. A number of factors should be reviewed to make sure proper planning is achieved:
  • Network core: Locating the network core in the data center will greatly increase the amount of wiring required to enter and leave the space (the use of copper vs. fiber affects cabling size and quantity). Consideration should be given to locating the core in a dedicated space outside the data center proper to reduce cabling loads.
  • Network cabling location: As data centers have evolved, network cabling has shifted from predominantly underfloor pathways to overhead cabling. Running the network cabling overhead reduces congestion in the underfloor air plenum (if an access floor is used for this purpose) and also forces better cable management, since the cabling is exposed to view. When planning overhead network cable trays, it is important to coordinate with the IT equipment layout, as these trays should run over the top of the racks. Tray height, cabling connectivity to the racks, and the impact on other systems (such as lighting and fire suppression) should all be considered.
  • Distribution switching: Network topologies affect the pathways and the number of cables required for proper communications across the system. The use of end-of-row switching in lieu of top-of-rack switching can cut down considerably on the interconnect cabling requirements of the system. The use of fiber distribution instead of copper cabling can also reduce cable diameters and tray fill, making the systems easier to handle and lighter overall.
  • Systems communications: In addition to the network needs of a data center, IT planning oftentimes fails to account for the various systems communications required for BAS/EPMS, security, fire alarm and detection, and other support systems. Network security or jurisdictional restrictions often require separate systems and/or pathways for these communications. Upfront discussion of these considerations is strongly advised.

Power distribution. Much of the discussion around the data center focuses on cooling needs for IT equipment. Power distribution is equally important, and it too can affect airflow management. In laying out the power distribution in the data center, the following should be reviewed:
  • Power cabling location: Traditionally, power cabling has been run in flexible conduit below the access floor to each IT equipment rack, and in redundant power schemes two or more cables may be run to each rack. The use of end-of-row remote power panels (RPPs) can cut down on cabling runs and distances; RPPs also reduce underfloor clutter and improve airflow.
  • Overhead busway distribution: Busway distribution is becoming more prevalent in the data center market. It cuts down on cabling and makes it easier to change out connectors when rack configurations change. Redundancy can be achieved with separate busways, and overhead routing (in conjunction with overhead network cabling) frees the underfloor plenum for airflow (if an access floor is even required). A rough capacity sketch follows this list.
  • Power distribution voltage and type: IT equipment manufacturers are making it easier to use 400 V distribution in the data center. If the equipment can accept this voltage, the need to transform power down to a lower voltage is eliminated, which removes equipment (typically a power distribution unit [PDU]) from in or around the data center and frees more space for IT equipment. The use of DC power in lieu of the AC power typically used is also being explored. Although neither type of distribution is widely used today, larger data center owners and operators are looking at these systems as a means to cut costs and increase overall reliability.
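As a rough illustration of how overhead busway capacity maps to TEF counts, the sketch below uses an assumed busway rating, power factor, continuous-load derating, and kW/TEF value; none of these figures come from this article.

```python
# How many TEFs can one overhead busway feed? A sketch with assumed ratings.
# Voltage, amperage, power factor, derating, and kW/TEF are all illustrative.

import math

busway_voltage = 415       # V, three-phase line-to-line (assumed)
busway_amps = 400          # A, busway rating (assumed)
power_factor = 0.95        # assumed
continuous_derate = 0.80   # assumed 80% continuous-loading limit

busway_kw = (math.sqrt(3) * busway_voltage * busway_amps
             * power_factor * continuous_derate) / 1000
kw_per_tef = 8.0           # assumed medium-density TEF rating

print(f"Usable busway capacity: {busway_kw:.0f} kW")
print(f"TEFs supported at {kw_per_tef} kW each: {int(busway_kw // kw_per_tef)}")
```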


Cooling distribution. Although much of this article discusses data center layouts as they relate to mechanical systems, little has been said about the actual distribution methods, including plenum make-up, perforated tiles, and other distribution means. The following should be considered:
  • Access floor plenum: The access floor plenum has worked as a supply pathway for cold air since the inception of the data center. Over time the depths of these plenums have increased, largely because of the congestion created when other systems (network, power, fire detection and suppression, etc.) are located in the same area. Through extensive CFD modeling of underfloor cavities we have found that a clear depth of 18 in. is all that is required to meet air circulation and static pressure requirements. This assumes the cavity is able to hold static pressure and that perforated tiles are the only exit points for airflow. Access floor panel condition and concrete slab sealing are important to maintaining an effective plenum. Plenum depth, depressing the concrete slab, and other potential implications of underfloor plenums should be explored.
  • Perforated tiles: Perforated tiles have long been used to direct airflow at IT equipment racks. The original standard tiles provided only 25% free area for airflow (in other words, the tiles were 75% solid). These panels offered less rolling-load capacity than solid tiles and provided airflow only straight up through the tile (airflow could be limited with dampers, which impeded it even more). Newer perforated tiles allow for 56% and 68% free area, permitting increased airflow, and offer directing fins that turn the airflow toward the equipment rack rather than straight up into the aisle, making better use of the air coming out of the floor. These tiles are made to meet or exceed general floor rolling loads, improving equipment handling in the cold aisles. Airflow should be reviewed before data center layout planning is completed (a rough airflow check follows this list).
  • Cold aisle widths: The standard data center layout pitch (the distance from the center of one cold aisle to the center of the next, as defined by ASHRAE) is seven tiles, or 14 ft (4,200 mm for metric tiles). This allows for a 4-ft, two-tile cold aisle width. Depending on equipment densities, this aisle width may need to be expanded to three tiles (6 ft) where high densities are required. This also affects the perforated tile layout noted above, to assure proper coverage of supply air to the IT equipment racks.
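As a rough check that a given perforated tile selection and cold aisle width can feed the planned density, the sketch below uses the standard sensible-heat relationship (airflow in CFM ≈ 3.16 × watts ÷ ΔT in °F). The rack load, temperature rise, and per-tile delivery figure are assumed placeholders; actual tile delivery depends on tile type and underfloor static pressure.

```python
# Required airflow per TEF vs. perforated-tile delivery: an order-of-magnitude sketch.
# The sensible-heat constant (3.16 with watts and °F) is standard; the per-tile
# delivery value is an assumed placeholder, not manufacturer data.

rack_load_w = 8000           # assumed 8-kW medium-density TEF
delta_t_f = 20.0             # assumed rack temperature rise, °F

required_cfm = 3.16 * rack_load_w / delta_t_f
print(f"Airflow required per TEF: {required_cfm:.0f} CFM")   # ~1,264 CFM

assumed_cfm_per_tile = 900   # placeholder delivery for a high-free-area tile
tiles_needed = -(-required_cfm // assumed_cfm_per_tile)      # ceiling division
print(f"Perforated tiles needed per TEF (rough): {int(tiles_needed)}")
```

When the tile count per rack exceeds what a two-tile aisle can supply, this is one reason high densities may call for wider cold aisles or one of the containment approaches described earlier.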

Future Planning

This article focuses on many of the data center planning strategies being employed today. Although there are more possibilities, the ideas presented here represent current mainstream thinking for data centers (both renovations and new planning). In terms of emerging trends, a number of planning techniques being explored and implemented can be considered the cutting edge of data center design. These ideas may become more mainstream as TCOs come down and wider acceptance takes hold:
  • Containerized data centers. Although many systems from different providers have been available for more than 10 years, these “data centers in a box” have not garnered the market attention once thought likely. As IT equipment densities and utilization increase, they may become more popular given their quick time-to-market capability. At the same time, colocation providers are eroding the need for smaller commercial data centers, lessening the attractiveness of owning your own box when someone else can manage it for you.
  • Elimination of access floor plenums. With the advent of cooling systems that can operate within the data center space (such as in-row cooling and air-handler/economizer systems), the need for an underfloor supply air plenum has been reduced. In combination with overhead network and power distribution, this can eliminate the need for an access floor system altogether, saving the cost of the floor itself as well as the structural cost of depressing a floor slab in new facilities or installing stairs and ramps at existing sites. This approach has taken hold in many smaller sites and is becoming the norm in larger deployments for wholesale colocation providers and internet service providers.
  • Modular system deployment. Modular electrical and mechanical systems are similar to the containerized data centers noted above but offer the flexibility to be used in both new deployments and existing sites. They improve development and construction because they are factory-built to the specific needs of the client and are typically skid-mounted for ease of transport, delivery, and installation. These plants can be designed and built to expand over time and offer quicker turnaround from approval to delivery than traditional site-built options. Space planning is made easier by having shop drawings to work from to confirm clearances, and where existing sites are being used, the fabrication process can often accommodate site-specific needs.

References

This article draws on information previously published in a variety of sources, most notably:
  • Schneider Electric — A Scalable, Reconfigurable, and Efficient Data Center Power Distribution Architecture (White Paper 129 Rev 1, 2011)
  • U.S. Dept. of Energy — Best Practices Guide for Energy-Efficient Data Center Design (2011)
  • ASHRAE — Thermal Guidelines for Data Processing Environments, 3rd edition (2012)
  • ASHRAE — Data Center Networking Equipment — Issues and Best Practices (2013)
  • TIA-942-A — Telecommunications Infrastructure Standard for Data Centers (March 2014)
  • Internap — Critical Design Elements for High-Power Density Data Centers Presentation (Retrieved April, 2015)