Simplifying Data Center Cabling Management

Every day, consumers and businesses around the world benefit from computerized, web-based services and transactions, thanks to data centers. If a company wants to be on top of business, then it must be on top of information technology. With the vast quantities of information created, processed, saved, and delivered each day, IT professionals know that piece-by-piece computer upgrades are not the solution to increasing demands. They must take into account how all of the components come together – the whole system, which is the data center.
Data centers house the computing, data storage, and networking equipment that make routine tasks faster, easier, and more accurate. In addition to the servers, storage disks, switches, and other devices that make up the data center, consideration must be given to the cabling that connects it all together and the grounding infrastructure that keeps electronic equipment running as intended. This data center review is designed to answer the following questions:

– What kinds of data centers exist today?

– How are data centers evolving?

– What strategies will enable the cabling infrastructure to serve the data center effectively for years to come?

– How should the data center be grounded to ensure safety and signal quality?


Data centers fall into two main categories, according to the computing needs of the businesses they serve: private (also called enterprise data centers) and public (also called Internet data centers). A private data center is one that is managed by the organization’s own IT department, and provides the applications, storage, web-hosting, and e-business functions needed to maintain full operations. If an organization prefers to outsource these IT functions, then it turns to a public data center. Public data centers provide services ranging from equipment colocation to managed web-hosting. Clients typically access their data and applications via the Internet.

To keep equipment running reliably, even under the worst circumstances, all data centers are built with the following carefully engineered support infrastructures:

Enterprise Data Center
Assumption College, Worcester MA

    • Power supply and backup
    • Cooling and environmental control
    • Fire and smoke systems
    • Physical security
    • Connectivity to outside networks
    • Network Operations Center (NOC)
    • Cabling
    • Grounding

The more “mission critical” the application is, the more redundancy, robustness, and security required. Data centers can be classified by Tiers, with Tier 1 being the most basic and inexpensive, and Tier 4 being the most robust and costly. According to definitions from the Uptime Institute and TIA-942 (Telecommunications Infrastructure Standard for Data Centers), a Tier 1 data center is not required to have redundant power and cooling infrastructures. It needs only a lock for security and can tolerate up to 28.8 hours of downtime per year. In contrast, a Tier 4 data center must have redundant systems for power and cooling, with multiple distribution paths that are active and fault tolerant. Furthermore, access should be controlled with biometric readers and single-person entryways, gaseous fire suppression is required, the cabling infrastructure should have a redundant backbone, and the facility can permit no more than 0.4 hours of downtime per year.
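The downtime allowances above translate directly into the availability percentages often quoted for these tiers. A quick sketch of the arithmetic (the 28.8 and 0.4 hour figures come from the tier definitions above; the helper function is illustrative, not from the standard):

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours in a non-leap year

def availability_pct(downtime_hours: float) -> float:
    """Percentage of the year a facility is up, given its allowed downtime."""
    return 100.0 * (1.0 - downtime_hours / HOURS_PER_YEAR)

# Tier 1 permits up to 28.8 hours of downtime per year; Tier 4 permits 0.4 hours.
print(f"Tier 1: {availability_pct(28.8):.3f}%")  # 99.671%
print(f"Tier 4: {availability_pct(0.4):.3f}%")   # 99.995%
```

The jump from Tier 1 to Tier 4 looks small as a percentage, but it is a 72-fold reduction in permitted downtime, which is why the redundant power, cooling, and cabling described above are required.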

Tier 1 or 2 is usually sufficient for enterprise data centers that primarily serve users within a corporation. Financial data centers are typically Tier 3 or 4 because they are critical to economic stability and, therefore, must meet higher standards set by federal regulatory bodies. Public data centers that provide disaster recovery / backup services are also built to higher standards.

Data center models have evolved significantly over the last 30 years. In the past, a data center may have consisted of a large mainframe tucked away in an organization, running software applications specific to some portion of the business. Later models had groups of servers spread out geographically, each one managed individually and serving its own location, having limited exchange with other servers.

The growth of the Internet has already made information technology more a part of our daily lives, and organizations continue to gear up their data centers to meet ever-increasing demands. Today, most organizations have a primary data center that serves as the central location for critical computing equipment. This is made possible by increased compatibility among equipment and advances in site-to-site networking. Now, remote business sites can easily access the main data center through sophisticated networks linked by routers and switches. Consider the following trends:

 – Debit and credit cards now account for 52% of in-store transactions, versus 47% for cash and checks, as reported in InformationWeek. Credit card payment processors are being upgraded to handle over 5000 transactions per second. (1) 

 – It is conservatively estimated that 60% of consumers now make automated bill payments. (1) Not only do banks have to add computing capacity to handle these transactions, but they must also respond to government mandates for higher standards in information technology. Because our economy depends so heavily on financial institutions, government initiatives including the Gramm-Leach-Bliley Financial Services Modernization Act of 1999 and the more recent Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System are requiring them to take measures to ensure information security and business continuity. It is now common to find data center doors controlled by not just badge readers, but biometric scanners. And a single robust data center is no longer enough: financial institutions must have redundant coverage, which often means a duplicate “back-up data center” is built miles from the main data center. Many organizations are turning to third-party data center services for disaster recovery and outsourcing of specific business processes.

Health Care
 – In the past two years, the exchange of payment-related health data has accelerated significantly, in large part because of the federal Health Insurance Portability and Accountability Act (HIPAA). The shift from paper to electronic transactions is expected to reduce costs while maintaining patient privacy. With e-payment technology coming into place, many providers are looking for ways to handle care-related information electronically, as well. (2) 

 – Small, medium, and large enterprises depend on e-business applications for data management, speed, and customer interface. The list of applications continues to grow: e-mail, intranet and extranet, customer relationship management (CRM), supply chain management (SCM), enterprise resource planning (ERP), content management, payment services, wireless messaging, etc. Even through the economic downturn, companies continued to invest in IT. With the attitude of “constant improvement,” data center managers forge ahead with plans for consolidation of resources, implementation of Storage Area Networks (SANs), and infrastructure upgrades that tie directly to a return on investment.

The data center model continues to evolve, enabling greater efficiency, robustness, and flexibility. Some would place storage and SAN technology at the strategic center of tomorrow’s data center, while others are focusing on grid computing to better utilize processing power. A SAN is a high-speed, highly scalable, centralized storage network that is separate from but connected to the main LAN/WAN. Grid computing is the application of resources from many computers to a single problem at the same time, a form of network-distributed parallel processing. With the increased convergence of voice and data, we see data centers borrowing networking technologies from central offices, and central offices adopting processing and storage technology from data centers.

Because of our increased dependence on data, public and financial data centers are under pressure to back themselves up with disaster recovery sites and business continuity planning. Many public data centers now treat services as a utility, billing customers according to how much bandwidth, processing power, or storage is used. On the cutting edge of technology is autonomic computing, which will give the data center more capabilities to manage itself.


A good data center is invisible to the end user. The end user only sees the services or transactions that take place: the debit card purchase is approved, the bill is paid automatically, the insurance company pays the healthcare provider, or the catalog web page appears online. Behind the scenes is a team of IT professionals managing the data center and its equipment.

Likewise, in the data center, a good cabling infrastructure should be “invisible” to the IT professional. It may be physically visible, but once it is installed, the IT professional rarely stops to think about it because it is functioning as intended: transmitting data at optimum speeds, keeping cables organized and protected, identifying cables and ports, and allowing easy MACs (moves, adds, and changes). A reliable grounding infrastructure is necessary for safety and signal quality, but it too is at its best when it is just doing its job. When the cabling and grounding infrastructures are functioning properly, the IT professional has more time for proactive tasks like analysis, optimization, planning, and upgrading.

Today’s data centers house tens to hundreds of active devices: servers, mainframe and midrange computers, storage disks, tape backup, firewalls, network monitors, KVM switches, load balancers, network switches, routers, and transport equipment. This level of diversity requires a well-designed structure for cabling all of these devices.

In April 2005 the Telecommunications Industry Association published TIA-942, “Telecommunications Infrastructure Standard for Data Centers,” in order to promote a common methodology for structured cabling. PANDUIT was an active participant in the committee developing this standard. The generic cabling topology specified in TIA-942 organizes the data center into a logical, manageable network, as shown below.
By definition, the data center includes the Computer Room, Entrance Room, Telecom Room, and Office/NOC/Support Rooms. At the heart of the data center is the Main Distribution Area (MDA), which uses switching and patch panels to connect the internal users, external users, and end equipment. The internal users are represented as “Offices, Operations Center, Support Rooms.” They are the office personnel (if it is a corporate data center) and the IT staff who maintain the network via the Network Operations Center (or NOC). Internal users are switched and cabled to the data center through the Telecom Room. The external users may be in another building or in another part of the world. They connect via campus or carrier networking equipment and cables in the Entrance Room.
The end equipment comprises the servers, mainframes, tape drives, and other devices that provide the business functions and information storage mentioned earlier. End equipment is located in the Equipment Distribution Areas. Large data centers will subdivide the Equipment Distribution Area into manageable subgroups, with the MDA connecting to each subgroup through a Horizontal Distribution Area (HDA), as illustrated.

There are endless variations on the topology shown. For example, a small corporate data center may eliminate the HDAs and combine the Entrance Room, MDA, and Telecom Room into one area of the Computer Room. Alternatively, a large Internet data center may require dual Entrance Rooms for redundancy or to accommodate a diversity of carriers. It may also place the MDA and HDAs in separate rooms or cages for increased security.

With its vast experience in providing cable management solutions and a full offering of products for copper, fiber, and grounding, PANDUIT understands what it takes to design cabling and grounding infrastructures to optimally serve all kinds of data centers.

Generic Data Center Topology, per TIA-942 

Data centers are as varied as the organizations that run them. They differ by types and tiers, as well as by number of carriers, amount of storage, building construction, type of floor (raised or solid), available budget, and numerous other details. Regardless of these factors, every data center must have a cabling infrastructure that addresses some common concerns: how to manage a large number of cables in a changing environment, how to select cabling with sufficient bandwidth, and how to route cables efficiently throughout the facility. Because data centers are constantly growing and evolving, they share a common need for a high density, flexible cabling infrastructure.

Cable Management 

Virtually every device in the data center must be cabled to another device, often with multiple cables to provide redundancy. A data center for a medium-sized corporation may have 50 to 100 servers, while a data center for a large financial institution may have several hundred, so cable counts are extremely high – especially in the main and horizontal distribution areas. To manage large numbers of cables, PANDUIT recommends the following.

Start with sturdy frames that are designed to support large network devices and the associated cross-connect panels. Add copper patch panels and fiber enclosures according to projected port counts, plus cable managers for horizontal and vertical routing.

Utilize horizontal and vertical cable managers. They protect cables from damage, keep them from blocking equipment interfaces and cooling fans, provide bend radius control, create a neat appearance, and improve routing and traceability. Horizontal cable managers such as PANDUIT® NetManager™ and Active Equipment Cable Manager prevent sags and guide cables from ports to vertical channels on racks and in cabinets. For vertical cable management, PatchRunner provides a spacious vertical channel for looping excess cable. The dual hinged door creates a sleek appearance while allowing easy access to cables. Select cable managers according to PANDUIT capacity charts, which show how many cables will fit based on the 40% fill capacity rule. For space-restricted equipment cabinets, PANDUIT offers a variety of cable management options including PAN-POST™ Standoffs, Bundle Retainers, Vertical D-rings, IN-Cabinet Vertical Cable Managers, and Tie Mounts.
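The 40% fill capacity rule itself is simple geometry: usable cable area is limited to 40% of the channel’s cross-section so cables can be dressed and removed without crushing. The sketch below shows that arithmetic; the channel and cable dimensions are illustrative assumptions, not values from PANDUIT’s published charts.

```python
import math

def cable_capacity(channel_width_in: float, channel_depth_in: float,
                   cable_od_in: float, fill: float = 0.40) -> int:
    """Estimate how many round cables fit in a rectangular channel
    at a given fill ratio (0.40 reflects the 40% fill rule cited above)."""
    channel_area = channel_width_in * channel_depth_in
    cable_area = math.pi * (cable_od_in / 2.0) ** 2
    return math.floor(fill * channel_area / cable_area)

# Illustrative: a 4 in. x 5 in. vertical channel with 0.25 in. OD Category 6 cable
print(cable_capacity(4.0, 5.0, 0.25))  # 162 cables
```

Actual capacity charts account for cable jacket compressibility and bundling, so always defer to the manufacturer’s figures; this sketch only shows why a seemingly large channel fills up quickly.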

Eliminate sharp bends. PANDUIT cable management products are designed with bend radius control to help prevent kinks and sharp corners that can impede cable performance and cause long-term damage.

Provide visual identification. Implement a color-coding scheme for cables and ports so that they can be quickly identified, even from a distance. Due to the large number of cables and hardware managed in the data center environment, it is imperative that cables and ports be clearly labeled with durable products, as explained in the Labeling Compliance Brochure. PANDUIT self-laminating cable labels, patch panel labels, port identifiers, labeling software, and printers are ideal for the data center.

Bundle common cables together. Use TAK-TY™ and PAN-TY™ cable ties to secure cables to supporting fixtures on ladder rack and cable tray.

Use cabling products that are compatible with networking equipment. PANDUIT has engaged in strategic alliances with Cisco Systems* and APC to address network, power, and cabling infrastructure considerations surrounding converged technologies. This means that PANDUIT rack systems and cable management products are designed specifically to function in a best-in-class manner, fully interoperable with our partners’ equipment.

To meet current and future demands for bandwidth in the data center, TIA-942 recommends that horizontal cabling be Category 6 or 6A, if not fiber. PANDUIT copper solutions have been installed trouble-free in mission critical facilities including major financial data centers and Novell’s Global Network Operating Center. PANDUIT Category 6 patch cords and connectors can support data rates in excess of 1 Gb/s. The PANDUIT white paper “10 GbE” gives a realistic engineering explanation of the current technologies available for sending higher data rates over copper cabling.

To reduce congestion on the main network and improve allocation of storage devices, many data centers save data in storage area networks (SANs). Disk space is no longer directly attached to each computing device, but access to stored data is still fast because SANs are designed for high-speed transmission (e.g., 1 to 4 gigabits per second, even up to 10 Gb/s). Such high speeds demand a state-of-the-art fiber cable infrastructure. Standard multimode fibers contain physical defects that cause errors in high bit rate transmission, but PANDUIT Opti-Core™ 10Gig 50/125 micron fiber is laser optimized for low bit error rate (BER). Expert in-house test methods, including eye diagrams and BER measurements, verify that Opti-Core™ patch cord and pigtail fibers will transmit 10 Gb/s over 300 meters, in accordance with IEEE 802.3ae 10 GbE standards.
* Cisco Systems is a registered trademark of Cisco Technology, Inc.
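The BER objective behind these tests puts the quality bar in perspective: IEEE 802.3ae targets a bit error rate of 10^-12, which at 10 Gb/s works out to only one errored bit every 100 seconds on average. A quick check of that arithmetic (the helper function is our own illustration):

```python
def mean_time_between_errors_s(bit_rate_bps: float, ber: float) -> float:
    """Average seconds between bit errors at a given line rate and BER.
    Errors per second = rate * BER; the mean interval is its reciprocal."""
    return 1.0 / (bit_rate_bps * ber)

# IEEE 802.3ae targets a BER of 1e-12 for 10 Gigabit Ethernet links.
print(mean_time_between_errors_s(10e9, 1e-12))  # ~100 seconds per bit error
```

A marginal fiber plant that degrades the BER by even a few orders of magnitude turns that into many errors per second, which is why laser-optimized fiber and verified patch cords matter at these rates.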

What is the best way to route cables throughout the data center? Cables can be routed under the raised floor or overhead, depending on factors such as available space, security, and the desired aesthetics. When cables go underfloor, TIA-942 recommends routing them in cable trays (e.g., wire mesh basket), up off the slab. The raised floor is commonly used as an air plenum, so local codes may require the cables and bundling straps to be plenum rated. Halar cable ties and plenum-rated TAK-TY™ cable ties (HLSP and HLTP) are ideal for this purpose. When cables are routed overhead, the preferred pathways are ladder rack and FiberRunner™. Both are sturdy enough for large cable counts, and they can be hung from the ceiling structure or mounted on top of racks and cabinets. Multiple widths are available (up to 36″ for ladder rack, 2 to 12″ for FiberRunner™).

Ladder rack. With rungs spaced 9 to 12 inches apart depending on the style (9″ is typical for telco style; 12″ is typical for other styles), ladder rack provides support for copper cables, spaces for drops, and a means for securing cables with cable ties.

PANDUIT recommends the use of Stackable Cable Rack Spacers to separate and support cables. These spacers attach to the rungs and prevent pinch points, as the bottom cable layers no longer bear the weight of the upper layers. When routing cables off the ladder rack and down to the equipment, PANDUIT waterfall accessories should be used to provide bend radius control.

CIENA, San Jose CA

Generally, fiber cables should not be laid directly on ladder rack. Over time the cables will sag between the rungs, and bends can occur that affect cable performance. FiberRunner™ provides a solid-bottom channel for routing fiber cables and features integral 2″ bend radius control at every junction (turn, tee, cross, spillout, etc.). Cables are accessible for future changes, but they are also protected on all sides when the hinged covers are closed. Innovative QUIKLOCK™ couplers make FiberRunner™ fast and easy to install: a mechanically secure connection is completed in under 5 seconds without tools. Because FiberRunner™ protects and controls bend radius over 100 percent of the cable path, many data centers are using it both for fiber and for high-performance copper cabling.

To see how PANDUIT FiberRunner™ and ladder rack accessories were used for the high-density fiber installation of CIENA’s data center environment, refer to the CIENA Case Study.

High Density 
Data center floor space is very costly and must be utilized effectively to keep costs down. When a data center is limited by floor space, high density is key to the cable management strategy. PANDUIT has specially designed high-density products to help space-limited facilities fit more copper and/or fiber connections into a smaller area.

 – For all copper terminations, PANDUIT Mini-Com™ High Density patch panels make it possible to fit 48 ports into a single rack space. High Density Angled Patch Panels provide even greater space savings because they do not require horizontal cable managers, which means they consume fewer rack units.

 – The enhanced Opti-Com™ HD Fiber Distribution System is designed to handle as many as 1,584 SC or LC fiber optic connections per frame, using 1.6 mm cable. Patch cords made with small form factor connectors and cables (e.g., 1.6 mm jacketed, ribbon distribution cable) take up less space on the patch panel and in the routing duct.

PANDUIT FiberRunner™ and Opti-Com™ HD Cable Management Rack System

Fast, Accurate MACs

Certain environments require careful planning so that MACs can be made quickly and accurately:

    • Hosting facilities that promise to give customers a network connection in a specified number of hours
    • Corporate IT departments in fast-changing, cost-restricted environments where every MAC has a dollar value
    • Financial data centers that cannot tolerate the downtime associated with errors

The PANVIEW™ Solution for patch field management can be used to monitor the network’s physical layer, and uses LEDs to guide technicians through planned patch cord moves. In addition, the following strategies make MACs easier:
Pre-install the horizontal and backbone cables instead of running new cables throughout the facility each time a change occurs. Terminate them on cross-connect panels, and then rely on patch cords when it is time to make changes. Patch panels with built-in intelligence simplify MACs and verify that they are completed correctly.

Terminate the horizontal cables on equipment distribution area hardware (for example, in zone boxes in the floor or patch panels in the equipment cabinets). When new equipment is added, it is simple to run a patch cord from the termination hardware to the device. Stock a few key lengths of patch cords (e.g., 14 feet, 20 feet) or use field-installable pre-polished fiber connectors to reduce cable slack.
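The “stock a few key lengths” advice reduces to a simple rule: use the shortest stocked cord that covers the run, and fall back to field termination when none fits. A sketch (the 14 ft and 20 ft lengths come from the text above; the function and its name are our own illustration):

```python
def pick_patch_cord(required_ft: float, stocked_ft=(14, 20)):
    """Return the shortest stocked cord length that covers the run,
    or None when the run should be field-terminated to exact length."""
    candidates = [length for length in sorted(stocked_ft) if length >= required_ft]
    return candidates[0] if candidates else None

print(pick_patch_cord(12))  # 14 -> 2 ft of slack to dress into the cable manager
print(pick_patch_cord(25))  # None -> field-terminate instead of coiling excess
```

Keeping the stocked set small limits both inventory and the slack that must be looped in vertical managers.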

Always label the cabling infrastructure. With PANDUIT labels and printers, it is easy to apply clear, durable identification to all cables, racks, panels, and ports. Labeling the data center according to TIA-606-A allows MACs, troubleshooting, and repairs to be accomplished faster and more efficiently.
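In practice, consistent labels are usually generated rather than composed by hand. The sketch below builds port identifiers from a floor-grid rack location, a panel letter, and a port number; this particular scheme is illustrative only, since the normative identifier formats are defined in TIA-606-A, not here.

```python
def port_label(grid_location: str, panel: str, port: int) -> str:
    """Compose a hypothetical port identifier from a floor-grid rack location,
    panel letter, and port number (illustrative; see TIA-606-A for formats)."""
    return f"{grid_location.upper()}-{panel.upper()}{port:02d}"

# 24 labels for a hypothetical panel 'A' in the rack at grid square AJ05:
labels = [port_label("AJ05", "A", p) for p in range(1, 25)]
print(labels[0], labels[-1])  # AJ05-A01 AJ05-A24
```

Generating the whole panel’s labels at once, then printing them on durable self-laminating stock, keeps the identifiers consistent across cables, panels, and documentation.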

Proper grounding and bonding is essential for efficient data center performance. In order to ensure the safe and proper functioning of data center infrastructure equipment, careful thought must be given to designing, installing, and maintaining a quality grounding and bonding system. The grounding system should be viewed as an active functioning system that provides a low resistance, visually verifiable grounding path to maximize uptime, maintain system performance and protect network equipment and personnel.

Data center grounding is governed by the following documents:

    • TIA-942, Telecommunications Infrastructure Standard for Data Centers. TIA-942 defines practical methods to ensure electrical continuity throughout the rack materials and proper grounding of racks and rack-mounted equipment. The grounding section of TIA-942 was written specifically to address the needs of data centers.
    • J-STD-607-A-2002, Commercial Building Grounding (Earthing) and Bonding Requirements for Telecommunications. This standard focuses on grounding for telecommunications. It defines a system that begins at the entrance facility, in the telecommunications main grounding busbar (the TMGB), and ends at the local telecommunications grounding busbars (TGBs) located in the telecommunications rooms.
    • IEEE Std 1100 (IEEE Emerald Book), IEEE Recommended Practice for Powering and Grounding Electronic Equipment. IEEE provides further detail on how to design the grounding structure for a computer room environment through a Mesh Common Bonding Network (MCBN). The MCBN is the set of metallic components that are intentionally or incidentally interconnected to provide the principal means for effecting bonding and grounding inside a data center. These components include structural steel, reinforcing rods, metallic plumbing, AC power conduit, cable racks, and bonding conductors. The CBN is connected to earth ground via the exterior grounding electrode system.

Characteristics of the Data Center Grounding System
The purpose of the grounding and bonding system is to create a low-impedance path to route stray electrical currents away from sensitive equipment and send them to ground. Lightning, fault currents, circuit switching (motors turning on and off), and electrostatic discharge are the common causes of these surges and transient voltages. Fault currents tend to return to their sources: to the earth for lightning, and to the power source (AC panel) for power surge events. According to the standards listed above, a properly designed grounding system has the following characteristics:

    • Is intentional: each connection must be engineered properly; the grounding system is only as reliable as its weakest link
    • Is visually verifiable
    • Is adequately sized to handle fault currents
    • Directs damaging currents away from equipment
    • Bonds all metallic components in the data center (e.g., equipment, racks, cabinets, ladder racks, enclosures, cable trays) to the grounding system

Along with these characteristics, all grounding and bonding components should be listed with a nationally recognized test lab such as UL, and local electrical codes must be followed. To ensure long-term integrity of the grounding system, always use compression connectors rather than mechanical ones. A compression connection is permanent and does not come loose with vibration, whereas mechanical connectors can loosen when exposed to vibration (e.g., from nearby fans or humming equipment); loose connections have high resistance and can fail in a surge event.
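Ohm’s law makes the case against high-resistance connections concrete: the voltage that appears across a bonding joint during a surge is the fault current times the joint’s resistance. The current and resistance values below are illustrative assumptions, not measured data.

```python
def voltage_rise_v(fault_current_a: float, resistance_ohm: float) -> float:
    """V = I * R: the potential difference that appears across a bonding
    connection while it carries a surge or fault current."""
    return fault_current_a * resistance_ohm

# A 1000 A surge through a sound compression joint (~1 milliohm) versus a
# mechanical joint that has vibrated loose to ~0.5 ohm:
print(voltage_rise_v(1000, 0.001))  # about 1 V: equipment stays near ground
print(voltage_rise_v(1000, 0.5))    # about 500 V across the "ground" path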
Because data center racks and cabinets typically are painted and bolted together, electrical continuity throughout the rack is not assured. Paint piercing components such as washers and screws should be used to provide continuity between frame members and housed equipment. Paint piercing hardware eliminates the need to manually remove paint, thus saving labor costs.

Typical Data Center Grounding Topology

A Quality Grounding Infrastructure
Data centers depend on a quality grounding infrastructure for safety and signal quality. Grounding components must make robust, low-resistance connections, and they must be easy to install and convenient to check during yearly inspections. From the rack to the MCBN to the busbar, the PANDUIT StructuredGround™ Grounding System is designed with these qualities in mind. Rack and cabinet grounding kits combine the jumpers, grounding strips, ESD ports, lugs, HTAPs, and all necessary installation hardware in one convenient package. Copper compression lugs have inspection windows and are tested by Telcordia to meet NEBS Level 3 performance. For more information on Telcordia and NEBS Level 3, read the article “Minimize Service Interruptions.” HTAPs and Clear Covers are carrier class, so they can be used on the MCBN or in the entrance room with DC-powered equipment. Compression tools, including hydraulic battery-powered tools, hydraulic manual tools, and manual mechanical tools, are easy to use and produce straight, clean crimps. Grounding busbars can be identified with PANDUIT moisture- and heat-resistant labels as shown in the TIA-606-A Labeling Compliance Brochure.

The data center is the behind-the-scenes resource that makes it possible for critical business functions to be carried out. Just as the user counts on the data center to conduct day-to-day business operations, the data center manager counts on the cabling infrastructure to support growing and evolving data center requirements. By following sound design practices and installing PANDUIT cabling solutions, your cabling and grounding infrastructure will function as intended – providing clear signal paths, orderly management, protection from damage, and change-friendly interfaces.
For more information or to receive a copy of the Network Connectivity Solutions Catalog (#SA-NCCB34) call 800-777-3300 or e-mail

1. Marlin, Steven. “Who Needs Cash?” Information Week. December 22, 2003.
2. Kolbasuk McGee, Marianne. “Collaborate and Conquer.” Healthcare Enterprise. November 3, 2003.