Cabling Planner Issue 11


CABLINGPLANNER MAKING THE RIGHT CONNECTIONS

2012

ISSUE 011

A special supplement with

SPONSORED BY

CABLING THE CLOUD




Publisher: Dominic De Sousa
Group COO: Nadeem Hood
Managing Director: Richard Judd / richard@cpidubai.com / +971 4 440 9126
Commercial Director: Rajashree R Kumar / raj@cpidubai.com / +971 4 440 9131

EDITORIAL

CABLING THE CLOUD

Cloud computing changes everything, even the physical layer of the network. As more and more enterprises move to hybrid and private cloud implementations, it is important to have a dynamic network that can scale elastically. The move to cloud also places unprecedented demands on the network cabling infrastructure. The enterprise network is becoming much more critical to high-priority business initiatives, and a perfect storm of mobility, virtualisation and cloud computing is forcing IT managers to re-think the way they design networks, especially the underlying cabling infrastructure.

Contrary to popular perception, cloud doesn't eliminate the need for robust physical infrastructure design; in fact, that design is a critical success factor. It is imperative for data centre managers to design their physical infrastructure to maximise cloud benefits. According to a recent white paper from Panduit, one of the key issues to consider when moving to a private cloud environment is increased bandwidth requirements. Many companies experience a large increase in I/O per server as they adopt virtualisation. Panduit says virtualisation leads to an increase in bandwidth and in the number of cables connected to each server. How will you manage up to three times more cables in each cabinet and in your pathways? And what will be the impact on power and cooling?

The advent of cloud makes the case for thorough physical infrastructure planning that will pay off in lower costs and fewer outages. Now is the time to plan ahead and prepare a cabling infrastructure deployment that minimises unplanned outages and management costs while enabling faster provisioning.

Group Editor: Jeevan Thankappan / jeevan@cpidubai.com / +971 4 440 9109

MARKETING AND CIRCULATION
Database and Circulation Manager: Rajeesh M / rajeesh@cpidubai.com / +971 4 440 9147

PRODUCTION AND DESIGN
Production Manager: James P Tharian / james@cpidubai.com / +971 4 440 9146
Design Director: Ruth Sheehy / ruth@cpidubai.com / +971 4 440 9112
Graphic Designer: Analou Balbero / analou@cpidubai.com / +971 4 440 9104

DIGITAL
www.networkworldme.com
Digital Services Manager: Tristan Troy Maagma
Web Developers: Jerus King Bation, Erik Briones, Jefferson de Joya
Social Media & SEO Co-ordinator: Jay Colina
online@cpidubai.com / +971 4 440 9100

Published by

1013 Centre Road, New Castle County, Wilmington, Delaware, USA

Head Office: PO Box 13700, Dubai, UAE
Tel: +971 4 440 9100
Fax: +971 4 447 2409

Printed by United Printing Press

Regional partner of

© Copyright 2011 CPI. All rights reserved. While the publishers have made every effort to ensure the accuracy of all information in this magazine, they will not be held responsible for any errors therein.


FEATURE CLOUD

CLOUD NETWORKING
The cloud is revolutionising networking, and this overhaul presents enormous challenges for IT managers who are used to being able to see, monitor and control their networks and systems.



Network and systems management software has been heading in this general direction for years and is better positioned than you might think to take this next step, but there are other areas that require more work by the industry. A case in point is the underlying cabling infrastructure. The key question now facing the industry is: what will be the impact of cloud and mobility on network design, and on cabling in particular?

"It is highly likely that any large cloud computing infrastructure will be based on a high degree of virtualisation. This concept of virtual machines not being tied to physical servers allows for the flexible scaling required in the cloud paradigm. Along with virtualisation comes the requirement for a flexible, low-latency network architecture. This is leading many data centres to move to a two-layer design instead of the traditional three-layer core/distribution/access tiers that have characterised data centres for decades. The primary reason for this is to allow for very low latency between physical servers and their corresponding virtual server instances," says Ciaran Forde, VP-Enterprise, Middle East & Africa, CommScope.

Alberto Zucchinali, Data Centre Solutions and Services Manager for Siemon, adds another perspective: "Cloud computing, distributing services among a number of different data centres, requires careful management and advanced network upgrade scheduling by the cloud providers. Cloud providers should be able to detail to their customers a list of suppliers, typical design configurations in their facilities, and what their maintenance and monitoring procedures are throughout the facilities. If a cloud provider is using outsourced space, then this same information from their provider should be passed on. It is important to ascertain whether or not a provider is operating via industry standard-compliant infrastructures (defined as cabling, networking, servers and software)."

He says bandwidth upgrade plans should also be part of the evaluation. Some cloud providers are already designed to accommodate 40/100G Ethernet in the backbone and 10G Ethernet in the horizontal, which means there is less likelihood of downtime or reliance on other sites during upgrades.

Samuel Huber, Product Manager at Molex Premise Networks, says cloud and mobility are accelerating the trend towards the dynamic network. "Mobility implies an ever-changing set of endpoints served by the network. Services and applications over a given segment of the network will change as mobile users migrate across different network segments. The demand on network design is that usage peaks and valleys on each segment will be less predictable as mobility increases. Cloud computing presents similar challenges at the other end of the network. The concept of a well-defined network core with large bandwidth pipes, designed to support servers hosting applications in a physically contiguous and static configuration, is replaced with a combination of on-site and off-site applications that requires a significantly different allocation of bandwidth. The increasing trend of virtualisation, and the mobility that virtualisation provides, changes the old network model from one-to-many to any-to-any."

Industry experts say the emergence of these advanced technologies is forcing IT managers to re-cable their data centres to meet the demands of higher throughput.



"With increasing server I/O demands, there is a corresponding requirement for ever-higher uplink capacity from the access switches delivering this increasing capacity to the core switching platforms. This will drive the adoption of 40 and 100 Gbps Ethernet ports in the data centre environment. Initially, 40 Gbps uplinks were used to support 10 Gbps access capacity to hosts, but eventually 40 Gbps to the host, with corresponding 100 Gbps uplinks to the central switching/routing fabric, will be used," says Forde.

Shibu Vahid, Head of Technical Operations, R&M Middle East & Africa, agrees: "It is evident that the demand for higher throughput requires higher-performing cabling to support the data centre infrastructure. This is why the recommendation for copper horizontal cabling in a data centre is moving from Cat 6 to Cat 6A in current standards revisions. In addition, fibre recommendations are moving to OM4 multimode fibre for most fibre applications. When considering Cat 6A and OM4 cabling, it is also highly recommended that re-cabling be done with pre-terminated cabling infrastructures. Pre-terminated fibre optic links provide a great deal of flexibility in migrating to 40 and 100 Gbps."

Zucchinali recommends a good balance between copper and fibre to save capex and opex in the short, medium and long term. 10GBASE-T network equipment offers greater reach and flexibility than any other 10 Gbps copper solution and is a very attractive alternative to 10 Gbps optical fibre when deployed channel lengths are less than 100 metres. "The adoption of an MTP-based fibre optic infrastructure for top-speed links allows for easy and flexible network management. The use of plug-and-play systems can provide customers with a number of benefits, including high density, scalability and up to 75 percent faster deployment than field-terminated fibre connections," he adds.
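To make that trade-off concrete, here is a minimal sketch of our own (not from the article) that applies the 100-metre 10GBASE-T figure quoted above as a per-link media check; the function and constant names are illustrative assumptions.

```python
# Minimal sketch (not from the article): applying the 100 m 10GBASE-T
# reach cited above to choose a medium for each 10 Gbps channel.
TEN_GBASE_T_MAX_CHANNEL_M = 100  # copper reach, per the figure in the text

def pick_media(channel_length_m: float) -> str:
    """Suggest a 10 Gbps medium for a given channel length."""
    if channel_length_m <= TEN_GBASE_T_MAX_CHANNEL_M:
        return "Cat 6A copper (10GBASE-T)"
    return "multimode fibre (OM3/OM4)"

for length_m in (30, 90, 150):
    print(f"{length_m} m -> {pick_media(length_m)}")
```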



Huber from Molex says that, in general, the network will need to be designed to be more robust to accommodate changing bandwidth requirements. "Cloud and virtualisation effectively distribute applications and data sources across multiple cores, while mobility promotes an ever-changing mix of endpoints. Networks will need to be flattened to accommodate the any-to-any model. Convergence should also be factored into building design. The consolidation of data, phone, building automation and security onto the Ethernet platform will increase the demand on structured cabling."

The fibre conundrum

With the advent of high-speed Ethernet, the choice IT managers have to make is whether to use OM3 or OM4 fibre for their optical networks in the data centre. "Both are optimal for 40G or 100G, but the major difference is the maximum span distance. In a 10GbE network, OM3 fibre can span up to 300m, while OM4 supports even longer channels. The IEEE 802.3ba standard states that in 40GbE or 100GbE environments, OM3 can be used up to 100m and OM4 up to 150m. As data centre applications continue to require ever faster Ethernet speeds, the distance between network infrastructure components and the quality of data transmission become challenging concerns. In the future, extended distance capabilities over and above the standard will prove useful in addressing real-world topologies in many data centres at a far lower cost than singlemode alternatives," says Forde.

CommScope advocates the use of pre-terminated fibre infrastructure for those looking to move to OM4, because 40 and 100 Gbps will primarily be delivered across parallel fibre optic links, which are characteristic of pre-terminated fibre solutions. In addition to proper Multi-fibre Push On (MPO) based connectorisation, pre-terminated fibre dramatically reduces installation time while allowing for easy and rapid re-deployment, according to the company.



Zucchinali echoes a similar opinion: "As next generation applications such as 40/100G can reach 100m on OM3 and 150m on OM4 respectively, in bigger data centres with longer connection links it could be worth deploying OM4, while in smaller environments OM3 could well support all future upgrades, with some cost savings."

Huber says consideration should also be given to the expected life of the cable plant when choosing between OM3 and OM4: "400G is already being discussed, and adoption of OM4 will reduce the chance of having to replace cabling if and when the user adopts 400G."
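Taken together, the span figures quoted above reduce to a simple lookup. The sketch below is illustrative only, encoding the quoted limits rather than any vendor tool; the article quotes no 10GbE figure for OM4, so that entry is deliberately omitted.

```python
# Illustrative only: the span limits quoted above as a lookup table.
MAX_SPAN_M = {
    ("10GbE", "OM3"): 300,      # per Forde
    ("40/100GbE", "OM3"): 100,  # per IEEE 802.3ba, as quoted
    ("40/100GbE", "OM4"): 150,
}

def span_supported(speed: str, grade: str, span_m: float) -> bool:
    """True if the quoted standard distance covers this link."""
    limit = MAX_SPAN_M.get((speed, grade))
    return limit is not None and span_m <= limit

print(span_supported("40/100GbE", "OM3", 120))  # False: beyond OM3's 100 m
print(span_supported("40/100GbE", "OM4", 120))  # True: within OM4's 150 m
```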

Intelligent cabling in the spotlight

The whole discussion about re-cabling the data centre has also revived the dormant topic of intelligent cabling systems. "As networks have evolved in volume, application, sophistication and criticality, the smart or 'intelligent' way to combine monitoring with the physical (cabling) layer is to use an Intelligent Infrastructure Management (IIM) system: a real-time system that holds and tracks all the information about what is connected to what.

"Originally this was a technology reserved for large corporate networks, data centres and critical network infrastructures, but now, with the cost differential falling, IT managers of any large-port-count network, or any network requiring a higher level of security, should consider this technology. It is anticipated that one day all networks will feature this level of intelligence and control as standard," says Forde.

Huber from Molex points out that the value of an investment in network cabling infrastructure will erode over time if there is not adequate, up-to-date documentation with which to manage it. "In essence, you can't repair a network cabling problem if you can't find it. Everything in a data centre should be documented; this is important not only for connectivity but for asset management. The benefit of a software-managed intelligent management system therefore becomes obvious, particularly for more complex installations."

He adds that investment in structured cabling in the Middle East region continues to be positive. Geographically, the Kingdom of Saudi Arabia and other parts of the Middle East are steadily expanding markets for Intelligent Infrastructure Management systems. "Major infrastructure projects in the Middle East region, such as airport expansions, hospitals and government institutions, make the deployment of an IIM system a sensible investment, as it liberates IT personnel from untangling physical network cabling issues, making moves, adds and changes much simpler and quicker, and leaving them to concentrate on higher priorities."

Zucchinali says an intelligent cabling solution can also maximise the utilisation of network assets, reduce downtime, improve response time and streamline work-order processes. It also improves security by tracking physical layer changes, such as unauthorised removal of equipment or connection of unapproved devices, telling you exactly when and where a breach occurred by sending real-time alerts.

Energy efficiency to the fore

With power and cooling emerging as one of the biggest issues in data centres, representing 30 to 50 percent of overall data centre budgets, it is time for IT managers to think about an optimal cabling design that facilitates efficient cooling.

"Cabling must be properly designed, remediated and routed to allow air to flow in an unobstructed manner. Many data centre standards around the globe suggest that horizontal and vertical cabling be run to accommodate growth, so that these areas do not need to be revisited," says Zucchinali.

There are several reasons for this recommendation: eliminating the adverse effects of removing floor tiles and decreasing static pressure under raised floors during MAC (moves, adds and changes) work; ensuring that pathways are run in a manner that leaves the flow of cold air in cold aisles unobstructed by cabling; and a potential cooling benefit, as the cabling can be installed to provide a baffle of sorts, channelling cool air into cold aisles.

He adds that unused channels often create air dams that obstruct air flow, which can result in higher energy consumption as cooling equipment works less efficiently. While that problem alone should be enough to commission the removal of abandoned cabling, there may also be issues with older cabling jackets not meeting current RoHS (Restriction of Hazardous Substances) requirements.



In many cases these older cables carry a significant fuel load, which can pose additional fire threats, and can release toxins such as halogens if ignited. Beyond the life and safety issues, the proper removal and disposal or recycling of abandoned cable removes a significant environmental risk. Although removing abandoned cable has a positive green impact, reducing the volume of potentially abandoned channels through proper management is an even better option.

Intelligent infrastructure management systems can provide a 'lights-out' advantage by allowing detailed monitoring of any MACs made. By providing a consistent and up-to-date diagram of the physical layer connections, channels can be managed and fully utilised before they become a management headache or a source of unchecked MAC work. While the ability to keep cabling channels in check will almost certainly reduce power consumption on the cooling side, intelligent infrastructure management can also reduce the power needs of the active network equipment. When designed with a central patching field, an intelligent infrastructure management system can help ensure that all switch ports are utilised, decreasing the power needs of the electronics by keeping unused ports to a minimum. The ability to patch into unused ports rather than adding switches provides energy savings, which in turn translate into further cooling savings.

"The adoption of latest-generation cabling reduces the amount of material that needs to be replaced, and avoids the negative impact on green ratings caused by wasted materials and additional site visits by contractors. A fully-shielded cabling system such as TERA will significantly reduce noise on the cabling channel, which can result in significant power savings in the active electronics by eliminating the Digital Signal Processing (DSP) complexity used to suppress noise," says Zucchinali.



Vahid agrees that proper design of a cabling infrastructure, using high-density solutions, pre-terminated bundled/composite cables and fibre optic MPO trunks, will greatly reduce the volume of cabling and potentially allow a larger air-handling space in the racks. "A well-chosen cooling principle and proper containment, together with application-specific racks, are important for thermal management in the data centre, and ultimately improve the PUE (Power Usage Effectiveness)."
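PUE itself is a simple ratio: total facility power divided by the power delivered to the IT equipment, with 1.0 as the theoretical ideal. A minimal worked example follows; the values are made up for illustration, not measurements from the article.

```python
# PUE = total facility power / IT equipment power (1.0 is the ideal).
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Example with made-up values: 1,500 kW total draw for 1,000 kW of IT load.
print(round(pue(1500, 1000), 2))  # 1.5, i.e. 0.5 kW of overhead per IT kW
```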

What is on the horizon?

Most of the major cabling vendors predict high growth in the data centre networking space next year. "This is due to IT services and applications increasing the demands on telecommunication and storage systems. Data centre networking has moved past the stage of simply connecting servers and switches together, as companies broaden their definition of the data centre from 'computer room' to 'strategic corporate asset'. Within the data centre we see the move towards 10G, 40G and 100G solutions," says Forde.

He adds that in the corporate enterprise there is a move towards 10G communications to support high-bandwidth applications. In addition, building owners are including in-building wireless capabilities through distributed antenna systems to future-proof their premises. There is an increased appetite for high-quality in-building coverage to support the upsurge in the use of tablets, smartphones and other wireless devices.

Siemon expects an upswing in the adoption of 10GBASE-T solutions in data centres in 2013. "This will probably not significantly change the ratio between copper and fibre infrastructures in data centres, as there will also be significant demand for 40/100 Gbps-ready upgrades in optical links. Generally speaking, and in spite of the global crisis, we can see a number of moves towards next generation infrastructures," says Zucchinali.

Huber sums up the trends: "The data centre/network world is following the phone carriers into a new and exciting period. Just as phone systems have evolved from a one-service, one-delivery-method operation into a wired/wireless hybrid network providing myriad services to customers, the data centre and network cabling is moving from a one-service, core-to-edge hierarchical structure to a multi-service distributed network."


Introducing OptiFiber® Pro
Built for the Enterprise with Smartphone User Interface

Industry's first datacenter fiber troubleshooting and certification tool
• Intuitive smartphone interface
• Shortest event and attenuation dead zones

Learn more at www.flukenetworks.com


INTERVIEW Jean-Pierre Labry

Committed to quality
The Swiss cabling vendor R&M plans to tap the fast-growing Middle East market by spreading its wings and investing more in the region. Jean-Pierre Labry, Executive Vice President of R&M Middle East and Africa, spoke to us about his company's value proposition and roadmap.

Can you tell us a bit about your operations in the Middle East?
Jean-Pierre Labry: Our HQ for the region is in the UAE, and we have offices in Saudi Arabia and a local presence in Oman, Qatar, Jordan and Egypt; from January we will have Turkey as well. Time to market is very important for us; I think it is the most important thing. And it is equally important to be close to the market, to understand the challenges of the market and customers. That's why we decided to invest in resources, not to reduce the number of people, as some other companies may have done.

This is one of the strengths of being a family business. When you have a family-run company, sometimes the vision is completely different. Even though there was a world recession, the vision of the family was to invest, become stronger and be number one in this part of the world. Our headcount has grown from 12 to a little more than 60 in a short span of time, and it's going to increase again. I think increasing our footprint is also important, as you cannot serve the whole region operating out of Dubai. You need to have local presence, but at the same time you need not only sales but also back-office support.

Where has the growth come from this year?
Jean-Pierre Labry: We don't have all our eggs in one basket, and that's the beauty of it. The first thing is that R&M has both copper and fibre in the portfolio. This by itself can give me some sustainable growth. Also, I am not focused on only one vertical market. If you are only into construction and located only in Dubai, you may have a tough time. We are present in all the vertical markets, including finance, healthcare, hospitality, education, transportation, oil and gas, and telecom. Thirdly, we have an end-user approach, which means we sell solutions, not boxes. And we are not located in only one place, but in every country in the region.

We are fortunate to have dedicated and passionate people. This, for me, is the key: you may have good products, but if you don't have the right people it's difficult. Team spirit is very important to me.

We have seen that users in developed markets tend to have the latest products, but here companies look for an in-between solution. I believe that has got to do with education. The problem here is that a lot of people are not aware of what is going on in the market, so that's why we decided to launch a training academy. Part of the academy's goal is to make sure that we can share our expertise and our innovations with the market.

The transition to Cat 6A in this market has been rather slow, because a lot of people still do not see the necessity of spending more for new infrastructure; they think they are okay with what they've got. The problem is that they forget how fast the technology is changing. As a manufacturer and vendor, it's important that we convince people that spending a little more money today will help them in the future, because they don't know what's coming. Data transmission speeds are rising so fast that it is becoming very difficult to keep pace if you don't invest in a future-proof solution today.

The debate about shielded vs. unshielded is still raging. Which side of the fence are you on?


Jean-Pierre Labry: If you look at Europe, it's a shielded market; here we are still an unshielded market. In terms of education, people are not very concerned and don't see why they should spend on a shielded solution. In Europe, where people are more concerned about the technology, they use shielded because they see the advantages. So I would say we have a good portion of customers here that are interested in going with shielded. We offer both, and I would say that the UTP business is still bigger than the shielded one for us in this region.

What are the advantages of investing in a shielded solution?
Jean-Pierre Labry: Technology-wise, there is better performance. It depends on the type of installation you want and the location. At the same time, if you start using shielded, you have to make sure that you use it from A to Z. The speed of a network is always the speed of the weakest link. You may have an extremely good computer, but if your network is not very good it doesn't matter. So it's a question of how much you are willing to invest in IT infrastructure and networking. If you use the best from the beginning, then you don't have to worry about the future.


The copper market is really volatile. How do you deal with these price differences?
Jean-Pierre Labry: For us, it depends on the cycle of the project. Sometimes the tricky part is that you may quote one price today and it will be different after six or eight months. We never penalise our partners for that. I think it is part of our responsibility to anticipate where the market is going, and we have quite a good history of sensing how the market tends to move.

Do you foresee a scenario where fibre will completely overtake copper?
Jean-Pierre Labry: I think you will require both. For sure, fibre is becoming more and more used in the market. Though R&M started off with copper, we are spending more on R&D around fibre because there is increasing demand. Data transmission speeds are increasing, and fibre is a safe bet, though it may not be as cheap as copper. All the telecom operators are moving to fibre, as they can offer triple-play services over their optical networks. Fibre capacity is so great that you can split it: you can split one fibre into two, four, eight, 16 or 32, because it's extremely powerful, so it's difficult to compare fibre and copper.
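As an aside of ours rather than Labry's: the power cost of the split ratios he mentions is easy to quantify, since an ideal 1:N splitter divides optical power N ways, a loss of 10·log10(N) dB per branch. A small sketch, ignoring the excess loss real splitters add:

```python
import math

# Ideal 1:N splitter: power divided N ways, i.e. 10*log10(N) dB per branch.
def ideal_split_loss_db(n: int) -> float:
    return 10 * math.log10(n)

for n in (2, 4, 8, 16, 32):
    print(f"1:{n} split -> {ideal_split_loss_db(n):.1f} dB")
# A 1:32 split costs about 15 dB, which is why optical power budgets matter.
```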

There are a lot of obscure brands offering cheap products in this market. Have you been able to compete in this market on quality?
Jean-Pierre Labry: We all have the same issue of defining what quality is today. If you want a car to go from A to B, you may think that the best option is a Chinese car because it's the cheapest. I don't compete with them, because I don't address the same target. I'm not going to spend time trying to convince people who are not interested. I think it's really up to the requirements of the customer.

What are your plans for next year?
Jean-Pierre Labry: On the 1st of January we are going to open in Turkey, and that's one part of it. Then the plan is to try to continue the same growth that we had in 2012. It will be a challenging year, but we are on course for double-digit growth for sure. For us it's highly important to see how we can go further, continue this investment, and also benefit from the investments that we have already made.


INTERVIEW Tarek Helmy

Eyes on the future
Tarek Helmy, Regional Director Gulf & Middle East, Nexans Cabling Solutions, discusses the implications for the cabling infrastructure in the context of cloud computing.

What will be the impact of mobility and cloud computing on network design and cabling in particular?
Tarek Helmy: This trend impacts both offices and data centres. Offices should cater for new, smart ways of working, with increasing use of mobile devices, often referred to as Bring Your Own Device (BYOD). The design of the cabling infrastructure should incorporate connections for an increased number of wireless access points, as well as be ready to support the required bandwidth. New-generation access points are starting to arrive, and many foresee that they will use increased uplink speeds of up to 10 Gbps.

In the data centre, this results in handling astronomical amounts of operational data and communications, increased by new developments such as high-definition multimedia interfaces, mobility and cloud computing. Bandwidth requirements have already moved from one gigabit to 10G, and the new benchmark is 40G, with 100G on the horizon. Organisations want 100% uptime, high security, lower costs and a green data offering. The increase in mobile computing and cloud computing has created a need for higher-speed networks, and organisations need to prepare to migrate to a network infrastructure that supports 40G and 100G speeds.



Cloud computing requires a higher level of virtualisation, where cloud users are connected not to physical servers but to a flexible, low-latency network architecture with virtual servers. This has impacted network design and cabling: many data centres have now begun moving to a two-layer design from the three-layer design that was the norm in the past. Virtualisation can now support many virtual servers in a single physical host, where input and output levels increase significantly over the traditional network architecture, increasing the demand for higher-performing cabling infrastructure to support higher data transfer rates. Virtualisation also requires solid cabling systems to ensure there is no downtime in the network. To support mobility and cloud computing, data centres are now moving to Cat 6A cables for copper and OM4 multimode cables for fibre.

What factors should you consider while re-cabling the data centre to support some of these emerging technologies?
Tarek Helmy: Many experts have said that a robust cabling system is the foundation of a network infrastructure, which in turn serves the larger IT infrastructure. This is an especially buoyant period, with the IEEE recently starting work on Next Generation BASE-T to define data speeds of 40 Gbps over twisted pair. Cabling standardisation bodies from ISO and TIA are working on cabling definitions to support the IEEE. This means there will be more options available in both fibre and copper for various data rates.

Preparing now for a 40G-ready network, versus a 10G ceiling, especially in the switch-to-switch area, is undeniably cost-favourable to a data centre's efficient performance throughout its average 15-20 year lifespan.


Businesses that prepare their data centre now with the expected high port count for 40G/100G also help make it future-ready for emerging technologies and devices as they become available, and avoid data centre interruption during the complex installation of additional cables for 40G/100G. Re-cabling to handle future bandwidth is expensive and time-consuming, and jeopardises an organisation's ability to perform, as well as business continuity. Cables installed today should have adequate headroom to handle bandwidth demands for decades to come. Network managers need to ensure that their cabling infrastructure is future-proofed so that it can support the expected speed increases to 40G over copper or fibre and 100G over fibre in the coming years.

Do you recommend that users move to OM4 cabling to support 40G and 100G implementations?
Tarek Helmy: The cabling infrastructure of a data centre must be designed to support future technologies and data applications. With the continuous expansion and scaling of data centres, organisations should deploy cabling infrastructures that provide reliability, manageability and flexibility. We would recommend that users move to OM3 and OM4 cabling. Multimode fibre remains the best fibre transmission choice for data centres, due to the higher cost of singlemode transceivers. OM4 supports multimode transmission over longer distances than other fibre products, which can be needed in structured cabling installations in data centres. It can be installed for use in today's data centres and applications, while providing an easy migration path to future-proof, high-speed technologies such as 40G and 100G Ethernet.

Is there demand for intelligent cabling systems in the Middle East market?
Tarek Helmy: The Middle East has begun taking small steps towards adopting intelligent cabling systems, and right now adoption is increasing, particularly in the data centre segment. To ensure 100% uptime and operational continuity, Nexans offers LANsense Automated Infrastructure Management (AIM) to keep costs down and lower capital expenditure by automatically mapping, locating, reporting and alerting on any network event. This solution also monitors moves and unauthorised connects, records the power consumption of servers and equipment, and detects real-time changes in temperature, humidity and operational conditions. Data centres today account for about 10-20% of the cabling market, but we expect this figure to rise to 30%.

How can you save energy in the data centre with structured cabling systems?
Tarek Helmy: Although it sounds counterintuitive, enterprises can save energy by installing green cabling solutions, which also protect the environment from harmful chemicals in the long run. As data centres grow in both capacity and physical size, they produce excessive heat through increased power consumption and inadequate cooling. This is becoming a critical aspect of infrastructure management that must be addressed, especially in view of the growing awareness of green issues, the impact of an organisation's corporate carbon footprint and the escalating costs of energy.

Reliable, intelligent management information and control are therefore essential requirements, enabling data centre owners and operators to manage their infrastructure and power consumption more efficiently and cost-effectively. Nexans' environmental monitoring and access control (EMAC) device, when combined with our LANmark and LANsense solutions, provides intelligent information on power usage and cooling within the network or data centre environment. EMAC effectively supplements the existing physical layer management (LANmark) and infrastructure management (LANsense) solutions by providing an intelligent focus on power and cooling within the network and its physical environment.

These intelligent capabilities can be applied to new or existing infrastructure and can be configured either for power monitoring only, or for power monitoring linked to individual outlet control. Users can define their own upper and lower heat thresholds to provide different warning alarms or alerts, such as when a defined critical status requires emergency intervention, and to trigger appropriate escalation paths.

What kind of technical innovations will be in demand next year?
Tarek Helmy: We expect increased interest in our 40G cabling solutions, LANsense Automated Infrastructure Management (AIM) solutions and our environmental monitoring and access control (EMAC) devices next year.


ADVERTORIAL Fluke Networks

Meeting today's challenges in testing enterprise fibre
David Veneski, director of marketing for the DCI business unit at Fluke Networks, lists the best practices in testing fibre optic cabling.

There are several factors driving the almost incessant demand for greater bandwidth in commercial organisations. Among the most notable are virtualisation, the rapid advance of cloud computing, server-to-server traffic, storage access via Ethernet, and the ever-greater mountains of data that need to be accessed. With fibre becoming the centre of all data centre networks, however, the requirement for certifying and testing it becomes just as vital.

When fibre represented a minority of an enterprise's network links, either in the Storage Area Network (SAN) or the wide-area link to the Internet, the job of fibre test and management was typically outsourced to specialists. Today, however, when organisations rely heavily on fibre to function, the health of an enterprise's fibre plant rises to the highest level of importance.



Maintaining tomorrow's fibre network with yesterday's troubleshooting tools is a recipe for frustration, if not disaster. To ensure that fibre in data centres is reliable, a network professional needs a more accurate and faster method of assessing the integrity of the infrastructure. Such a shift renders most existing test equipment obsolete and instead demands a new class of Optical Time Domain Reflectometer (OTDR), capable of characterising and certifying enterprise fibre quickly, accurately and without the "tribal knowledge" of a rarefied fibre expert.

But what are the parameters an installer or an enterprise site should consider when selecting an OTDR? Choosing the right device not only addresses the new generation of testing requirements brought by new technologies, but also helps professionals work efficiently while increasing the reliability and value of the enterprise fibre network. To understand what you need to know, let's look at the changes data centres are undergoing and the implications these changes have for fibre testing requirements. Once those challenges are understood, criteria can be outlined for selecting an OTDR that satisfies evolving requirements.

What is Driving Change in Fibre Technology?

Modular cabling systems: With its plug-and-play capability, modular or pre-terminated fibre cabling is gaining acceptance because it is simpler and less costly to install than field-terminated cable. The challenge is that pre-terminated fibre is only guaranteed "good" as it exists in the manufacturer's factory. It must then be transported, stored, and later bent and pulled during installation in the data centre. All kinds of performance uncertainties are introduced before fibre cables are deployed.


Proper testing of pre-terminated cables after installation is the only way to guarantee performance in a live application.

High-density and high-speed equipment in the data centre: As data centres grow larger, most enterprise IT departments look for ways to minimise power consumption and reduce expensive floor space. One strategy for reining in operational expenses is data centre consolidation using faster and higher-density networking and storage equipment. These new-generation devices are usually equipped with 10 Gbps or faster fibre links to transport traffic. This shift is driving a significant uptick in the use of fibre in data centres.

Virtualisation presents challenges along with advantages: The adoption of server and network virtualisation dramatically affects data centre networks. The implication is two-fold. First, virtualisation consolidates multiple server resources onto fewer physical platforms, creating much greater data traffic to and from virtualised platforms. Second, this traffic may pass to direct-attached storage or through a switch to network-attached storage, other servers, or the greater enterprise network.

Data centres adapted to the requirements of virtualisation by using End-of-Row (EoR) and Top-of-Rack (ToR) network topologies. Both EoR and ToR topologies support the bandwidth demands of virtualisation and drive new cabling requirements. Intra-rack fibres in ToR configurations are typically less than 6 metres. To reduce clutter and improve equipment access, patch panels with short patch cords are usually employed to connect server, storage and networking assets. This creates new problems: a high concentration of fibres connecting the equipment to the patch panels can confuse installers regarding fibre polarity, and quality and workmanship defects in short patch cords are invisible to most fibre test equipment.

As virtualisation marches forward, data centre networks will fundamentally change. To deliver bandwidth to virtualised assets, 10 Gbps, 40 Gbps or 100 Gbps links will be employed throughout the data centre. Any uncertainty in the fibre links will jeopardise the stability and reliability of the network connected to those virtual servers. It is critical that these fibres are certified with channelised information and properly documented.

So, what are the key parameters an installer at an enterprise site should consider when selecting a new OTDR? With the technological evolution occurring in data centres, test requirements have changed dramatically for the fibre networks that connect mission-critical servers, networking and storage devices. Selecting the proper OTDR to test your network not only strengthens its reliability, but also improves how quickly and efficiently the job is done, while documenting the quality of the work.

Here are some recommended criteria to consider, aside from basic OTDR testing capabilities:

1. A simplified and task-focused user interface: Populating a data centre with thousands of tested fibres is an enormously time-consuming job. Maintaining fibre health is just as challenging and makes fast troubleshooting critical. Almost every OTDR on the market today is designed to cover carrier applications. As a result, many have very complicated user interfaces, which require the user to grapple with numerous buttons and controls and navigate cumbersome multi-level menus. While this is suitable for the fibre enthusiasts who test telco fibre on a daily basis, it's a different story for enterprise network technicians. An OTDR designed around the enterprise workflow, with an intuitive user interface, greatly improves operating efficiency. Simple-to-use test equipment shortens the learning curve, reduces testing time and ultimately saves money.

2. Precision fibre channel information: With the increasing use of short patch fibres and multi-fibre connectors, details on every link (loss, connector and reflectance) are critical to ensuring performance. OTDRs with an attenuation dead zone of more than 3m are no longer applicable for testing data centre fibre. Ultra-short dead zones are needed to find issues that jeopardise the link loss budget or cause serious signal degradation.
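To see why connector detail matters on short links, consider a basic loss budget. The sketch below is our illustration rather than Fluke's method, using commonly cited TIA-568-style allowances (roughly 3.5 dB/km for multimode fibre at 850 nm, 0.75 dB per mated connector pair, 0.3 dB per splice); the function and defaults are assumptions for the example.

```python
# Our illustration (not Fluke's method): a basic optical link loss budget
# built from commonly cited TIA-568-style allowances.
def loss_budget_db(length_m: float, connector_pairs: int, splices: int = 0,
                   fibre_db_per_km: float = 3.5,   # multimode at 850 nm
                   connector_db: float = 0.75,     # per mated pair
                   splice_db: float = 0.3) -> float:
    return ((length_m / 1000.0) * fibre_db_per_km
            + connector_pairs * connector_db
            + splices * splice_db)

# A 6 m top-of-rack run with two mated pairs: the fibre itself contributes
# only ~0.02 dB while the connectors contribute 1.5 dB, so connector events
# (and hence short dead zones) dominate what the OTDR must resolve.
print(f"{loss_budget_db(6, connector_pairs=2):.2f} dB")  # ~1.52 dB
```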

A new OTDR solution from Fluke Networks

The OptiFiber Pro OTDR from Fluke Networks is a fibre tester engineered specifically to address the enterprise, data centre and campus needs of fibre professionals. Leveraging advanced optical technology, advanced user interface design principles, and feedback from both seasoned and novice OTDR users, the OptiFiber Pro OTDR provides a unique solution amongst existing OTDRs. It increases troubleshooting efficiency, reduces operational costs (particularly in data centre environments), and provides an unprecedented level of insight into an enterprise's fibre infrastructure.



OPINION CLOUD

Data centres and cloud computing
Implications for the cabling infrastructure

The advent of cloud computing and the trend towards higher-speed Ethernet communications, including mobile apps, is making it more imperative than ever for data centre infrastructure managers to carefully consider their network architecture. In today's competitive business environment, there is a need to implement the most cost-effective, future-proof connectivity infrastructure quickly and efficiently.

Server virtualisation is a major current trend in increasing the efficiency of data centres; however, it can stress the supporting connectivity infrastructure. The ever-increasing flow of traffic, from the cloud in particular, is putting pressure on conventional network architectures, particularly in terms of ensuring business continuity. These developments place a new series of demands on the network at the level of cabling infrastructure, and are changing data centre design from the ground up. At the forefront of these developments is the implementation of innovative dynamic intelligence, which is addressing many of the challenges data centres face and is leading to reduced capital and operating expenditure.

Changes to traffic flow

A conventional enterprise network normally consists of a three-tiered set-up from the server upwards: a dumb access layer, an aggregation layer with policy-based connectivity, and a high-speed routed core layer. In this model, 80% of traffic flows north-south, with only 20% flowing east-west across the network.



However, application trends (cloud computing and mobile apps) are causing a shift in traffic flow due to greater interaction between servers: 80% of traffic now flows east-west across the network, creating a bottleneck in the crucial core layer.

This bottleneck presents a number of challenges to the network, particularly in terms of performance, as congestion in the core layer can lead to ineffective routing and poor latency. The demands placed on the core layer alone can mean it needs frequent upgrades and software updates, which are costly, time-consuming and affect throughput. Security can also be compromised, as the 'dumb' access layer is left vulnerable to intrusion. Quality of service can become a problem too, as the network struggles to prioritise traffic and bandwidth needs to be allocated on a per-application basis. These are the key challenges facing data centre managers today. New innovations in data centre design and infrastructure are helping data centre managers overcome them through the deployment of dynamic intelligent network infrastructures.

Dynamic intelligent infrastructure

In order to resolve the challenges presented by increasing traffic flow at an infrastructure level, the network needs to be converged: taking out one layer of switching and moving much of the intelligence from the aggregation layer down to the access layer. Placing intelligence in the access layer frees the core and aggregation layers from making all the necessary decisions. Traffic finds the shortest, easiest route from A to B, and the network infrastructure becomes self-maintaining, resolving the challenges that stem from a conventional infrastructure. In this way, decision making, dynamic configuration and routing now reside at the network edge, improving traffic flow and reducing the need for frequent upgrades in the core layer. Security is also moved to the more robust network edge.


A key benefit of this dynamic intelligent infrastructure is that it enables dynamic bandwidth allocation. The network can now allocate bandwidth per application, driving up bandwidth for heavy use as well as enabling higher I/O throughput on virtualised servers.

Key considerations

In order to realise the benefits of a dynamic intelligent infrastructure, there are a number of key considerations for data centre managers in terms of network cabling infrastructure. An increasing number of virtual machines per server will accelerate data rates at the network edge beyond 10G to 40G faster than previously predicted.

In terms of cabling infrastructure for the new data centre design, a minimum standard of Cat 6A copper is required at the network edge, as well as OM3 or OM4 MPO fibre uplinks for 40G/100G. The 'intelligent' core-to-edge network offers easier management and the option to eliminate the cross-connect in the data centre, between the server and the network switch. Growth in the number of servers per rack will mean that the cooling capacity of forced air in the data centre is exceeded. In this instance, new cold-air containment systems will eliminate the need for a raised floor. Cabling will need to be installed in overhead racks in a neat, contained design. The infrastructure from the cabling level upward is therefore undergoing a paradigm shift in order to facilitate new cooling systems and server growth, as the trend toward cloud computing and mobile applications is ever increasing.

A new era of data centre design

The dynamic intelligent network infrastructure, the 'intelligent' core-to-edge network, is a self-learning, self-configuring, self-healing system, offering the ability to easily scale up in size and data throughput. This is data centre management for a new era of data usage. The cabling system becomes a virtual system bus, which needs to be matched to future speed and throughput expectations, yet still offer backwards compatibility with legacy equipment. To remain competitive in today's challenging environment, and with ever-growing interest in the cloud and the everyday use of mobile applications, it is more imperative than ever for data centre infrastructure managers to implement cost-effective, bandwidth-rich infrastructures quickly and efficiently.

About the author Harry Forbes is the Chief Technology Officer at Nexans Cabling Solutions and was educated in electrical and electronic engineering. He has worked in the cabling and networking industries for the past 30 years in various technical roles, and has extensive knowledge of and expertise in enterprise systems and data centre infrastructure requirements.



OPINION CLOUD

Seeing through the cloud cover
Werner Heeren, Regional Sales Director, Fluke Networks, makes the case for application performance management tools for the cloud.

Network applications are the lifeblood of a business. Any interruption to the performance of applications as they travel over the corporate network can do immense damage to productivity and, ultimately, the bottom line.

In the traditional IT environment, where systems are kept on-premises and the IT manager has a full view of the applications running over the network, there is inherent control over how those applications perform. If the right application performance management tools are in place, the IT manager can quickly discover a pinch point on the network, understand the fault and have it repaired rapidly. It is a system that works well and keeps network disruption to a minimum.

This situation is changing, however. Cloud is now more than just hype: real implementations of the technology are taking place every day, with SaaS and IaaS implementations leading the way.



This is something to be welcomed: businesses now understand what the cloud really is, what actual benefits it can deliver, and how they can integrate cloud strategies into their wider IT infrastructures.

As this happens, however, certain questions are beginning to arise. Most importantly, IT managers are coming to understand that a move to the cloud means a loss of control. If a business subscribes to Infrastructure-as-a-Service, what happens when the applications running over that network run into trouble? There is a clear loss of visibility here, which could lead to faults taking longer to repair, meaning that a business's operational effectiveness is impaired for longer periods of time. This is a worrying trend, but one that can be overcome. What is clear is that IT managers and cloud vendors are going to have to find a new way of working together.

Firstly, it is important that businesses establish clear SLAs with their cloud provider from the outset. They need to understand exactly what they can expect in terms of service quality and, importantly, what happens if applications start to underperform. It is also necessary to establish who has overall control of fault resolution, and how quickly faults will be resolved if they occur. IT managers should treat IaaS as they would any IT service, negotiating with their vendor to get the right levels of performance and functionality for their business and applying best practice learned from procuring on-premises equipment.

The move to cloud services will, however, also mean that network management tools need to evolve. Two trends in particular are driving this requirement: server virtualisation and desktop virtualisation.

Server virtualisation is a major building block for cloud services, driving more traffic through the network and thereby creating ever-greater capacity demands. The problem for IT managers is that server virtualisation largely hides the complexity of its operations from view. This means that if there is an issue with the data being sent to and from the servers, it is difficult to analyse exactly what is going wrong. To overcome this, it will be necessary to develop tools that provide a clear view of the infrastructure underlying server virtualisation, allowing managers to diagnose and repair faults more rapidly.

Desktop virtualisation presents its own set of challenges. Today's workers can carry out their duties from pretty much anywhere with an Internet connection, which makes user behaviour less predictable. Where once network managers could accurately predict the peaks and troughs of network activity and take appropriate action, today's working patterns are making this increasingly difficult. Real-time visibility of the network becomes much more important in this environment, and is another consideration that businesses must bring to the table when negotiating their cloud contracts.

It is clear that we are on the verge of another chapter in the story of network management. There is now a strong case for application performance management tools for the cloud: tools that can flexibly manage and monitor how services are being delivered to customer organisations. The IT manager's role, meanwhile, will change to one of chief negotiator and enforcer, responsible for ensuring their cloud vendor delivers a high-quality service with immediate fault resolution.


NS NS NS r rIO IOr IO i i i ou oUuT UTou UT ba ba ba at OatL OELat EOr L rDEu Dur Du us RuSs RNSCusNRCbSe EtebNle,Ctel,be tel, E Rom o m o sit sEit REEsitREm Vi NViT NNFTViNFoNvTe ohNvFHe h Hove h H CE CCOE COCNE CaNOc ac N ac TA TA -T2A0 -B20e B-e20 Be DA DA 1D9Ar1a9h rah19 rah ei ei ei m m m Ju Ju Ju @ @ @

Almost £90 million in server room Almost £90 million in server room research. Now yoursinisserver FREE!room Almost £90 million research. Now yours is FREE! research. Now yours is FREE!

[Cover images of the featured APC white papers: 'The Advantages of Row and Rack-Oriented Cooling Architectures for Data Centres' (White Paper 130, by Kevin Dunlap and Neil Rasmussen); 'Deploying High-Density Zones in a Low-Density Data Centre' (White Paper 134, by Neil Rasmussen and Victor Avelar); 'An Improved Architecture for High-Efficiency, High-Density Data Centres' (White Paper 126); and 'Power and Cooling Capacity Management for Data Centres' (White Paper 150).]

'Implementing Energy Efficient Data Centres' (White Paper #114) £44.00 FREE!

'An Improved Architecture for High-Efficiency, High-Density Data Centres' (White Paper #126) £81.00 FREE!

'The Advantages of Row and Rack-Oriented Cooling Architectures for Data Centres' (White Paper #130) £57.00 FREE!


'Deploying High-Density Zones in a Low-Density Data Centre' (White Paper #134) £78.00 FREE!

'Power and Cooling Capacity Management for Data Centres' (White Paper #150) £152.00 FREE!


Download FREE APC White Papers to avoid the most common mistakes in planning IT power and cooling

Have a plan for your data centre
We talked to thousands of customers from Baltimore to Beijing and saw the good, the bad, and the ugly measures customers took in their data centre planning. In many cases, turnover and budget cuts resulted in no plan at all.

Get the answers you need and avoid headaches tomorrow
Do you and your staff know the top ten planning mistakes to avoid? The easiest way to improve cooling without spending a dime? Find these answers, and more, in our latest selection of white papers. Take advantage of our valuable research today and save yourself money and headaches tomorrow.

Bring your business card + this ad to our event and enter the lucky draw to WIN an iPad3!

Visit www.apc.com/promo Key Code 26827p
Call 0845 0805034 • Fax 0118 903 7840

©2012 Schneider Electric. All Rights Reserved. Schneider Electric and APC are trademarks owned by Schneider Electric Industries SAS or its affiliated companies. All other trademarks are the property of their respective owners. www.apc.com • 998-1764_ME-GB_A



Does your fibre system tick all the boxes?

LANmark-OF: Competitive Fibre Optic Solutions for 40G and 100G

• Micro-Bundle cables save up to 50% trunk space
• Slimflex cords offer 7.5mm bend radius, saving 30% space in patching areas
• Pre-terminated assemblies reduce installation time
• MPO connectivity enables cost-efficient migration to 40/100G

www.nexans.com/LANsystems

LANmark-OF brings the best fibre technologies together to ensure maximum reliability and lowest operational cost.


Accelerate business at the speed of light

info.ncs@nexans.com

Global expert in cables and cabling systems

