Loitering at 2-4kW rack densities is no longer an option for many of today's CIOs and CTOs. Even a simple blade chassis can draw 8kW, and three of them in one rack generate 24kW of heat. But ramping up to survive a 20kW+ server rack design means upping your game when it comes to thermal management and cooling practices.
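The arithmetic above can be sketched in a few lines. This is only an illustration using the figures from the text (an 8kW chassis, three per rack, against the top of a legacy 2-4kW design point); essentially all IT power ends up as heat to be removed.

```python
CHASSIS_KW = 8.0          # power draw of one blade chassis (from the text)
CHASSIS_PER_RACK = 3      # chassis installed in one rack
LEGACY_DESIGN_KW = 4.0    # upper end of a legacy 2-4kW cooling design point

# Essentially all IT power becomes heat the cooling system must remove
rack_load_kw = CHASSIS_KW * CHASSIS_PER_RACK
shortfall_kw = rack_load_kw - LEGACY_DESIGN_KW

print(f"Rack heat load: {rack_load_kw:.0f} kW")                      # 24 kW
print(f"Shortfall vs legacy cooling design: {shortfall_kw:.0f} kW")  # 20 kW
```

A 20kW gap between what the rack produces and what the room was designed to remove is what turns into hot spots.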
If your current cooling infrastructure isn't designed to cope with higher rack densities, the new servers can cook, and hot spots can lead to hardware failures or, even worse, fires. Migrating workloads away from poor cooling designs just to prevent premature hardware failures is plain embarrassing.
To my mind you have three obvious options to solve this problem:

1. Automatically migrate workloads away from the problem.
2. Make your existing cooling system stretch further.
3. Move to a data centre designed for high densities.

Communicating the business case for options 1 and 2 will always be an uphill struggle. Option 3 will always be listened to: no organisation will ignore cost savings, no matter what the perceived risk of moving.
Automatic migration of workload
There are numerous technologies available now to migrate workloads on the fly; both stateful and stateless migration of data and machines are mature technologies, available to all organisations at a fraction of the cost they once were.
The use of such technology is laudable for the purposes of business continuity, but it can be a worry when an organisation’s IT systems are interconnected in complex and often poorly understood ways. No CIO or CTO wants to be in the position of continually moving such workloads simply because the physical cooling system is not up to scratch.
Making your cooling system stretch further
The options here range from the simple (don't put more kit in a rack than it can cool; use more racks), through leaving the rack next door empty and "borrowing" its capacity, to installing supplementary cooling systems. Using more racks may work, but floor space is not without cost. Borrowing capacity from elsewhere is risky because of hot spot formation, and it needs careful planning and analysis to avoid triggering fire suppression discharges or, worse, actual fires.
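The risk with "borrowing" capacity is that the sums can look fine at row level while an individual rack is still a hot spot. A minimal sketch, with entirely hypothetical rack names and kW figures:

```python
# Hypothetical row: one dense rack, one left empty to "lend" its cooling.
racks = {
    "A1": {"load_kw": 24.0, "cooling_kw": 12.0},  # dense rack
    "A2": {"load_kw": 0.0,  "cooling_kw": 12.0},  # empty neighbour
}

row_load = sum(r["load_kw"] for r in racks.values())
row_cooling = sum(r["cooling_kw"] for r in racks.values())
print(f"Row level: {row_load:.0f} kW load vs {row_cooling:.0f} kW cooling")

# Aggregate capacity balances, but rack A1 still exceeds its local cooling,
# which is exactly how a hot spot forms.
hot_spots = [name for name, r in racks.items() if r["load_kw"] > r["cooling_kw"]]
print("Potential hot spots:", hot_spots)
```

The row-level total balances, yet A1 is flagged: whether borrowed capacity actually reaches the dense rack depends on airflow, which is why the planning and analysis matter.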
There are whole industries of supplementary cooling systems waiting for you to spend capital on. Simple ideas like aisle containment look at first glance to be low-cost, effective solutions, until you realise that containment probably means altering the fire detection and suppression systems too.
In-rack fans can increase the flow of air through the IT equipment, effectively borrowing cooling from elsewhere in the room. But unless they are automatically tuned to the needs of the IT equipment, they will be set up to push more air through it than it needs. Excess airflow wastes the energy needed to run the fans, and it also lowers the return air temperature to the cooling system, making it less efficient. So it is not only extra capital you will be spending.
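The airflow/return-temperature trade-off follows from the sensible-heat equation Q = ρ·V̇·cp·ΔT. A sketch, assuming typical sea-level air properties (density ~1.2 kg/m³, specific heat ~1005 J/(kg·K)) and an illustrative 20kW rack load, none of which are figures from the text:

```python
AIR_DENSITY = 1.2   # kg/m^3, typical sea-level air
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def exhaust_delta_t(heat_load_kw: float, airflow_m3_s: float) -> float:
    """Temperature rise of the air across the IT equipment.

    From Q = rho * V_dot * cp * dT, so dT = Q / (rho * V_dot * cp).
    """
    return (heat_load_kw * 1000.0) / (AIR_DENSITY * airflow_m3_s * AIR_CP)

load_kw = 20.0  # hypothetical rack heat load
for flow in (1.0, 2.0, 4.0):  # volumetric airflow through the rack, m^3/s
    dt = exhaust_delta_t(load_kw, flow)
    print(f"{flow:.0f} m^3/s -> air leaves {dt:.1f} K warmer")
```

Doubling the airflow halves the temperature rise: the fans consume more energy, and the return air comes back to the cooling units cooler, which is what erodes their efficiency.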
In-row or in-rack supplementary cooling systems are point solutions designed to add more cooling to the data centre; they are neither cheap nor quick to install, and they are not inexpensive to run.
Liquid cooling is interesting: the thermodynamic efficiency of cooling with non-electrically-conductive liquids is far better than air's. For the moment it is a niche market, but I look forward to accommodating a customer using it in our data centres.
Make the move to Ark
Alternatively, you could sit back and simply avoid the expense and risk involved with retrofitting cooling technologies that deliver uncertain outcomes and generate additional operational headaches.
Instead, move into one of our data centres, designed, built and operated to be highly energy efficient at any density between 1.25kW and 30kW in any rack position. Ark data centres are smart buildings. Modern IT kit reduces its power requirement when it is not in use, so a 30kW rack can drop to 1.25kW; the smart bit is that the amount of cooling applied changes dynamically to match demand on a moment-by-moment basis. If your IT tunes down to reduce power consumption, so do our data centres.
If you want to know more about how Ark can save you money, or about the cooling processes in our campuses, call 0845 389 3355.