
Mission Critical: navigating your options

Robert Thorogood, Director, Mission Critical, reviews the technical considerations on data centre provision.

We live in uncertain times. Companies should be flexible in the structure of data centres and how critical facilities are used in the future. Strategies such as scaling, optimising module size and phasing should be employed to minimise initial capex without compromising future build costs.
Building shell
The type and density of technology to be deployed will determine the type of building to be designed or selected, which will range across the following:

  • Containerisation
  • Modularisation
  • Open Plan
  • Box in Box
  • Prefab Module
  • Single Storey or Multi-storey

Modular data centres

Modular data centres vary from containerised systems, through walled systems with a set number of racks, to cube systems built from a matrix of rack components. Modular data centres use the same techniques as containers, but on a larger scale and with the ability to add further capacity. However, higher levels of resilience and redundancy are difficult to achieve in a modular data centre. They have a place in the market, but have to be considered in the wider context of overall cost against the impact on a business when a module fails.

Containerised vs bespoke
The likes of Google, HP and IBM have all demonstrated what can be done with containerisation.
While this may seem a good way to reduce costs, there are some big risks. The load densities in these containers can be very high; some users can cope with this, but most cannot. And why buy a 40ft container – to demonstrate you can put a lot of servers in a small space – when standard partitions can be used with similar results and at even lower cost?

The infrastructure offer
Modular data centres and containerisation are only two of the alternatives which need to be considered in the infrastructure of the data centre.
Open plan systems, box in box and prefab modular systems (single or multi-storey) are also options that should be factored in. With rack solutions, air, water, CO2 and density considerations should all be analysed.

Rack solution
The type of rack solution to be deployed will often impact the space and load allocation.
Options include air, water and CO2 cooled systems, as well as direct and indirect chip-level cooling. The selection will depend on whether there is a requirement for low, medium or high density solutions.
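As a rough illustration of how per-rack load might steer the choice between these options, the sketch below maps a rack load to a density band and a plausible cooling approach. The kW thresholds and pairings are illustrative assumptions, not figures from the article; actual bands vary by operator.

```python
# Hedged sketch: map a per-rack load to a density band and a plausible
# cooling approach. Thresholds are illustrative assumptions only.
def density_band(kw_per_rack):
    if kw_per_rack <= 5:
        return "low density - room-level air cooling"
    elif kw_per_rack <= 15:
        return "medium density - contained air or rear-door water cooling"
    else:
        return "high density - water, CO2 or direct chip-level cooling"

for load in (3, 10, 30):
    print(f"{load} kW/rack -> {density_band(load)}")
```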

Multi-tier
Key to any critical facility is its ability to deal with equipment and system faults and planned maintenance. It is important to look at business risk and design the engineering infrastructure to match the risk. Often critical facilities can have different levels of risk and so systems with differing levels of resilience can be designed to suit.

Tier I, II, III or IV – levels of resilience
The most common Tiering system used is from The Uptime Institute (TUI) but there are others. Their
Tier levels indicate how designs perform in terms of average ‘uptime’ per year and their ability to deal
with planned maintenance and system faults. What the TUI Tiers don’t tell you is which should be used
to suit a particular business. Tweaking Tier levels is commonplace when trying to match the business risk to the cost of the resilience level.
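To make the 'average uptime per year' point concrete, the sketch below converts the availability percentages commonly quoted alongside each TUI Tier into annual downtime. The percentages are the widely cited figures, not taken from this article, and are no substitute for the TUI standard itself.

```python
# Annual downtime implied by commonly quoted availability figures for
# each Uptime Institute Tier. Percentages are widely cited values,
# included here as assumptions for illustration.
MINUTES_PER_YEAR = 365.25 * 24 * 60

tier_availability = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, pct in tier_availability.items():
    downtime_min = MINUTES_PER_YEAR * (1 - pct / 100)
    print(f"{tier}: {pct}% uptime, roughly {downtime_min / 60:.1f} hours downtime/year")
```

The gap is striking: the same percentage-point-looking figures span from over a day of downtime a year down to under half an hour.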

UPS systems: DRUPS vs static vs hybrid
There is no one answer; all have their merits and weaknesses. If you are on a minimum capex project
then you are unlikely to move away from static systems, but if you are looking at the longer term then the life-cycle costs become much more important.
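A minimal life-cycle cost sketch shows why the longer view can change the answer. Every figure below is a hypothetical placeholder; a real comparison needs site-specific capex, efficiency, maintenance and battery or flywheel replacement costs.

```python
# Minimal lifecycle-cost sketch for comparing UPS options.
# All figures are hypothetical placeholders for illustration.
def lifecycle_cost(capex, annual_opex, replacement_cost,
                   replacement_interval_yrs, years):
    """Total cost of ownership over a planning horizon."""
    replacements = (years - 1) // replacement_interval_yrs
    return capex + annual_opex * years + replacement_cost * replacements

# Hypothetical 1MW system over 15 years.
static = lifecycle_cost(capex=500_000, annual_opex=60_000,
                        replacement_cost=150_000,
                        replacement_interval_yrs=5, years=15)  # periodic battery replacement
drups = lifecycle_cost(capex=900_000, annual_opex=45_000,
                       replacement_cost=0,
                       replacement_interval_yrs=15, years=15)  # no battery strings
print(f"Static UPS 15-year TCO: {static:,}")
print(f"DRUPS 15-year TCO:      {drups:,}")
```

With these invented numbers the cheaper-capex static system ends up dearer over 15 years; with different assumptions the ranking flips, which is exactly the point.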

Water cooled or air cooled?
Physics helps us a lot here. Per kilogram, water has over four times the specific heat capacity of air, and it is a lot easier to contain, so we should use water wherever possible. The tricky bit is where heat is transferred from water to air or air to water, as this is a relatively inefficient process. Externally, if you are going to use air, then you need a climate that will help you. In parts of the UK, particularly Scotland and the North, there is a great amount of free cooling available, and we should use it wherever possible.
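The physics claim can be checked with textbook property values (the figures below are standard room-temperature properties, not from this article). Per kilogram the ratio is a little over four; per cubic metre it is in the thousands, because water is roughly 800 times denser than air.

```python
# Why water wins for heat transport: compare heat carried per degree
# of temperature rise, per kg and per m3. Property values are standard
# room-temperature figures.
CP_WATER = 4186.0   # J/(kg*K), specific heat of water
CP_AIR = 1005.0     # J/(kg*K), specific heat of air
RHO_WATER = 998.0   # kg/m3
RHO_AIR = 1.2       # kg/m3

mass_ratio = CP_WATER / CP_AIR
volumetric_ratio = (CP_WATER * RHO_WATER) / (CP_AIR * RHO_AIR)
print(f"Per kg, water carries {mass_ratio:.1f}x the heat of air per degree")
print(f"Per m3, the ratio is roughly {volumetric_ratio:,.0f}x")
```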

Metering or monitoring power?
Both! We have to meter our distribution systems to comply with building regulations which quite rightly
ask us to review and understand our use of energy. However, we also need to know when parts of the
system are being attacked by surges, spikes or harmonics, so good quality monitoring is essential.

Fibre or copper?
Ideally everything should be on fibre. It’s the future, it’s technically better, it’s immune to interference
and it’s fast. But can we afford it? It will take time. The costs of full fibre implementation are coming down, but it could be another 10 years before we see the data centre which is based only on fibre.

LV or HV distribution?
This would seem to be a simple electrical design issue, where the limits of fault level define when you change from LV to HV. But it's not as simple as that. HV generation is marginally more expensive, and needs authorised people to operate and maintain it. This has a long-term cost, but sometimes there is no choice once you have reached a certain size and capacity. There are benefits though: an authorised person has to follow strict procedures, which reduces risk.
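A quick calculation shows why size eventually forces the move: for the same apparent power, line current scales inversely with voltage, and with it cable sizes, switchgear ratings and prospective fault levels. The 5MVA figure below is an illustrative assumption.

```python
# Sketch: three-phase line current at LV (400V) vs HV (11kV) for the
# same supply capacity. Figures are illustrative.
import math

def line_current_amps(apparent_power_va, line_voltage_v):
    """Three-phase line current: I = S / (sqrt(3) * V_line)."""
    return apparent_power_va / (math.sqrt(3) * line_voltage_v)

s_va = 5_000_000  # a hypothetical 5MVA data centre supply
for v in (400, 11_000):
    print(f"{v:>6} V: {line_current_amps(s_va, v):,.0f} A per phase")
```

Thousands of amps per phase at 400V quickly becomes impractical to distribute and to break under fault, which is where the HV decision stops being optional.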

Diesel or gas generation?
For most this has not been an option, but more and more engine suppliers are able to provide both. Using gas on its own is tricky, as gas engines and turbines are not very clever with dynamic load changes. But a combination of gas and diesel engines allows a reduction in on-site fuel storage and gives more resilience. It should be considered for all projects where the load is more than around 3 to 5MW.
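The storage argument can be sketched with simple arithmetic. The consumption rate and runtime target below are illustrative assumptions, not supplier data, but they show how the stored-diesel requirement scales with the share of generation that actually burns diesel.

```python
# Sketch: on-site diesel storage needed when only part of the
# generation fleet runs on diesel. Figures are illustrative.
def diesel_storage_litres(load_mw, runtime_hours, diesel_fraction,
                          litres_per_mwh=270):  # assumed consumption rate
    """Diesel to store for a runtime target at a given diesel share."""
    return load_mw * runtime_hours * diesel_fraction * litres_per_mwh

full_diesel = diesel_storage_litres(load_mw=5, runtime_hours=48, diesel_fraction=1.0)
mixed = diesel_storage_litres(load_mw=5, runtime_hours=48, diesel_fraction=0.5)
print(f"All-diesel 48h store:   {full_diesel:,.0f} L")
print(f"50/50 gas-diesel store: {mixed:,.0f} L")
```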

CHP and absorption chillers
Options that should be considered in order to reduce your energy and carbon footprint (and arguably your PUE) are trigeneration systems, also known as CCHP, or CHP coupled with absorption chillers.
The availability of natural gas, or of other fuels such as biofuels, can be a challenge on some sites. It is also key to identify the profile of the cooling load, particularly the proportion attributable to IT load, as this is needed to model the viability and savings potential.

Technical load
The norm would be 1,500W/m2 (139W/ft2), but other options – 1,000W/m2 (93W/ft2) or 2,000W/m2 (186W/ft2) loads – are also commonplace. The IT load varies depending on the hardware, software and racks used, as well as the applications that are run. In order to provide a flexible, future-proofed environment, loads are typically selected in the range 1,000W/m2 (93W/ft2) up to 4,000W/m2 (372W/ft2). The type of rack, infrastructure and presentation has to be reviewed in all cases.
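The densities quoted above translate directly into technical floor area for a given IT load. The sketch below works through that arithmetic for a hypothetical 1MW IT load; the only added ingredient is the standard m2-to-ft2 conversion factor.

```python
# Floor area implied by a target IT load at the densities quoted
# above. The 1MW load is a hypothetical example.
FT2_PER_M2 = 10.7639  # conversion factor

def floor_area_m2(it_load_w, density_w_per_m2):
    return it_load_w / density_w_per_m2

it_load = 1_000_000  # 1MW of IT load
for density in (1000, 1500, 2000, 4000):
    area = floor_area_m2(it_load, density)
    print(f"{density} W/m2 ({density / FT2_PER_M2:.0f} W/ft2): "
          f"{area:,.0f} m2 of technical space")
```

The same load needs four times the floor at 1,000W/m2 as at 4,000W/m2, which is why the density decision drives the building shell decision.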
