The best solutions are well researched - The Wellcome Trust Centre for Human Genetics, University of Oxford

The Wellcome Trust Centre for Human Genetics, University of Oxford was established in 1994 to undertake research into the genetic basis of common diseases such as asthma, diabetes, malaria and cardiovascular disease. Since June 1999 the Centre has been located in a new facility, the Henry Wellcome Building of Genomic Medicine, which is home to leading researchers and multi-disciplinary research teams in human genetics, functional genomics, bioinformatics, statistical genetics and structural biology. The Centre is a not-for-profit organisation which places all its findings in the public domain.

For the last 6 years, Dr Tim Bardsley, who holds a doctorate in computer science, has run the IT network which underpins the organisation, including all hardware, software and infrastructure. In addition to over 450 network-connected users, the Centre boasts a 150-node Linux cluster plus a number of 8-way servers to meet the data-processing demands of the bioinformatics and statistical genetics work involved in genomic research.

IT is a vital component in increasing the speed and accuracy of genomic research, where highly trained biochemists produce large volumes of data which are fed to bioinformatics for processing and statistical analysis. “Our storage capacity is a good indicator of our increased ability to generate data,” says Bardsley. “When I joined the Centre we had 100GB. Since that time the Sanger Institute has mapped the entire human genome and we’ve seen exponential growth in genomic data, so much so that the Centre now has an 8TB SAN, and we know that will continue to grow.”
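As a rough back-of-the-envelope sketch of what that growth implies, going from 100GB to an 8TB SAN works out to roughly a doubling of storage every year, if we assume the growth happened over the six or so years of Bardsley’s tenure (the exact timespan is not stated in the case study):

```python
# Back-of-the-envelope sketch of the storage growth Bardsley describes.
# The six-year timespan is an assumption (based on his tenure), not a
# figure quoted by the Centre.
start_gb = 100          # storage when Bardsley joined the Centre
end_gb = 8 * 1024       # the current 8TB SAN, expressed in GB
years = 6               # assumed timespan

growth_factor = end_gb / start_gb                 # ~82x overall
annual_factor = growth_factor ** (1 / years)      # ~2.1x per year

print(f"Total growth: roughly {growth_factor:.0f}x")
print(f"Implied annual growth: roughly {annual_factor:.1f}x per year")
```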

“My job is really to ensure that all the data is safeguarded. Although we don’t put a dollar value on it, some of the research programmes have run for over three years and in real terms the scientific value and significance of the data is incalculable. Therefore we take a kind of ‘belt and braces, plus a bit of rope just in case’ approach to ensure its protection and back-up. Of course, being so heavily data-centric also means that we have to ensure the availability of the IT systems, since if we’re offline the researchers and administrative staff are severely impacted. And the cornerstone of availability is a stable power supply.”

“Unfortunately we’ve had real problems with the mains electricity. The local area has been subject to ongoing development, which seems to make power cuts inevitable. Over the last 6 years I can’t really remember a time when something wasn’t being built, and it looks as though this situation will remain unchanged for the next 4 to 5 years. In addition to these construction issues, we’ve experienced a number of unexplained outages which are probably due more to grid problems than contractor mishaps.”

“It was these outages that revealed the shortcomings in our previous uninterruptible power supply (UPS).  The first time that the power dropped, the UPS failed to support the load and the servers shut down because it turned out that the batteries were no longer able to hold a charge.  Unfortunately the legacy UPS had no facility allowing us to monitor or test the state of the batteries, so when that first outage struck we suddenly discovered that our UPS wasn’t a UPS at all!”

“We were fortunate that the effects of that first power cut weren’t more adverse – it took us around half a day to recover any unreliable data by doing a complete file system repair. When the system crashed, data was being written to disks that had suddenly disappeared off the network. But it could have been much worse: when power is cut to hard disks, the heads can crash onto the platters, causing physical damage to the surface of the disk and rendering both the disk and the data unreliable. In this case we just lost a few hours while we were offline, but as I’ve explained there’s always an expectation that the systems will be available 24×7, so downtime is lost time as far as research is concerned.”

“Our first step was, therefore, to replace our UPS. But as we considered the way our network was expanding, and our provision of core computing facilities to other research groups housed at the Centre, we realised that we really needed a more flexible, robust and autonomous physical layer for the network. So we started to look at what was available on the market and one solution immediately stood out – APC’s InfraStruXure architecture. It was brought to our attention by APC Gold Partner Latitude UK.”

“As an IT reseller, Latitude has long-term relationships with Sun and Fujitsu Siemens, and they had realised that increasingly high computer densities would have an impact on users’ ability to power and cool their hardware. Based on this, they’d courted APC to build a business around high-density data centres and computer rooms. From the Centre’s perspective, we were very comfortable partnering with a company that understood both the physical and IT implications of our situation.”

“For instance, this is a high-density environment to which we anticipate adding more blade and single-U servers. Therefore we need infrastructure which is expandable and scalable according to demand, but at the same time we don’t want the up-front costs and timescales associated with purpose-built facilities. We’ve outgrown our computer room once, so agility and flexibility are key to being able to adapt to indeterminate future power, rack and cooling requirements.”

According to Latitude UK director Nick Jago: “Once we’d performed a site survey and received a detailed inventory of the equipment to be hosted, we used APC’s online Build-Out Tool to specify the InfraStruXure solution required. The Build-Out Tool empowers companies like ours to configure systems and even lay out data centres with no more than a practical working knowledge of mechanical and electrical considerations.”

“The solution we recommended is an InfraStruXure Type B configuration consisting of a 40kW UPS, five equipment racks and managed PDUs. InfraStruXure (ISX) was designed to meet demands such as those found in the data centre at the Wellcome Trust Centre for Human Genetics; as a modular, pre-constructed solution it can be easily deployed and scaled up or down with the addition of cabinets, batteries and even power modules. What’s more, unlike custom-built data centre infrastructure, should the Centre outgrow its new data centre facility, ISX can simply be unbolted and re-sited with no loss of investment.”

For Tim Bardsley the new solution is already making life easier: “The industry-standard racking has made it easy for us to house our Dell and IBM kit side by side, and we’re confident that if we want to introduce other manufacturers’ equipment into our heterogeneous environment we’ll have no compatibility issues. Although we’ve yet to take full advantage of APC’s management software, InfraStruXure Manager, metered power distribution means that we can keep an exact measure of the capacity available for future expansion. With the current load, the ISX solution provides 1.5 hours of autonomy in the event of a power outage, although this will obviously change as more equipment is introduced – probably sooner rather than later, as we’ve just ordered an additional two equipment cabinets!”
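To illustrate why the quoted autonomy figure will shrink as kit is added, a simple inverse-proportional model (fixed stored energy divided by load) gives a feel for the trend. The 1.5-hour figure comes from the case study; the load values below are hypothetical, and real batteries discharge less efficiently at higher loads, so actual runtimes would be somewhat shorter:

```python
# Rough sketch of how UPS autonomy falls as load is added.
# Simple model: treat usable stored energy as fixed, so runtime ~ energy / load.
current_runtime_h = 1.5    # autonomy quoted at the current load (from the case study)
current_load_kw = 10.0     # hypothetical current load; not stated in the case study

stored_energy_kwh = current_runtime_h * current_load_kw  # implied usable energy

for added_kw in (0, 2, 5, 10):
    load = current_load_kw + added_kw
    runtime = stored_energy_kwh / load
    print(f"Load {load:4.1f} kW -> roughly {runtime:.2f} h autonomy")
```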
