The impact of AI on data centres
Before we start, let's get pedantic: strictly speaking, artificial intelligence remains a purely theoretical concept. True AI, a sentient computer capable of initiative and human interaction, remains within the realm of science fiction. The AI research field is full of conflicting ideas, and it's not clear whether we can actually build a machine that replicates the inner workings of the human brain.
And yet, if you work in the IT industry, you'll have seen proclamations that one product or another delivers AI functionality. Just a few years ago, the same functionality would have been called data analytics.
That said, machine learning and related techniques are already producing some impressive results, so this post will look at the potential near-future implications of AI research – if we're being optimistic.
In the data centre
The impact of AI on data centres can be divided into two broad categories – the impact on hardware and architectures, as the users start adopting AI-inspired technologies, and the impact on the management and operation of the facilities themselves.
We'll start with the first category: it turns out that machine learning and services like speech and image recognition require a new breed of servers, equipped with novel components such as Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). All of these draw massive amounts of power, and produce correspondingly massive amounts of heat.
Nvidia, the world's largest supplier of graphics chips, has just announced DGX-2, a 10U box for algorithm training that includes 16 Volta-based V100 GPUs along with two Intel Xeon Platinum CPUs and 30TB of flash storage. DGX-2 delivers up to two petaflops of compute, and consumes a whopping 10kW of power – more than an entire 42U rack of traditional servers.
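To put those numbers in perspective, here is a back-of-the-envelope density comparison using the figures quoted above. The 6kW figure for a full traditional rack is an assumption for illustration, not a number from the article:

```python
# Rough power-density comparison: DGX-2 vs. a traditional 42U rack.
# The DGX-2 figures are as quoted; the 6 kW whole-rack draw is an
# assumed, illustrative value for a conventionally populated rack.
DGX2_POWER_KW = 10.0   # quoted draw of a single DGX-2
DGX2_HEIGHT_U = 10     # chassis height in rack units

RACK_POWER_KW = 6.0    # assumed draw of a full traditional 42U rack
RACK_HEIGHT_U = 42

dgx2_density = DGX2_POWER_KW / DGX2_HEIGHT_U   # kW per rack unit
rack_density = RACK_POWER_KW / RACK_HEIGHT_U

print(f"DGX-2:            {dgx2_density:.2f} kW/U")
print(f"Traditional rack: {rack_density:.2f} kW/U")
print(f"Density ratio:    {dgx2_density / rack_density:.0f}x")
```

Even with generous assumptions about the traditional rack, the per-rack-unit density of an AI training box comes out several times higher – which is exactly why cooling becomes the bottleneck.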
And Nvidia is not alone in pushing the envelope on power density: DGX-2 is actually a reference design, and server vendors have been given permission to iterate and create their own variants, some of which might be even more power-hungry. Meanwhile, Intel has just confirmed rumours that it’s working on its own data centre GPUs – expected to hit the market in 2020.
As power densities go up, so does the amount of heat that needs to be removed from the servers, and this will inevitably result in growing adoption of liquid cooling. Dumping $200,000 worth of equipment into a tub of mineral oil or bringing water pipes into the rack might not seem like a good idea today, but these approaches might offer the only way to cool servers of the future.
As a consequence, AI research will require data centres engineered for higher power densities, with additional cooling and very, very strong floors.
For the data centre
But machine learning is also useful in the management of data centres themselves, where it can help optimize energy consumption and server utilization.
For example, an algorithm could spot under-utilized servers, automatically move the workloads and either switch off idle machines to conserve energy, or rent them out as part of a cloud service, creating an additional revenue stream.
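The consolidation idea above can be sketched in a few lines. This is a toy heuristic with made-up server names and an assumed utilization threshold – a real system would rely on demand forecasting and live-migration APIs rather than a single cutoff:

```python
# Minimal sketch of workload consolidation: flag servers below an
# (assumed) utilisation threshold, migrate their workloads to a busy
# host, and mark the idle machines for power-down or rental.
IDLE_THRESHOLD = 0.15  # assumption: below 15% CPU counts as under-utilised

servers = {            # hypothetical fleet: name -> CPU utilisation
    "node-a": 0.72,
    "node-b": 0.08,
    "node-c": 0.05,
    "node-d": 0.60,
}

idle = [name for name, util in servers.items() if util < IDLE_THRESHOLD]
active = [name for name, util in servers.items() if util >= IDLE_THRESHOLD]

for name in idle:
    print(f"{name}: migrate workloads to {active[0]}, then power down")
```

The interesting part in practice isn't the threshold check but deciding *when* it is safe to migrate – which is where the machine learning comes in.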
Google has famously claimed that it used AI to reduce its data centre Power Usage Effectiveness rating by 15 percent, saving millions on electricity. While the company is reluctant to share this technology, other businesses are bringing similar capabilities mainstream.
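For readers unfamiliar with the metric: PUE is total facility energy divided by the energy delivered to IT equipment, with 1.0 as the theoretical ideal. The sketch below uses illustrative numbers – not Google's actual figures – and models the 15 percent improvement as applying to the overhead above 1.0:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT energy.
# Figures below are illustrative assumptions, not Google's real data.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

before = pue(1500.0, 1000.0)            # a facility running at PUE 1.5
after = 1.0 + (before - 1.0) * 0.85     # 15% cut to the overhead portion

print(f"PUE before: {before:.3f}, after: {after:.3f}")
```

At the scale of a hyperscale facility, shaving even a few hundredths off the PUE translates into millions in electricity savings, which is why this use case attracted attention so quickly.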
American software vendor Nlyte has just partnered with IBM to integrate Watson – perhaps the most famous ‘cognitive computing’ product to date – into its Data Centre Infrastructure Management (DCIM) products.
“Behold, a new member of the data centre team, one that never takes a vacation or your lunch from the breakroom,” quipped Amy Benett, North American marketing lead for Watson IoT.
Beyond management, AI could improve physical security by tracking individuals throughout the data centre using CCTV, and alerting its masters when something looks out of order.
I think it’s a safe bet to say that every DCIM vendor will eventually offer some kind of AI functionality. Or at least something they call AI functionality.