AI's Impact on the Data Centre Industry

Generative AI, the technology behind OpenAI’s ChatGPT, has sparked a frenzy in every industry over how it could unlock time and cost efficiencies within businesses. However, as the integration of AI into the day-to-day activities of businesses and consumers continues to grow at pace, it will have a significant knock-on effect on the Data Centre Industry – the powerhouses that make all of this technology possible in the first place.

What is the impact of AI on the wider Data Centre Industry?

 

GPT-3, a single Large Language Model (LLM) with 175 billion parameters, required an estimated 936 MWh of data centre energy to train. This is a vast amount when you consider that the entire capacity of a tier 1 European market, such as London or Amsterdam, equates to ~900-1,000 MW – in other words, the training run consumed roughly as much energy as one of these markets uses in an hour at full load. It has been estimated that training GPT-3 on NVIDIA A100 GPUs would have required 1,024 GPUs, taken 34 days, and cost $4.6m. The release of GPT-3.5 brought a significant increase in performance for users thanks to the optimisations that were made, and a ~90% reduction in inference costs for OpenAI. The release of GPT-4 in March 2023 demonstrates the speed at which these models are advancing. The new model outperforms its predecessors at tasks requiring advanced reasoning and the understanding of complex instructions, while also improving creativity. To achieve this, the parameter count is believed to have increased significantly, although it remained undisclosed at the time of writing.
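
As a sanity check on these headline numbers, the short Python sketch below reproduces the 936 MWh figure from the 1,024-GPU, 34-day estimate. The per-GPU server power and PUE values are illustrative assumptions, not figures from OpenAI or Credit Suisse:

# Back-of-envelope check of the GPT-3 training energy figure.
# Assumed values (not from the cited sources): ~1 kW of all-in server
# power per A100 once CPUs, memory, and networking are included,
# and a facility PUE of ~1.12.
GPUS = 1024
DAYS = 34
KW_PER_GPU_SYSTEM = 1.0   # assumed all-in server power per GPU
PUE = 1.12                # assumed facility overhead factor

it_energy_kwh = GPUS * KW_PER_GPU_SYSTEM * DAYS * 24
facility_energy_mwh = it_energy_kwh * PUE / 1000
print(f"IT energy: {it_energy_kwh / 1000:.0f} MWh")       # ~836 MWh
print(f"Facility energy: {facility_energy_mwh:.0f} MWh")  # ~936 MWh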

 

Other major tech companies are also developing their own versions of Generative AI. Google is ensuring its search engine does not fall behind Microsoft’s Bing with the development of its Pathways Language Model (PaLM), which uses 540 billion parameters. As other competitors join the race in the AI industry, models will continue to grow exponentially and will subsequently require a substantial increase in server compute. Alongside this, given Microsoft’s investment in OpenAI, it won’t be long until the capabilities of Generative AI begin to be integrated into search engines (Bing) and word processing tools (Office 365).

 

ChatGPT has been reported to be the fastest-growing consumer application in history, registering 100m users and averaging 13m unique visitors per day in January 2023. As more businesses and users integrate AI into their workflows and tools, demand will only grow further in the coming years. Consequently, data centre infrastructure will need to be ready to meet this demand.

 

How can AI benefit Data Centre operations?

 

The Data Centre industry is still in the early stages of adopting Artificial Intelligence (AI) and Machine Learning (ML) algorithms as a core part of facility operations, with the goal of running facilities more efficiently, sustainably, and reliably.

 

So far, the industry has seen large operators (e.g. Google, Microsoft, Meta, IBM, Equinix, Digital Realty) successfully implement AI/ML techniques – such as regression analysis, random forests/decision trees, clustering, and neural networks – to optimise and enhance their facilities through predictive maintenance, resource allocation and performance, energy optimisation, and improved network security.
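
To make this concrete, here is a minimal, illustrative sketch of the kind of model involved: a random forest trained to predict facility PUE from a few telemetry signals. The data is synthetic and the feature set is assumed for illustration; it does not represent any operator’s actual system.

# Illustrative only: a toy random-forest model predicting facility PUE
# from synthetic sensor telemetry. Feature names and relationships are
# assumptions, not any operator's real data or pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
outside_temp = rng.uniform(-5, 35, n)   # ambient temperature, deg C
it_load = rng.uniform(0.3, 1.0, n)      # IT load, fraction of capacity
fan_speed = rng.uniform(0.2, 1.0, n)    # cooling fan speed, fraction of max
# Synthetic PUE: worsens with ambient temperature, improves with IT load.
pue = (1.1 + 0.01 * outside_temp - 0.15 * it_load
       + 0.05 * fan_speed + rng.normal(0, 0.02, n))

X = np.column_stack([outside_temp, it_load, fan_speed])
X_train, X_test, y_train, y_test = train_test_split(X, pue, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")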

 

In 2016, Google famously reported on using DeepMind’s AI/ML to operate its data centres more efficiently, with human input and vetting, achieving a 40% reduction in the energy used for cooling, equating to a ~15% reduction in PUE overhead. In 2017, DeepMind took this a step further by removing the human from the loop entirely: an automatic feedback loop allowed the AI control system to implement actions directly, without operator input. Within nine months, the system was delivering a ~30% average saving in cooling energy.
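
The arithmetic linking a cooling saving to a PUE change is worth spelling out. The sketch below uses illustrative numbers (a starting PUE of 1.5 and cooling assumed to be 70% of the non-IT overhead), not Google’s actual figures:

# PUE = total facility energy / IT energy.
# Illustrative numbers only - not Google's actual figures.
it_energy = 1.0                      # normalise IT load to 1
pue_before = 1.5                     # assumed starting PUE
overhead = pue_before - 1.0          # non-IT energy (cooling, losses, ...)
cooling_share = 0.7                  # assumed share of overhead that is cooling
cooling = overhead * cooling_share

cooling_after = cooling * (1 - 0.40)            # a 40% cooling reduction
pue_after = 1.0 + (overhead - cooling) + cooling_after
print(f"PUE: {pue_before:.2f} -> {pue_after:.2f}")   # 1.50 -> 1.36
# The exact PUE impact depends entirely on the starting PUE and the
# cooling share, which is why reported percentages vary between sites.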

 

In late 2021, Microsoft implemented AI-based anomaly detection to identify and mitigate unusual power and water usage events within its data centre facilities. Meanwhile, Meta has reportedly been developing physical models to simulate extreme conditions, feeding this data into the AI models responsible for optimising power consumption, cooling, and airflow across its servers.
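
For a sense of what such anomaly detection involves, the sketch below flags unusual power readings with an isolation forest. It is a generic illustration of this class of technique, not Microsoft’s implementation, and the telemetry is synthetic:

# Illustrative only: flagging unusual power readings with an isolation
# forest. Synthetic telemetry; not Microsoft's actual system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_power = rng.normal(500, 20, size=(2000, 1))   # kW, typical readings
spikes = rng.normal(650, 10, size=(20, 1))           # injected anomalies
readings = np.vstack([normal_power, spikes])

detector = IsolationForest(contamination=0.01, random_state=1)
labels = detector.fit_predict(readings)              # -1 marks anomalies
print(f"Flagged {(labels == -1).sum()} of {len(readings)} readings")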

 

The industry widely acknowledges that implementing AI within facilities can unlock many cost and operational efficiencies, but due to the cost and complexity of development, major cloud and colocation providers currently lead the charge. However, as the AI product market grows year-on-year, it is expected to become more cost-effective for smaller operators to use these advanced technologies to enhance their facilities.

 

What does this mean for the future of the Data Centre Industry?

 

As these power-hungry, compute-intensive AI workloads continue to grow rapidly, along with increasing user volumes, significantly more data centre resources will be required to enable further development and integration. However, as the software and hardware become more optimised and distributed over time, it is expected that, after an initial build-out to meet the demands of these new workloads, the DC industry’s requirements will ease back from this peak in the long term.

 

The data centre industry itself has a lot of catching up to do in terms of integrating AI into the day-to-day operations of facilities. That the most headline-grabbing activity dates as far back as 2016/2017 shows how slow the industry has been to adapt to change. It is likely that over the next 5-10 years, as AI becomes more accessible, cost-effective, and reliable, we will see greater acceptance from customers, and adoption by operators, of AI operating facilities on a larger scale.

 

Note: This article was NOT written by ChatGPT (Is this what our future holds?)

 

Author: Ross Smyth, VIPA Digital

 

Sources: OpenAI, Credit Suisse Research, DeepMind, Reuters.

 
