Meta to develop, use its in-house data centre AI chips this year; AI tools to cost over $30 billion annually

Meta will develop its own custom AI chips and deploy them in its data centres to power its AI tools. All of this, including the development and deployment of these new AI tools, could cost as much as $30 billion each year.

Meta Platforms, the parent company of Facebook, is planning to deploy a new version of its custom chip, known as “Artemis,” into its data centres in 2024, according to an internal document seen by Reuters.

Zuckerberg also outlined the company’s strategy to compete against Alphabet and Microsoft in the high-stakes AI arms race. Meta aims to leverage its vast walled garden of data, emphasizing the hundreds of billions of publicly shared images and tens of billions of public videos available within its platform. This is positioned as a key advantage over competitors like Google, Microsoft, and OpenAI, which primarily train their AI models on publicly crawled data from the web.

This second generation of Meta’s in-house silicon aims to support the company’s AI initiatives and could potentially reduce its reliance on NVIDIA chips, which currently dominate the market.

In addition to the generative AI technology already offered by Meta, Zuckerberg expressed ambitions for “general intelligence,” the concept of a general-purpose AI capable of outperforming humans in various tasks. The goal is to build the most popular and advanced AI products and services, providing users with a world-class AI assistant to enhance productivity.

However, achieving this vision comes with a hefty price tag. Meta’s capital expenditures could rise by up to $9 billion this year, reaching a total of between $30 billion and $37 billion, compared with $28.1 billion in 2023.

The company acknowledges that ambitious long-term AI research and product development efforts will require ongoing infrastructure investments beyond the current year. Despite the significant costs involved, Meta appears committed to positioning itself as a leader in the evolving landscape of artificial intelligence.

Meta has been investing billions of dollars in specialized chips and data centre reconfigurations to accommodate the growing demand for AI products in its platforms, including Facebook, Instagram, WhatsApp, and hardware devices like smart glasses.

The deployment of its own chip could help Meta optimize performance and efficiency on its specific workloads, potentially saving significant costs in annual energy and chip purchasing.

The new chip is designed for inference processing, in which trained models make ranking judgments and generate responses to user prompts. Meta CEO Mark Zuckerberg previously said the company plans to have around 350,000 NVIDIA “H100” processors by the end of the year, contributing to Meta’s overall computing capacity.

This move comes after Meta’s decision in 2022 to halt the first iteration of its in-house chip and opt for purchasing NVIDIA GPUs. The Artemis chip is part of Meta’s broader AI silicon project, with the company also reportedly working on a more ambitious chip capable of both training and inference processes.

Despite earlier challenges, an inference chip like Artemis could offer greater efficiency in handling Meta’s recommendation models compared to the power-hungry NVIDIA processors.

(With inputs from agencies)