Sixfold Boost: Scientists Develop Game-Changing AI Chip With Impressive Energy Efficiency

Future Computing Magnetic Semiconductor Chip Concept

Sieun Chae from Oregon State University has pioneered a new AI chip that enhances energy efficiency by six times using a novel material system. This breakthrough aims to cut down AI’s hefty energy consumption by mimicking the integrated processing methods of biological neural networks. Credit: SciTechDaily.com

A new AI chip could potentially increase energy efficiency by six times, aligning computation and data storage in a manner akin to biological neural networks and significantly reducing AI’s electrical footprint.

A researcher from the College of Engineering at Oregon State University has contributed to the development of a new artificial intelligence chip that boasts an energy efficiency improvement of six times compared to the current industry standard.

As the use of artificial intelligence soars, so does the amount of energy it requires. Projections show artificial intelligence accounting for half a percent of global energy consumption by 2027 – using as much energy annually as the entire country of the Netherlands.

Sieun Chae, assistant professor of electrical engineering and computer science, is working to help shrink the technology’s electricity footprint. She is researching chips, based on a novel material platform, that allow for both computation and data storage, mimicking the way biological neural networks handle information storage and processing.

Findings from her research were recently published in Nature Electronics.

Efficient AI Processing

“With the emergence of AI, computers are forced to rapidly process and store large amounts of data,” Chae said. “AI chips are designed to compute tasks in memory, which minimizes the shuttling of data between memory and processor; thus, they can perform AI tasks more energy efficiently.”

The chips feature components called memristors – short for memory resistors. Most memristors are made from a simple material system composed of two elements, but the ones in this study feature a new material system known as entropy-stabilized oxides, or ESOs. The ESOs comprise more than half a dozen elements, allowing their memory capabilities to be finely tuned.
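The compute-in-memory idea described above can be illustrated with a small sketch. In a memristor crossbar array, each device's conductance stores a network weight in place, inputs are applied as voltages, and the array sums the products as currents in one parallel analog step. The sketch below (a hypothetical simulation, not the paper's actual device model) shows why this avoids shuttling weights between memory and a separate processor:

```python
import numpy as np

# Hypothetical sketch of compute-in-memory with a memristor crossbar.
# Each memristor's conductance G[i, j] stores a network weight; the
# input vector is applied as row voltages, and Kirchhoff's current
# law sums the products on each output column: I = V @ G.

rng = np.random.default_rng(0)

# Conductance matrix: weights stored *in* the memory array itself
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 inputs x 3 outputs

# Input activations applied as voltages on the rows
V = rng.uniform(0.0, 1.0, size=4)

# Output currents, read in a single parallel analog operation --
# no movement of weights between memory and a separate processor
I = V @ G

# A conventional processor computes the same result, but must first
# fetch each weight from memory (the von Neumann bottleneck)
I_reference = sum(V[i] * G[i] for i in range(4))

assert np.allclose(I, I_reference)
```

The digital reference loop and the single matrix product produce identical results; the energy savings come from performing the multiply-accumulate where the weights already reside.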

Memristors are similar to biological neural networks in that neither has an external memory source – thus no energy is lost to moving data from the inside to the outside and back. By optimizing the <span class="glossaryLink" aria-describedby="tt" data-cmtooltip="

ESO
Created in 1962, the European Southern Observatory (ESO), is a 16-nation intergovernmental research organization for ground-based astronomy. Its formal name is the European Organization for Astronomical Research in the Southern Hemisphere.

” data-gt-translate-attributes=”[{"attribute":"data-cmtooltip", "format":"html"}]” tabindex=”0″ role=”link”>ESO composition that works best for specific AI jobs, ESO-based chips can perform tasks with far less energy than a computer’s central processing unit, Chae said.

Another benefit is that artificial neural networks would be able to process time-dependent information, such as audio and video data, because the ESOs’ composition can be tuned so the device operates on varied time scales.

Reference: “Efficient data processing using tunable entropy-stabilized oxide memristors” by Sangmin Yoo, Sieun Chae, Tony Chiang, Matthew Webb, Tao Ma, Hanjong Paik, Yongmo Park, Logan Williams, Kazuki Nomoto, Huili G. Xing, Susan Trolier-McKinstry, Emmanouil Kioupakis, John T. Heron and Wei D. Lu, 20 May 2024, Nature Electronics.
DOI: 10.1038/s41928-024-01169-1

Funded by the National Science Foundation, the study was led by researchers at the University of Michigan; Chae participated as a doctoral student at Michigan before joining the faculty at Oregon State.

The collaboration also included researchers from the University of Oklahoma, Cornell University, and Pennsylvania State University.