MIT’s Optical AI Chip That Could Revolutionize 6G at the Speed of Light

Artist’s Interpretation of New Optical Processor for an Edge Device
This image shows an artist’s interpretation of a new optical processor for an edge device, developed by MIT researchers, that performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds. Credit: Sampson Wilcox, Research Laboratory of Electronics

By enabling deep learning to run at the speed of light, this chip could allow edge devices to perform real-time data analysis with enhanced capabilities.

As more connected devices require greater bandwidth for activities like teleworking and cloud computing, managing the limited wireless spectrum shared by all users is becoming increasingly difficult.

To address this, engineers are turning to artificial intelligence to manage the wireless spectrum dynamically, aiming to reduce latency and improve performance. However, most AI techniques for processing and classifying wireless signals consume significant power and cannot operate in real time.

Now, researchers at MIT have created a new AI hardware accelerator specifically designed for wireless signal processing. This optical processor performs machine-learning tasks at the speed of light, classifying wireless signals within nanoseconds.

The photonic chip operates about 100 times faster than the best available digital alternatives and achieves around 95 percent accuracy in signal classification. It is also scalable and adaptable for various high-performance computing tasks. In addition, the chip is smaller, lighter, more affordable, and more energy-efficient than traditional digital AI hardware.

This technology could be especially valuable for future 6G wireless systems, such as cognitive radios that improve data rates by adjusting wireless modulation formats based on real-time conditions.

By allowing edge devices to carry out deep-learning computations in real time, the hardware accelerator could significantly speed up a range of applications beyond signal processing. These include enabling autonomous vehicles to respond instantly to environmental changes or allowing smart pacemakers to monitor heart health continuously.

“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper.

He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research was published in Science Advances.

Light-speed processing

Current digital AI accelerators for wireless signal processing work by converting the signal into an image and passing it through a deep-learning model for classification. Although this method is very accurate, deep neural networks require significant computing power, making the approach unsuitable for many applications that require fast, real-time responses.
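As a rough illustration of this digital baseline (the exact pipeline used by commercial accelerators is not described in the article), the sketch below converts a sampled signal into a spectrogram-style time-frequency "image" using a naive short-time DFT; in a real system, an image like this would then be fed to a deep neural network for classification. Pure-Python toy code, not an implementation of any specific product:

```python
import cmath
import math

def spectrogram(signal, win_len=32, hop=16):
    """Naive short-time DFT: slice the signal into overlapping windows and
    take the magnitude of each window's DFT, yielding a 2-D grid of
    time frames x frequency bins -- the "image" a classifier would see."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        win = signal[start:start + win_len]
        row = []
        for k in range(win_len // 2):  # keep non-negative frequencies only
            acc = sum(x * cmath.exp(-2j * math.pi * k * n / win_len)
                      for n, x in enumerate(win))
            row.append(abs(acc))
        frames.append(row)
    return frames

# A pure tone should light up a single frequency bin in every frame.
fs, f0 = 256, 40.0
tone = [math.sin(2 * math.pi * f0 * n / fs) for n in range(fs)]
img = spectrogram(tone)
peak_bin = max(range(len(img[0])), key=lambda k: img[0][k])
print(peak_bin)  # bin f0 * win_len / fs = 40 * 32 / 256 = 5
```

The cost the article alludes to is visible even here: every window requires a full transform before the neural network does any work, and the network itself then adds many more multiply-accumulate operations.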

Optical systems can accelerate deep neural networks by encoding and processing data using light, which is also less energy intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when used for signal processing, while ensuring the optical device is scalable.

The researchers tackled that problem head-on by developing an optical neural network architecture designed specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN).

The MAFT-ONN addresses the problem of scalability by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain — before the wireless signals are digitized.
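The analog hardware itself cannot be reproduced in software, but the mathematical appeal of working in the frequency domain can be illustrated digitally: by the convolution theorem, one element-wise multiplication of two spectra implements an entire filtering (convolution) operation over the signal, so a single multiplicative element can act on many frequency components at once. A toy numerical check in pure Python:

```python
import cmath
import math

def dft(x):
    """Discrete Fourier transform (naive O(N^2) form)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def circular_conv(x, h):
    """Direct circular convolution in the time domain."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0]   # toy signal
h = [0.5, 0.25, 0.0, 0.0]   # toy filter

# One element-wise multiply in the frequency domain...
via_freq = idft([a * b for a, b in zip(dft(x), dft(h))])
# ...equals a full convolution in the time domain.
direct = circular_conv(x, h)
assert all(abs(a.real - b) < 1e-9 for a, b in zip(via_freq, direct))
```

This is only the standard convolution theorem, not the MAFT-ONN's specific physics; the point is that frequency-domain multiplication packs a lot of computation into a single operation, which is what the analog optical multiplier exploits.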

The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.

Thanks to this innovative design, they only need one MAFT-ONN device per layer for the entire optical neural network, as opposed to other methods that require one device for each individual computational unit, or “neuron.”

“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.

The researchers accomplish this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also allows them to create an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.

Results in nanoseconds

MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along for later operations the edge device performs. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal to extract the data it carries.

One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.

“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.

When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot and quickly converged to more than 99 percent accuracy using multiple measurements. MAFT-ONN required only about 120 nanoseconds to perform the entire process.

“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.
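The article does not spell out how the multiple measurements are combined; one common assumption is a majority vote over independent single-shot inferences, under which accuracy converges rapidly toward 100 percent. A toy calculation using the reported 85 percent single-shot accuracy:

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a strict majority of n independent shots,
    each correct with probability p, votes for the right class."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.85  # single-shot accuracy reported for MAFT-ONN
for n in (1, 5, 15):
    print(n, round(majority_accuracy(p, n), 4))
```

With a single shot the accuracy is 0.85; by 15 shots the majority vote is correct more than 99 percent of the time, and because each shot takes only nanoseconds, the extra measurements cost very little wall-clock time, which matches Davis's remark above.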

While state-of-the-art digital radio-frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.

Moving forward, the researchers want to employ what are known as multiplexing schemes so they can perform more computations and scale up the MAFT-ONN. They also want to extend their work to more complex deep-learning architectures that could run transformer models or large language models (LLMs).

Reference: “RF-photonic deep learning processor with Shannon-limited data movement” by Ronald Davis III, Zaijun Chen, Ryan Hamerly and Dirk Englund, 11 June 2025, Science Advances.
DOI: 10.1126/sciadv.adt3558

Funding: This work was funded, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.
