Johns Hopkins Study Challenges Billion-Dollar AI Models

[Image: abstract brain technology illustration. New research from Johns Hopkins University shows that certain biologically inspired AI architectures can mimic human brain activity even before training on data, challenging long-held assumptions about how AI must learn. Credit: Stock]

Choosing the right blueprint can accelerate learning in visual AI systems.

Artificial intelligence systems built with biologically inspired structures can produce activity patterns similar to those seen in the human brain even before they undergo any training, according to new research from Johns Hopkins University.

The study, published in Nature Machine Intelligence, suggests that the design of an AI model may matter more than the extensive deep-learning training runs that often take months, consume enormous amounts of energy, and cost billions of dollars.

“The way that the AI field is moving right now is to throw a bunch of data at the models and build compute resources the size of small cities. That requires spending hundreds of billions of dollars. Meanwhile, humans learn to see using very little data,” said lead author Mick Bonner, assistant professor of cognitive science at Johns Hopkins University. “Evolution may have converged on this design for a good reason. Our work suggests that architectural designs that are more brain-like put the AI systems in a very advantageous starting point.”

Bonner and his colleagues examined three major categories of network designs that frequently guide the construction of modern AI systems: transformers, fully connected networks, and convolutional networks.
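
To make the three families concrete, here is a minimal sketch in PyTorch that instantiates an untrained representative of each. The specific choices (a ResNet-50, a ViT-B/16 vision transformer, and a small fully connected multilayer perceptron) are illustrative stand-ins, not necessarily the variants tested in the study.

```python
# A minimal sketch (not the authors' code) of instantiating an untrained
# example of each of the three architecture families with PyTorch/torchvision.
import torch
import torch.nn as nn
from torchvision import models

# Convolutional network: a standard ResNet, randomly initialized (weights=None).
cnn = models.resnet50(weights=None)

# Transformer: a Vision Transformer, also untrained.
transformer = models.vit_b_16(weights=None)

# Fully connected network: a simple multilayer perceptron over flattened pixels.
fully_connected = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)

# All three accept the same image batch, and none has seen any training data.
images = torch.randn(8, 3, 224, 224)  # random stand-in for real stimulus images
with torch.no_grad():
    for name, net in [("cnn", cnn), ("transformer", transformer),
                      ("mlp", fully_connected)]:
        net.eval()
        print(name, net(images).shape)
```

Passing weights=None leaves every network at its random initialization, matching the "before any training" setting the study examines.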

Testing AI Architectures Against Brain Activity

The scientists repeatedly modified the three blueprints, or architectures, to build dozens of distinct artificial neural networks. They then exposed these new, untrained networks to images of objects, people, and animals and compared the models' responses with the brain activity of humans and primates viewing the same images.
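
The comparison step can be illustrated with representational similarity analysis, a common way of scoring how well a model's responses match brain recordings in this literature; the paper's exact metric may differ, and the arrays below are random stand-ins for real activations and recordings.

```python
# An illustrative sketch of the comparison logic: extract a model's responses
# to a stimulus set and score how well its representational geometry matches
# recorded brain responses. RSA-style correlation is a standard approach in
# this field; it is an assumption here, not necessarily the paper's metric.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix (condensed form): pairwise
    correlation distances between response patterns, one row per image."""
    return pdist(responses, metric="correlation")

def brain_alignment(model_responses: np.ndarray,
                    brain_responses: np.ndarray) -> float:
    """Spearman correlation between the model's and the brain's RDMs."""
    rho, _ = spearmanr(rdm(model_responses), rdm(brain_responses))
    return rho

# Hypothetical data: 100 images, a model layer with 512 units, 200 voxels.
model_acts = np.random.randn(100, 512)   # untrained-network activations
brain_acts = np.random.randn(100, 200)   # fMRI or electrode recordings
print(f"alignment score: {brain_alignment(model_acts, brain_acts):.3f}")
```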

When the transformers and fully connected networks were modified by giving them many more artificial neurons, their resemblance to brain activity changed little. Tweaking the architectures of convolutional neural networks in a similar way, however, allowed the researchers to generate activity patterns in the AI that better simulated patterns in the human brain.
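
The kind of architectural tweak described above can be sketched as a width multiplier on a small convolutional network; the layer sizes here are illustrative assumptions, not the paper's actual models.

```python
# A hedged sketch of scaling a convolutional network's width (channel counts)
# by a factor, producing a family of untrained variants of one blueprint.
import torch.nn as nn

def make_convnet(width: int = 64) -> nn.Sequential:
    """A small convolutional network whose capacity grows with `width`."""
    return nn.Sequential(
        nn.Conv2d(3, width, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(width, width * 2, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(width * 2, width * 4, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

# Sweep width multipliers to generate untrained variants, each of which
# could then be scored for brain alignment as in the earlier sketch.
for width in (16, 64, 256):
    net = make_convnet(width)
    n_params = sum(p.numel() for p in net.parameters())
    print(f"width={width}: {n_params:,} parameters")
```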

Architecture Matters More Than Expected

The untrained convolutional neural networks rivaled conventional AI systems, which are typically trained on millions or billions of images, at matching brain activity, the researchers said, suggesting that architecture plays a more important role than previously realized.

“If training on massive data is really the crucial factor, then there should be no way of getting to brain-like AI systems through architectural modifications alone,” Bonner said. “This means that by starting with the right blueprint, and perhaps incorporating other insights from biology, we may be able to dramatically accelerate learning in AI systems.”

Next, the researchers plan to develop simple learning algorithms modeled on biology that could inform a new deep learning framework.

Reference: “Convolutional architectures are cortex-aligned de novo” by Atlas Kazemian, Eric Elmoznino and Michael F. Bonner, 13 November 2025, Nature Machine Intelligence.
DOI: 10.1038/s42256-025-01142-3
