
Personalized algorithms can quietly limit what people explore while making them feel more certain they understand a topic.
Personalized recommendation systems are built to surface online content that matches a person's past behavior, but new research suggests these same systems may interfere with learning. In the study, when an algorithm determined which information people saw, learning outcomes suffered.
The researchers found that when participants relied on algorithm-selected information to study a topic they knew nothing about, they explored only a narrow slice of the available material. Instead of examining the full range of information, they focused on a limited subset.
As a result, participants often answered test questions incorrectly. Even so, they expressed strong confidence in their wrong answers.
The findings are troubling, said Giwon Bahg, who led the research as part of his doctoral dissertation in psychology at The Ohio State University.
Bias Can Form Even Without Prior Knowledge
Previous studies of personalized algorithms have largely examined how they influence opinions about political or social topics that people already understand to some degree.
“But our study shows that even when you know nothing about a topic, these algorithms can start building biases immediately and can lead to a distorted view of reality,” said Bahg, now a postdoctoral scholar at Pennsylvania State University.
The research was published in the Journal of Experimental Psychology: General.
Brandon Turner, a study co-author and professor of psychology at Ohio State, said the findings indicate that people often treat limited, algorithm-curated information as if it represents the full picture.
“People miss information when they follow an algorithm, but they think what they do know generalizes to other features and other parts of the environment that they’ve never experienced,” Turner said.
How Recommendation Systems Can Skew Understanding
The researchers illustrated this effect with a simple example. Imagine someone who has never watched movies from a particular country and decides to explore them for the first time. An on-demand streaming service offers a list of recommended films.
The person randomly selects an action-thriller because it appears first. After that choice, the algorithm continues to recommend similar action-thriller movies, which the person keeps watching.
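To make that feedback loop concrete, here is a minimal sketch of a similarity-based recommender. The film titles, genre vectors, and cosine-similarity scoring below are illustrative assumptions, not details from the study or from any particular streaming service:

```python
import numpy as np

# Hypothetical catalog: each film is described by a genre vector
# (action, thriller, drama, comedy). All values are invented for illustration.
catalog = {
    "Action Thriller A": np.array([0.9, 0.8, 0.1, 0.0]),
    "Action Thriller B": np.array([0.8, 0.9, 0.2, 0.1]),
    "Family Drama":      np.array([0.1, 0.1, 0.9, 0.2]),
    "Romantic Comedy":   np.array([0.0, 0.1, 0.3, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(watch_history, k=2):
    """Rank unseen films by similarity to the viewer's average taste vector."""
    taste = np.mean([catalog[title] for title in watch_history], axis=0)
    unseen = [title for title in catalog if title not in watch_history]
    return sorted(unseen, key=lambda t: cosine(catalog[t], taste), reverse=True)[:k]

# The viewer happens to start with an action thriller; the next recommendation
# is the most similar title, so other genres never reach the top of the list.
history = ["Action Thriller A"]
next_pick = recommend(history)[0]
print(next_pick)  # "Action Thriller B"
```

Each new choice reinforces the taste profile, so the same narrow slice of the catalog keeps rising to the top.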
“If this person’s goal, whether explicit or implicit, was in fact to understand the overall landscape of movies in this country, the algorithmic recommendation ends up seriously biasing one’s understanding,” the authors wrote.
By following this path, the person is likely to overlook well-regarded films in other genres. They may also develop inaccurate and overly broad ideas about the country’s popular culture and society after seeing only a narrow type of movie, the researchers said.
Testing Algorithmic Learning in a Controlled Experiment
To examine how this process unfolds, Bahg and his colleagues conducted an online experiment involving 346 participants.
To eliminate prior knowledge, the researchers created a fictional learning task. Participants were asked to study categories of imaginary, crystal-like aliens.
Each alien type had six defining features that varied across categories. For example, one part of an alien might appear as a square box that was dark black for some types and pale gray for others.
The goal was to learn how to correctly identify the different aliens without being told how many types existed.
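One hypothetical way to picture such a category structure is shown below; the feature names and values are invented for illustration and differ from the paper's actual stimuli:

```python
# Each fictional alien category is defined by six features, and categories
# differ in which value each feature takes (all names/values are made up).
alien_categories = {
    "type_A": {"box": "dark black", "spike": "long",  "core": "round",
               "tail": "curled",    "shell": "rough", "glow": "dim"},
    "type_B": {"box": "pale gray",  "spike": "short", "core": "round",
               "tail": "straight",  "shell": "rough", "glow": "bright"},
}

def classify(observed, categories=alien_categories):
    """Guess the category whose feature values best match what was observed."""
    return max(categories, key=lambda c: sum(observed.get(k) == v
                                             for k, v in categories[c].items()))
```

A learner who inspects only one or two features has little basis for telling the categories apart, even if those few features feel diagnostic.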
When Algorithms Guide What People Explore
During the experiment, the aliens’ features were hidden behind gray boxes. In one condition, participants were required to examine every feature, allowing them to build a complete understanding of how the features related to each alien type.
In another condition, participants chose which features to click while a personalization algorithm steered them toward the study items it predicted they were most likely to explore, encouraging repeated focus on the same features over time. Participants could skip the remaining features, although all of the information was still available if they chose to view it.
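One way such a selection policy can narrow exploration is a simple rich-get-richer rule. The sketch below is a hypothetical illustration, not the algorithm the researchers actually used; the feature names, weighting rule, and sampling step are all assumptions:

```python
import random
from collections import Counter

FEATURES = [f"feature_{i}" for i in range(6)]  # six defining alien features

def personalized_offer(click_history, bias=3.0):
    """Offer the next feature to inspect, weighting previously clicked
    features more heavily (an illustrative rule, not the study's algorithm)."""
    counts = Counter(click_history)
    weights = [1.0 + bias * counts[f] for f in FEATURES]
    return random.choices(FEATURES, weights=weights, k=1)[0]

# Simulate a learner who simply accepts whatever the algorithm offers.
random.seed(0)
clicks = []
for _ in range(30):
    clicks.append(personalized_offer(clicks))

print(Counter(clicks))  # sampling concentrates on a few early-clicked features
```

Because early clicks raise the odds of seeing the same features again, the simulated learner ends up inspecting a small subset of the six features over and over, which mirrors the selective sampling the study describes.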
The results showed a clear pattern. Participants guided by the personalized algorithm examined fewer features and did so in a consistently selective way. When tested on new alien examples they had not seen before, they frequently misclassified them. Despite this, they remained confident in their judgments.
“They were even more confident when they were actually incorrect about their choices than when they were correct, which is concerning because they had less knowledge,” Bahg said.
Real-World Implications for Learning and Society
Turner said the findings raise important concerns beyond the laboratory.
“If you have a young kid genuinely trying to learn about the world, and they’re interacting with algorithms online that prioritize getting users to consume more content, what is going to happen?” Turner said.
“Consuming similar content is often not aligned with learning. This can cause problems for users and ultimately for society.”
Reference: “Algorithmic Personalization of Information Can Cause Inaccurate Generalization and Overconfidence” by Giwon Bahg, Vladimir M. Sloutsky and Brandon M. Turner, September 2025, Journal of Experimental Psychology: General.
DOI: 10.1037/xge0001763
Vladimir Sloutsky, professor of psychology at Ohio State, was also a co-author.