How AI’s “Perfect” Emails Can Backfire With Your Team

Polished emails may look great, but when managers lean too heavily on AI, employees often see the messages as fake. The study shows that trust takes a hit, even if the writing looks professional. Credit: Shutterstock

AI tools like ChatGPT and Gemini are helping professionals write smoother, more polished emails—but new research reveals a hidden cost.

A large survey of over 1,100 workers found that while AI assistance makes managers’ messages look more professional, it also erodes trust when overused. Employees are generally fine with light editing help, but once AI starts shaping the tone or entire message, supervisors come across as insincere, lazy, or less competent.

AI in the Workplace: From Novelty to Norm

More than 75% of professionals now rely on AI in their daily work, often using tools such as ChatGPT, Gemini, Copilot, or Claude to draft and polish emails. These tools are widely praised for making writing easier, but do they actually help when it comes to communication between managers and employees?

A new survey of 1,100 professionals highlights an important contradiction in workplace communication. While AI can make a manager’s email appear polished and professional, frequent reliance on it can reduce employees’ trust in their supervisors.

“We see a tension between perceptions of message quality and perceptions of the sender,” said Anthony Coman, Ph.D., a researcher at the University of Florida’s Warrington College of Business and study co-author. “Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance.”

Perceptions of AI Use in Emails

Published in the International Journal of Business Communication, the study by Coman and co-author Peter Cardon, Ph.D., of the University of Southern California, asked professionals to evaluate emails they were told had been written with varying levels of AI help (low, medium, and high). Participants assessed not only the content of congratulatory messages but also their perception of the person who sent them.

The Perception Gap Between Managers and Employees

While AI-assisted writing was generally seen as efficient, effective, and professional, Coman and Cardon found a “perception gap” in messages that were written by managers versus those written by employees.

“When people evaluate their own use of AI, they tend to rate their use similarly across low, medium, and high levels of assistance,” Coman explained. “However, when rating others’ use, magnitude becomes important. Overall, professionals view their own AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors.”

The Tipping Point for Negative Perceptions

While low levels of AI help, such as grammar checks or light editing, were generally acceptable, higher levels of assistance triggered negative perceptions. The perception gap is especially pronounced when employees sense heavy AI involvement in a message, calling into question the manager’s authorship, integrity, caring, and competence.

The impact on trust was substantial: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared with 83% for low-assistance messages. Similarly, while 95% found low-AI supervisor messages professional, that figure dropped to 69% to 73% when supervisors relied heavily on AI tools.

When AI Feels Like Laziness

The findings reveal that employees can often detect AI-generated content and interpret its use as laziness or a lack of caring. When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees perceive them as less sincere and question their leadership abilities.

“In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust.

Choosing the Right Messages for AI Assistance

The study suggests managers should carefully consider message type, level of AI assistance, and relational context before using AI in their writing. While AI may be appropriate, and professionally received, for informational or routine communications such as meeting reminders or factual announcements, relationship-oriented messages that require empathy, praise, congratulations, motivation, or personal feedback are better handled with minimal technological intervention.

Reference: “Professionalism and Trustworthiness in AI-Assisted Workplace Writing: The Benefits and Drawbacks of Writing With AI” by Peter W. Cardon and Anthony W. Coman, 2025, International Journal of Business Communication.
DOI: 10.1177/23294884251350599
