AI use damages professional reputation, study suggests

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they may also quietly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

“Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI.

What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.

Fig. 1 from the paper “Evidence of a social evaluation penalty for using AI.” Credit: Reif et al.

“Testing a broad range of stimuli enabled us to examine whether the target’s age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations,” the authors wrote in the paper. “We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one.”

The hidden social cost of AI adoption

In the first experiment conducted by the Duke team, participants imagined using either an AI tool or a dashboard creation tool at work. Those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.

The second experiment confirmed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.


