Researchers surprised to find less-educated areas adopting AI writing tools faster

Corporate and diplomatic trends in AI writing

According to the researchers, all sectors they analyzed (consumer complaints, corporate communications, and job postings) showed similar adoption patterns: sharp increases beginning three to four months after ChatGPT’s November 2022 launch, followed by stabilization in late 2023.

Organization age emerged as the strongest predictor of AI writing usage in the job posting analysis. Companies founded after 2015 showed adoption rates up to three times higher than firms established before 1980, reaching 10–15 percent AI-modified text in certain roles, compared with below 5 percent for older organizations. Smaller companies also incorporated AI more readily than larger organizations.

When examining corporate press releases by sector, science and technology companies integrated AI most extensively, with an adoption rate of 16.8 percent by late 2023. Business and financial news (14–15.6 percent) and people and culture topics (13.6–14.3 percent) showed slightly lower but still significant adoption.

In the international arena, Latin American and Caribbean UN country teams showed the highest adoption among international organizations at approximately 20 percent, while African states, Asia-Pacific states, and Eastern European states demonstrated more moderate increases to 11–14 percent by 2024.

Implications and limitations

In the study, the researchers acknowledge limitations in their analysis due to its focus on English-language content. As noted earlier, they also found they could not reliably detect human-edited AI-generated text or text generated by newer models instructed to imitate human writing styles. As a result, the researchers suggest their findings represent a lower bound of actual AI writing tool adoption.

The researchers note that the plateauing of AI writing adoption in 2024 might reflect either market saturation or increasingly sophisticated LLMs producing text that evades detection methods. They conclude that we now live in a world where distinguishing between human and AI writing is becoming progressively more difficult, with implications for communication across society.

“The growing reliance on AI-generated content may introduce challenges in communication,” the researchers write. “In sensitive categories, over-reliance on AI could result in messages that fail to address concerns or overall release less credible information externally. Over-reliance on AI could also introduce public mistrust in the authenticity of messages sent by firms.”


