According to the study, 25 percent of press releases published on Newswire were found to be AI-assisted.
The use of AI can lower a company's credibility, the study claimed.
Artificial intelligence (AI) has permeated nearly every layer of human society, and new and broader use cases continue to emerge with each passing day. A new study has now found that more than 25 percent of corporate press releases and external communications were either generated by AI chatbots or heavily modified using them. Press releases in the technology and business categories, in particular, witnessed the sharpest uptick in AI usage. Beyond press releases, the study also detected AI usage in job postings on LinkedIn and in communications from the United Nations (UN).
According to a new study published in the journal Patterns, the adoption of AI-generated and AI-assisted writing in corporate communications increased significantly in early 2023, just a couple of months after the launch of ChatGPT in November 2022.
The study analysed 5,37,413 corporate press releases and noted a sharp rise in AI usage. It claimed that before the launch of ChatGPT, the share of content flagged as AI-assisted hovered at the 2-3 percent mark, a level consistent with false positives. After the launch of the OpenAI chatbot, the numbers quickly spiked: Newswire, for instance, jumped to 25 percent AI-assisted content by the end of 2023, while PRWeb and PRNewswire plateaued at around 15 percent.
Additionally, the study noted that the categories “business and money” and “science and technology” witnessed the highest adoption of AI writing, with tech alone reaching nearly 17 percent by the fourth quarter of 2023.
While the trend highlights that businesses are readily adopting AI to improve the speed and cost of content creation, the study concluded that reliance on these tools can also flatten the nuance with which information is presented, making a company appear less credible.
To determine whether a press release contains AI-generated content, the researchers used a distributional large language model (LLM) quantification framework. It estimates the share of AI-generated content by comparing the frequency with which each word appears in the text against the word-frequency distribution typical of AI-generated content on the same topic.
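To give a rough sense of how distributional quantification works, the toy sketch below estimates what fraction of a corpus is AI-assisted by fitting its word frequencies as a mixture of two reference distributions. This is an illustration only, not the study's actual framework: the word lists and all probability values are made-up toy numbers, and the real method operates over far richer vocabulary statistics.

```python
import math
from collections import Counter

# Toy reference word-frequency distributions (made-up values): one meant
# to stand in for typical human writing, one for typical AI writing.
P_HUMAN = {"the": 0.50, "delve": 0.05, "robust": 0.10, "report": 0.35}
P_AI    = {"the": 0.40, "delve": 0.30, "robust": 0.25, "report": 0.05}

def log_likelihood(counts, alpha):
    """Log-likelihood of observed word counts under the mixture
    alpha * P_AI + (1 - alpha) * P_HUMAN."""
    total = 0.0
    for word, n in counts.items():
        # Tiny floor avoids log(0) for words missing from a reference table.
        p = alpha * P_AI.get(word, 1e-9) + (1 - alpha) * P_HUMAN.get(word, 1e-9)
        total += n * math.log(p)
    return total

def estimate_ai_share(corpus_words, steps=1000):
    """Grid-search the mixture weight alpha that best explains the corpus."""
    counts = Counter(corpus_words)
    return max((i / steps for i in range(steps + 1)),
               key=lambda a: log_likelihood(counts, a))

# A toy corpus whose word usage sits roughly midway between the two profiles.
corpus = ["the"] * 45 + ["delve"] * 18 + ["robust"] * 18 + ["report"] * 19
print(f"Estimated AI-assisted share: {estimate_ai_share(corpus):.2f}")
```

Because the estimate is a single mixture weight over the whole corpus, this kind of method reports an aggregate share (as the study does for Newswire or PRWeb) rather than a verdict on any individual press release.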
This method also has limitations. The study highlighted that the framework focuses on widely available AI chatbots such as ChatGPT, meaning content produced by smaller models might escape its scrutiny. The biggest limitation is that the framework cannot reliably detect text that was generated by AI models but then heavily edited by humans or "humanised" using AI tools; such content may also escape its purview.