The US FTC announced that the inquiry will focus on steps taken by companies to evaluate the safety of their chatbots.
FTC raised concerns over the potential of children building unhealthy relationships with chatbots
Google, OpenAI, Meta, and several other artificial intelligence (AI) companies are facing an inquiry into how they handle safety and mitigate risks associated with their chatbots. The order was issued by the US Federal Trade Commission (FTC), primarily to understand the potentially negative impacts of this technology on children and teens. Seven companies that have built and released chatbots will face the inquiry, which also covers allied topics such as user engagement, monetisation, and the use and sharing of personal information obtained by chatbots.
On Thursday, the FTC announced that it is issuing orders to seven companies with AI chatbots on the market, seeking “information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens.” The seven companies are Google's parent company Alphabet, Character AI, Instagram, Meta Platforms, OpenAI, Snap, and Elon Musk-owned xAI.
The FTC's primary concern is that these chatbots, which can simulate human-like interactions and present themselves as a friend or confidant, may lead children, teenagers, and some adults to form unhealthy relationships with them, with potentially negative effects. The regulator also wants to know whether users and, in the case of minors, their parents are apprised of the risks associated with these AI products.
As part of the investigation, the FTC will seek information on how these companies monetise user engagement; how the chatbots process inputs and generate outputs; how AI characters are developed and approved; whether the bots are tested and monitored for negative impacts before and after deployment; what measures the companies take to mitigate those impacts; and more.
On the subject of parental controls, the FTC also said it intends to understand how these companies manage disclosures, advertising, and other representations to inform parents about features, potential negative impacts, and data collection and handling practices.
Notably, several of these companies have recently faced public backlash and lawsuits after users formed unhealthy attachments to their chatbots. In one case, a man who killed his mother and then took his own life was found to have confided in ChatGPT. Separately, Character.AI is facing a lawsuit over the suicide of a teenager whom the chatbot allegedly encouraged.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children. The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children,” said FTC Chairman Andrew N. Ferguson.