OpenAI has also formally joined the Coalition for Content Provenance and Authenticity (C2PA).
OpenAI and Microsoft have launched a societal resilience fund to create awareness about AI content
OpenAI unveiled its new artificial intelligence (AI) image identification and detection tool on Tuesday. Announcing the tool, the AI firm highlighted the need to authenticate AI-generated content and to raise awareness around it. The company has also formally joined the Coalition for Content Provenance and Authenticity (C2PA) committee, which has created an open standard for labelling AI-generated content. Notably, OpenAI has been applying this standard to Dall-E-generated images since February 2024, embedding AI-related information in the images' metadata.
In a blog post, OpenAI highlighted the new challenges that have emerged with the rise of AI-generated content. The company said, “As generated audiovisual content becomes more common, we believe it will be increasingly important for society as a whole to embrace new technology and standards that help people understand the tools used to create the content they find online.” Further, the ChatGPT-maker said it was taking two distinct measures to contribute to AI content authentication.
As its first step, OpenAI formally joined the C2PA committee, calling C2PA a widely used standard for digital content certification. The company also highlighted that the standard is followed by a wide range of software companies, camera manufacturers, and online platforms. Put simply, C2PA advocates adding information to the metadata of images and other file types to reveal how they were created. While an image taken by a camera will include the name and specifications of the camera, an AI-generated image will include the name of the AI model.
This authentication method is used because the metadata is difficult to alter without detection and is designed to stay with the image even when it is shared, cropped, or otherwise edited.
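The actual C2PA specification defines cryptographically signed manifests embedded in the file itself, but a simplified sketch can show the kind of provenance record such metadata carries. The field names and classes below are illustrative assumptions, not the real C2PA schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: real C2PA manifests are signed binary structures
# embedded in the file, not a simple Python object like this.
@dataclass
class ProvenanceRecord:
    claim_generator: str                              # tool that produced the asset, e.g. "DALL-E 3" or a camera model
    created: str                                      # ISO 8601 timestamp
    actions: List[str] = field(default_factory=list)  # e.g. ["created", "cropped"]

def describe_origin(record: ProvenanceRecord) -> str:
    """Return a human-readable summary of how the asset was made."""
    history = ", ".join(record.actions) or "no recorded edits"
    return f"Produced by {record.claim_generator} on {record.created} ({history})"

if __name__ == "__main__":
    ai_image = ProvenanceRecord("DALL-E 3", "2024-02-06T12:00:00Z", ["created"])
    camera_image = ProvenanceRecord("Canon EOS R5", "2024-03-01T09:30:00Z", ["captured", "cropped"])
    print(describe_origin(ai_image))     # reveals the AI model that generated the image
    print(describe_origin(camera_image)) # reveals the camera and subsequent edits
```

The point of the standard is that downstream viewers can read this record and tell at a glance whether an image came from a camera or from a generative model.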
Highlighting its second step, OpenAI said it was working on a new tool that can identify AI-generated images, referring to it simply as “OpenAI's image detection classifier”. The tool predicts the likelihood that an image was created by Dall-E. As per the post, the classifier correctly tagged about 98 percent of Dall-E-generated images when tested against real images, even when the images had been cropped or had filters applied. However, it struggles with images produced by other AI models, flagging only about 5 to 10 percent of that sample.
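Because the classifier returns a likelihood score rather than a simple yes or no, any tester integrating it has to decide where to draw the line for flagging an image. OpenAI has not published an API for the tool, so the sketch below is purely hypothetical: label_image and its threshold are assumptions showing how a probability score might be turned into a label.

```python
# Hypothetical sketch: the probability is assumed to come from whatever
# interface approved testers receive; it is not an actual OpenAI API call.
def label_image(dalle_probability: float, threshold: float = 0.5) -> str:
    """Turn the classifier's likelihood score into a human-readable label."""
    # Per the blog post, ~98% of Dall-E images score high enough to be flagged,
    # while only ~5-10% of images from other AI models are flagged at all.
    if dalle_probability >= threshold:
        return "likely generated by Dall-E"
    return "not flagged as Dall-E output"

if __name__ == "__main__":
    for score in (0.97, 0.32):
        print(f"score={score:.2f} -> {label_image(score)}")
```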
OpenAI has now opened the tool for limited public testing, inviting research labs and research-oriented journalism nonprofits to register with the AI firm for access.