SAG-AFTRA, OpenAI, Bryan Cranston, and others have released a joint statement addressing celebrity deepfakes.
Recently, OpenAI also put restrictions on how Martin Luther King, Jr. is portrayed in videos
Photo Credit: OpenAI
OpenAI has strengthened Sora's guardrails to ensure that celebrities and public figures who have not consented to appear in AI-generated videos are not featured in them. The San Francisco-based artificial intelligence (AI) giant confirmed the move in a joint statement with the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), actor Bryan Cranston, and several others. The statement came after the actor expressed concern about his likeness and voice appearing in several Sora-generated videos despite his not opting in.
Ever since the launch of the Sora app, users have been generating videos of celebrities and public figures. From Stephen Hawking jumping into a swimming pool to Einstein portrayed as a wrestler, the Internet has made full use of its collective creativity. However, some of these generations have also drawn backlash from the celebrities depicted.
Last week, OpenAI and the Estate of Martin Luther King, Jr. issued a statement announcing a collaboration to address the representation of his likeness and voice in Sora generations. OpenAI acknowledged that several users had generated “disrespectful depictions of Dr King's image,” prompting it to strengthen guardrails around historical figures.
Then, on Monday, SAG-AFTRA, OpenAI, Cranston, United Talent Agency, Creative Artists Agency, and the Association of Talent Agents jointly released a statement addressing how the AI giant handles the generation of celebrity likenesses. The action was taken after Cranston brought the issue to SAG-AFTRA.
“I was deeply concerned not just for myself, but for all performers whose work and identity can be misused in this way. I am grateful to OpenAI for its policy and for improving its guardrails, and hope that they and all of the companies involved in this work respect our personal and professional right to manage replication of our voice and likeness,” Cranston said.
The guardrails were strengthened at a time when the NO FAKES Act (short for Nurture Originals, Foster Art, and Keep Entertainment Safe Act), a proposed US federal bill, is pending in Congress. The act aims to protect artists, actors, and musicians from the unauthorised creation or use of their likeness, voice, or performance using AI.
“OpenAI is deeply committed to protecting performers from the misappropriation of their voice and likeness. We were an early supporter of the NO FAKES Act when it was introduced last year, and will always stand behind the rights of performers,” said OpenAI CEO Sam Altman.