OpenAI’s video generator Sora might allow nudity. Experts are worried

Seeing a video playback of a cool dream or idea you have might soon become a reality. OpenAI’s text-to-video AI generator, Sora, will be publicly released “definitely this year,” Mira Murati, the company’s chief technology officer, told The Wall Street Journal. The news outlet was shown examples of Sora’s generations, including a mermaid with a smartphone and a bull in a china shop.

But when asked about nudity on Sora, Murati said she wasn’t sure whether it would be allowed in video generations, adding that artists might use nude generations in creative settings. Murati said OpenAI is “working with artists and creators from different fields to figure out exactly what’s useful,” along with “what level of flexibility” Sora should have.

But despite efforts from the startups and companies working on the models to implement guardrails on the kind of content that can be generated, sensitive material like deepfake nudes and deepfake pornography is churned out regularly by major generative AI tools. And experts say OpenAI and other tech companies working on similar technology, as well as the U.S. government, should be more proactive about regulating tools like Sora before they’re widely released.

Getting on guardrails

In a February poll of U.S. voters by the AI Policy Institute (AIPI), 77% of respondents said that when it comes to AI video generators like Sora, including guardrails and safeguards to prevent misuse is more important than making the models widely available. More than two-thirds of respondents said the developer of an AI model should be held legally responsible for any illegal activity by the model, like generating fake videos for slander or revenge porn.

“That really points to how the public is taking this tech seriously,” said Daniel Colson, the founder and executive director of AIPI. “They think it’s powerful. They’ve seen the way that technology companies deploy these models and algorithms and technologies, and it leads to completely society-transforming results.”

But Colson said the public also “doesn’t trust the tech companies to do that in a responsible manner.”

“OpenAI has a challenging decision to make around this,” he said, “because for better or worse, the reality is that probably 90% of the demand for AI-generated video will be for pornography, and that creates an unpleasant dynamic where, if centralized companies creating these models aren’t providing that service, that creates an extremely strong incentive for the gray market to provide that service.”

Colson said this has already happened with open-source AI image models where there isn’t much content restriction or oversight.

Sora is currently being tested by red teamers, or “domain experts in areas like misinformation, hateful content, and bias,” OpenAI has said as it prepares to make the model available. The company also said it’s working on tools to “detect misleading content” including Sora-generated videos. OpenAI did not respond to a request for comment for this story.

After AI-generated pornographic images of Taylor Swift flooded social media in January, an AIPI poll found that 84% of respondents supported legislation to make non-consensual deepfake porn illegal. Meanwhile, 86% of respondents supported legislation requiring companies developing AI models to prevent them from being used to create deepfake porn. More than 90% of respondents said people who use AI models to create deepfake porn should be held accountable under the law, while 87% said the companies developing the models should be held legally liable. There are currently no U.S. laws or regulations addressing this. (In the European Union, the newly passed Artificial Intelligence Act, which will assess and regulate AI software for risk, has yet to be officially enacted.)

“The [U.S.] government hasn’t really taken any notable step in the last 25 years — since the dawn of the internet — to substantially regulate these entities, and that’s notable, given the degree to which more and more of American society is truly governed by these non-democratically elected entities,” Colson said.

Video generators like Sora could also be used by cybercriminals to create deepfakes of executives, actors, and politicians in compromising situations to exert influence or demand a ransom. In February, the Federal Communications Commission banned AI-generated voices in robocalls after multiple incidents, including a fake call that circulated in January featuring an AI-generated voice of President Joe Biden encouraging voters in New Hampshire to stay home instead of voting in the state’s primary election. And during his unsuccessful campaign for the Republican presidential nomination, Florida Gov. Ron DeSantis released an AI-generated video in June of former President Donald Trump hugging former White House medical adviser Dr. Anthony Fauci, without disclosing it wasn’t real.

But mandating guardrails such as user verification, content tagging, and risk rating, along with restrictions on how and where AI-generated content can be exported, could help curb cybercrime.

“We need to move from a reactive posture to a proactive posture,” Jason Hogg, an executive-in-residence at Great Hill Partners and a former global chief executive of the cybersecurity firm Aon Cyber, said of U.S. regulation on AI models. (Great Hill Partners is an investor in Quartz’s parent company G/O Media.)

Hogg said there need to be regulations and penalties in place to deal with “a tsunami of cybercrime that is heading towards our shore.”

Source: qz.com
