OpenAI’s text-to-video generator, Sora 2, is being praised for its technical sophistication but heavily criticized for how easily it can be misused. A recent analysis by NewsGuard found that when prompted to produce content based on false online claims, Sora generated realistic yet fabricated videos 80% of the time.

These AI-generated clips include everything from false election footage to staged news reports, raising alarm about how the tool could be weaponized to spread misinformation. Some of the tested claims even originated from known Russian disinformation efforts, demonstrating how actors could exploit the technology for propaganda purposes.

OpenAI launched Sora 2 as a free iOS app on September 30, 2025, and it passed a million downloads within days. While the company touts Sora as a creative tool for filmmakers, educators, and advertisers, experts warn that the line between entertainment and deception is increasingly blurred.

Safeguards under pressure

OpenAI states that Sora ships with several safety mechanisms: policies restricting harmful content, limits on the portrayal of public figures, and a visible watermark marking each video as AI-generated. According to OpenAI spokesperson Niko Felix, the company also embeds digital signatures that trace videos back to Sora, helping enforcement teams identify misuse.
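The article does not specify which signature scheme Sora uses, but OpenAI has publicly pointed to C2PA provenance metadata for its generated media. As a rough illustration of how such a signature can be checked (not OpenAI's actual enforcement tooling), the sketch below looks for an embedded C2PA manifest using the Content Authenticity Initiative's open-source c2patool CLI; the tool is real, but its presence on PATH and the clip.mp4 filename are assumptions.

```python
import json
import subprocess


def read_provenance(video_path: str):
    """Look for embedded C2PA provenance metadata in a media file.

    Relies on the Content Authenticity Initiative's open-source
    `c2patool` CLI being installed and on PATH; invoked with just a
    file path, it prints the embedded manifest store as JSON.
    """
    try:
        result = subprocess.run(
            ["c2patool", video_path],
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        raise RuntimeError("c2patool is not installed or not on PATH")
    if result.returncode != 0:
        return None  # no manifest found, or unsupported file type
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_provenance("clip.mp4")  # hypothetical local file
    if manifest is None:
        print("No C2PA provenance metadata found.")
    else:
        print(json.dumps(manifest, indent=2))
```

A check like this only proves anything while the metadata survives: re-encoding or trivial editing strips it, which is one reason watermarks and signatures alone are weak evidence of a video's origin.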

However, the NewsGuard study showed these measures can be easily bypassed. Investigators removed the watermark with publicly available editing tools within minutes, producing videos that appeared genuine to untrained viewers. Sora also enforced its own content rules inconsistently: it rejected some prompts involving named public figures but generated near-lookalikes when the names were replaced with vague descriptions.

Despite those findings, OpenAI insists that its detection systems and safety monitoring will continue improving. The company says it maintains “high-accuracy internal tools” to identify and flag suspected misuse across platforms.

Hollywood and unions demand protections for likeness and voice

Beyond misinformation concerns, Hollywood performers and unions are pushing back against the unauthorized use of their likenesses on Sora. After AI-generated videos imitating his voice and face appeared on the app, actor Bryan Cranston and SAG-AFTRA announced they would work with OpenAI to strengthen its guardrails.

“I am grateful to OpenAI for improving its protections,” Cranston said in a joint statement issued with the actor’s union. “I hope all companies respect our right to manage replication of our voice and likeness.”

OpenAI confirmed it will partner with United Talent Agency (UTA), Creative Artists Agency (CAA), and the Association of Talent Agents (ATA) to establish industry-wide consent protocols. Earlier this month, the estate of Martin Luther King Jr. asked OpenAI to block Sora videos containing disrespectful depictions of the civil rights leader, which the company promptly removed.

Policy evolution after public complaints

OpenAI CEO Sam Altman has pledged to expand Sora's opt-in system for the use of voice and image likenesses. Previously, Sora permitted such material by default unless rightsholders explicitly opted out; that policy was reversed on October 3, 2025, giving rightsholders greater control over their intellectual property.

Altman reiterated that OpenAI supports the NO FAKES Act, proposed federal legislation that would protect individuals from unauthorized digital replicas.

“We will always stand behind performers’ rights to control their own likeness,” he said in a statement to CNBC.

Experts warn of deepfake normalization online

Videos from Sora have flooded social platforms such as TikTok, Instagram Reels, and X, often spreading faster than moderation teams can respond. Daisy Soderberg-Rivkin, a former trust and safety manager at TikTok, told NPR that the wave of AI-generated clips is “as if deepfakes got a publicist and a distribution deal.”

Aaron Rodericks, head of trust and safety at Bluesky, cautioned that the public is “not ready for a collapse between reality and fakery this drastic.” With millions of users now encountering synthetic clips every day, experts fear the very concept of visual evidence could lose meaning.

The challenge ahead

Sora’s release showcases both the promise and perils of next-generation AI creativity. On one hand, it enables unprecedented accessibility for visual storytelling; on the other, it opens a path for highly realistic disinformation to flourish.

OpenAI says it is committed to refining safety measures in partnership with unions, regulators, and digital safety researchers. Yet as critics note, even the best guardrails mean little if audiences can no longer tell real from fake.

For a public already navigating a flood of manipulated media, the rise of tools like Sora underscores an urgent challenge: how to innovate responsibly in a world where seeing may no longer mean believing.
