OpenAI introduces Sora, its text-to-video AI model

OpenAI is launching a new video-generation model called Sora. The AI company says Sora “can create realistic and imaginative scenes from text instructions.” The text-to-video model allows users to create photorealistic videos up to a minute long — all based on prompts they’ve written.


Sora is capable of creating “complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” according to OpenAI’s introductory blog post. The company also notes that the model can understand how objects “exist in the physical world,” as well as “accurately interpret props and generate compelling characters that express vibrant emotions.”
The model can also generate a video based on a still image, as well as fill in missing frames on an existing video or extend it. The Sora-generated demos included in OpenAI’s blog post include an aerial scene of California during the gold rush, a video that looks as if it were shot from the inside of a Tokyo train, and others. Many have some telltale signs of AI — like a suspiciously moving floor in a video of a museum — and OpenAI says the model “may struggle with accurately simulating the physics of a complex scene,” but the results are overall pretty impressive.
A couple of years ago, it was text-to-image generators like Midjourney that were at the forefront of models’ ability to turn words into images. But recently, video has begun to improve at a remarkable pace: companies like Runway and Pika have shown impressive text-to-video models of their own, and Google’s Lumiere figures to be one of OpenAI’s primary competitors in this space, too. Similar to Sora, Lumiere gives users text-to-video tools and also lets them create videos from a still image.
Sora is currently only available to “red teamers” who are assessing the model for potential harms and risks. OpenAI is also offering access to some visual artists, designers, and filmmakers to get feedback. It notes that the existing model might not accurately simulate the physics of a complex scene and may not properly interpret certain instances of cause and effect.
Earlier this month, OpenAI announced it’s adding watermarks to its text-to-image tool DALL-E 3, but noted that they can “easily be removed.” As with its other AI products, OpenAI will have to contend with the consequences of photorealistic, AI-generated videos being mistaken for the real thing.