OpenAI introduces Sora, its text-to-video AI model

OpenAI is launching a new video-generation model called Sora. The AI company says Sora “can create realistic and imaginative scenes from text instructions.” The text-to-video model lets users create photorealistic videos up to a minute long, all based on prompts they’ve written.


Sora is capable of creating “complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” according to OpenAI’s introductory blog post. The company also notes that the model can understand how objects “exist in the physical world,” as well as “accurately interpret props and generate compelling characters that express vibrant emotions.”
The model can also generate a video from a still image, as well as fill in missing frames in an existing video or extend it. The Sora-generated demos in OpenAI’s blog post include an aerial scene of California during the gold rush, a video that looks as if it were shot from inside a Tokyo train, and others. Many have telltale signs of AI, like a suspiciously moving floor in a video of a museum, and OpenAI says the model “may struggle with accurately simulating the physics of a complex scene,” but the results are overall pretty impressive.
A couple of years ago, text-to-image generators like Midjourney were at the forefront of turning words into images. But video generation has recently begun to improve at a remarkable pace: companies like Runway and Pika have shown impressive text-to-video models of their own, and Google’s Lumiere looks set to be one of OpenAI’s primary competitors in this space. Like Sora, Lumiere gives users text-to-video tools and also lets them create videos from a still image.
Sora is currently only available to “red teamers” who are assessing the model for potential harms and risks. OpenAI is also offering access to some visual artists, designers, and filmmakers to get feedback. It notes that the existing model might not accurately simulate the physics of a complex scene and may not properly interpret certain instances of cause and effect.
Earlier this month, OpenAI announced it’s adding watermarks to its text-to-image tool DALL-E 3, though it notes they can “easily be removed.” As with its other AI products, OpenAI will have to contend with the consequences of photorealistic AI-generated videos being mistaken for the real thing.
