Text-to-video model

{{short description|Machine learning model}}

{{Use dmy dates|date=November 2024}}

[[File:OpenAI Sora in Action- Tokyo Walk.webm|thumb|A video generated by OpenAI's Sora text-to-video model, using the prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.]]

A text-to-video model is a machine learning model that uses a natural language description as input to produce a video relevant to the input text.{{cite report|url=https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf|title=Artificial Intelligence Index Report 2023|publisher=Stanford Institute for Human-Centered Artificial Intelligence|page=98|quote=Multiple high quality text-to-video models, AI systems that can generate video clips from prompted text, were released in 2022.}} Advancements during the 2020s in the generation of high-quality, text-conditioned videos have largely been driven by the development of video diffusion models.{{cite arXiv |last1=Melnik |first1=Andrew |title=Video Diffusion Models: A Survey |date=2024-05-06 |eprint =2405.03150 |last2=Ljubljanac |first2=Michal |last3=Lu |first3=Cong |last4=Yan |first4=Qi |last5=Ren |first5=Weiming |last6=Ritter |first6=Helge|class=cs.CV }}

==Models==

{{Globalize|section|date=August 2024}}

There are different models, including open-source models and models that accept Chinese-language input.{{Cite web |last=Wodecki |first=Ben |date=2023-08-11 |title=Text-to-Video Generative AI Models: The Definitive List |url=https://aibusiness.com/nlp/ai-video-generation-the-supreme-list |access-date=2024-11-18 |website=AI Business |publisher=Informa}} CogVideo is the earliest text-to-video model "of 9.4 billion parameters" to be developed, with a demo version of its open-source code first presented on GitHub in 2022.{{Citation |title=CogVideo |date=2022-10-12 |url=https://github.com/THUDM/CogVideo |publisher=THUDM |access-date=2022-10-12}} That year, Meta Platforms released a partial text-to-video model called "Make-A-Video",{{Cite web |last=Davies |first=Teli |date=2022-09-29 |title=Make-A-Video: Meta AI's New Model For Text-To-Video Generation |url=https://wandb.ai/telidavies/ml-news/reports/Make-A-Video-Meta-AI-s-New-Model-For-Text-To-Video-Generation--VmlldzoyNzE4Nzcx |access-date=2022-10-12 |website=Weights & Biases |language=en}}{{Cite web |last=Monge |first=Jim Clyde |date=2022-08-03 |title=This AI Can Create Video From Text Prompt |url=https://betterprogramming.pub/this-ai-can-create-video-from-text-prompt-6904439d7aba |access-date=2022-10-12 |website=Medium |language=en}}{{Cite web |title=Meta's Make-A-Video AI creates videos from text |url=https://www.fonearena.com/blog/375627/meta-make-a-video-ai-create-videos-from-text.html |access-date=2022-10-12 |website=www.fonearena.com}} and Google Brain (later Google DeepMind) introduced Imagen Video, a text-to-video model built on a 3D U-Net.{{Cite news |title=google: Google takes on Meta, introduces own video-generating AI |url=https://m.economictimes.com/tech/technology/google-takes-on-meta-introduces-own-video-generating-ai/articleshow/94681128.cms |access-date=2022-10-12 |website=The Economic Times |date=6 October 2022}}{{Cite web |title=Nuh-uh, Meta, we can do text-to-video AI, too, says Google |url=https://www.theregister.com/AMP/2022/10/06/google_ai_imagen_video/ |access-date=2022-10-12 |website=The Register}}{{Cite web |title=Papers with Code - See, Plan, Predict: Language-guided Cognitive Planning with Video Prediction |url=https://paperswithcode.com/paper/see-plan-predict-language-guided-cognitive |access-date=2022-10-12 |website=paperswithcode.com |language=en}}{{Cite web |title=Papers with Code - Text-driven Video Prediction |url=https://paperswithcode.com/paper/text-driven-video-prediction |access-date=2022-10-12 |website=paperswithcode.com |language=en}}

In March 2023, a research paper titled "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation" was published, presenting a novel approach to video generation.{{Cite arXiv |eprint=2303.08320 |class=cs.CV |first1=Zhengxiong |last1=Luo |first2=Dayou |last2=Chen |title=VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation |date=2023 |last3=Zhang |first3=Yingya |last4=Huang |first4=Yan |last5=Wang |first5=Liang |last6=Shen |first6=Yujun |last7=Zhao |first7=Deli |last8=Zhou |first8=Jingren |last9=Tan |first9=Tieniu}} The VideoFusion model decomposes the diffusion process into two components: base noise and residual noise, which are shared across frames to ensure temporal coherence. By utilizing a pre-trained image diffusion model as a base generator, the model efficiently generated high-quality and coherent videos. Fine-tuning the pre-trained model on video data addressed the domain gap between image and video data, enhancing the model's ability to produce realistic and consistent video sequences.{{Cite arXiv |title=VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation |eprint=2303.08320 |last1=Luo |first1=Zhengxiong |last2=Chen |first2=Dayou |last3=Zhang |first3=Yingya |last4=Huang |first4=Yan |last5=Wang |first5=Liang |last6=Shen |first6=Yujun |last7=Zhao |first7=Deli |last8=Zhou |first8=Jingren |last9=Tan |first9=Tieniu |date=2023 |class=cs.CV }} In the same month, Adobe introduced Firefly AI as part of its features.{{Cite web |date=2024-10-10 |title=Adobe launches Firefly Video model and enhances image, vector and design models. Adobe Newsroom |url=https://news.adobe.com/news/2024/10/101424-adobe-launches-firefly-video-model |access-date=2024-11-18 |publisher=Adobe Inc.}}
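The base/residual noise decomposition described above can be illustrated with a short numerical sketch. This is a simplified illustration, not the VideoFusion implementation; the mixing weight `lambda_base` and the array sizes are made-up values for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

num_frames, height, width, channels = 8, 32, 32, 3
lambda_base = 0.8  # illustrative mixing weight, not the paper's value

# Base noise: a single sample shared by every frame (temporal coherence).
base_noise = rng.standard_normal((1, height, width, channels))

# Residual noise: an independent sample per frame (per-frame detail).
residual_noise = rng.standard_normal((num_frames, height, width, channels))

# Mix the shared and per-frame components so the combined noise
# still has unit variance per element.
noise = np.sqrt(lambda_base) * base_noise + np.sqrt(1 - lambda_base) * residual_noise

# Adjacent frames now share most of their noise, which is what
# keeps the denoised frames temporally coherent.
assert noise.shape == (num_frames, height, width, channels)
```

Because every frame reuses the same base component, the noise of any two frames is strongly correlated, while the residual component still allows frame-to-frame variation.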

In January 2024, Google announced development of a text-to-video model named Lumiere which is anticipated to integrate advanced video editing capabilities.{{Cite web |last=Yirka |first=Bob |date=2024-01-26 |title=Google announces the development of Lumiere, an AI-based next-generation text-to-video generator. |url=https://techxplore.com/news/2024-01-google-lumiere-ai-based-generation.html |access-date=2024-11-18 |website=Tech Xplore}} Matthias Niessner and Lourdes Agapito at AI company Synthesia work on developing 3D neural rendering techniques that can synthesise realistic video by using 2D and 3D neural representations of shape, appearances, and motion for controllable video synthesis of avatars.{{Cite web |title=Text to Speech for Videos |url=https://www.synthesia.io/text-to-speech |access-date=2023-10-17 |website=Synthesia.io}} In June 2024, Luma Labs launched its Dream Machine video tool.{{Cite web |last=Nuñez |first=Michael |date=2024-06-12 |title=Luma AI debuts 'Dream Machine' for realistic video generation, heating up AI media race |url=https://venturebeat.com/ai/luma-ai-debuts-dream-machine-for-realistic-video-generation-heating-up-ai-media-race/ |access-date=2024-11-18 |website=VentureBeat |language=en-US}}{{Cite web |last=Fink |first=Charlie |title=Apple Debuts Intelligence, Mistral Raises $600 Million, New AI Text-To-Video |url=https://www.forbes.com/sites/charliefink/2024/06/13/apple-debuts-intelligence-mistral-raises-600-million-new-ai-text-to-video/ |access-date=2024-11-18 |website=Forbes |language=en}} That same month,{{Cite web |last=Franzen |first=Carl |date=2024-06-12 |title=What you need to know about Kling, the AI video generator rival to Sora that's wowing creators |url=https://venturebeat.com/ai/what-you-need-to-know-about-kling-the-ai-video-generator-rival-to-sora-thats-wowing-creators/ |access-date=2024-11-18 |website=VentureBeat |language=en-US}} Kuaishou extended its Kling AI text-to-video model to international users. 
In July 2024, TikTok owner ByteDance released Jimeng AI in China, through its subsidiary, Faceu Technology.{{Cite web |date=2024-08-06 |title=ByteDance joins OpenAI's Sora rivals with AI video app launch |url=https://www.reuters.com/technology/artificial-intelligence/bytedance-joins-openais-sora-rivals-with-ai-video-app-launch-2024-08-06/ |access-date=2024-11-18 |publisher=Reuters}} By September 2024, the Chinese AI company MiniMax debuted its video-01 model, joining other established AI model companies like Zhipu AI, Baichuan, and Moonshot AI, which contribute to China’s involvement in AI technology.{{Cite web |date=2024-09-02 |title=Chinese ai "tiger" minimax launches text-to-video-generating model to rival OpenAI's sora |url=https://finance.yahoo.com/news/chinese-ai-tiger-minimax-launches-093000322.html |access-date=2024-11-18 |website=Yahoo! Finance}}

Alternative approaches to text-to-video models include{{Citation |title=Text2Video-Zero |date=2023-08-12 |url=https://github.com/Picsart-AI-Research/Text2Video-Zero |access-date=2023-08-12 |publisher=Picsart AI Research (PAIR)}} Google's Phenaki, Hour One, Colossyan, Runway's Gen-3 Alpha,{{Cite web |last=Kemper |first=Jonathan |date=2024-07-01 |title=Runway's Sora competitor Gen-3 Alpha now available |url=https://the-decoder.com/runways-sora-competitor-gen-3-alpha-now-available/ |access-date=2024-11-18 |website=THE DECODER |language=en-US}}{{Cite news |date=2023-03-20 |title=Generative AI's Next Frontier Is Video |url=https://www.bloomberg.com/news/articles/2023-03-20/generative-ai-s-next-frontier-is-video |access-date=2024-11-18 |work=Bloomberg.com |language=en}} and OpenAI's Sora.{{Cite web |date=2024-02-15 |title=OpenAI teases 'Sora,' its new text-to-video AI model |url=https://www.nbcnews.com/tech/tech-news/openai-sora-video-artificial-intelligence-unveiled-rcna139065 |access-date=2024-11-18 |website=NBC News |language=en}}{{Cite web |last=Kelly |first=Chris |date=2024-06-25 |title=Toys R Us creates first brand film to use OpenAI's text-to-video tool |url=https://www.marketingdive.com/news/toys-r-us-openai-sora-gen-ai-first-text-video/719797/ |access-date=2024-11-18 |website=Marketing Dive |publisher=Informa |language=en-US}} Several additional text-to-video models, such as Plug-and-Play, Text2LIVE, and TuneAVideo, have emerged.{{Cite book |last1=Jin |first1=Jiayao |last2=Wu |first2=Jianhang |last3=Xu |first3=Zhoucheng |last4=Zhang |first4=Hang |last5=Wang |first5=Yaxin |last6=Yang |first6=Jielong |chapter=Text to Video: Enhancing Video Generation Using Diffusion Models and Reconstruction Network |date=2023-08-04 |title=2023 2nd International Conference on Computing, Communication, Perception and Quantum Technology (CCPQT) |chapter-url=https://ieeexplore.ieee.org/document/10336607 |publisher=IEEE |pages=108–114 |doi=10.1109/CCPQT60491.2023.00024
|isbn=979-8-3503-4269-7}} FLUX.1 developer Black Forest Labs has announced its text-to-video model, SOTA.{{Cite web |date=2024-08-01 |title=Announcing Black Forest Labs |url=https://blackforestlabs.ai/announcing-black-forest-labs/ |access-date=2024-11-18 |website=Black Forest Labs |language=en-US}} Google was preparing to launch a video generation tool named Veo for YouTube Shorts in 2025.{{Cite web |last=Forlini |first=Emily Dreibelbis |date=2024-09-18 |title=Google's veo text-to-video AI generator is coming to YouTube shorts |url=https://www.pcmag.com/news/googles-veo-text-to-video-ai-generator-is-coming-to-youtube-shorts |access-date=2024-11-18 |website=PC Magazine}} In May 2025, Google launched Veo 3, the third iteration of the model, which was noted for its audio generation capabilities, previously a limitation of text-to-video models.{{Cite web |first1=Jennifer |last1=Elias |first2=Samantha |last2=Subin |date=2025-05-20 |title=Google launches Veo 3, an AI video generator that incorporates audio |url=https://www.cnbc.com/2025/05/20/google-ai-video-generator-audio-veo-3.html |access-date=2025-05-22 |website=CNBC |language=en}}

==Architecture and training==

Several architectures have been used to create text-to-video models. Similar to text-to-image models, these models can be trained using recurrent neural networks (RNNs) such as long short-term memory (LSTM) networks, which have been used for pixel-transformation models and stochastic video generation models, aiding consistency and realism respectively.{{Cite book |last1=Bhagwatkar |first1=Rishika |last2=Bachu |first2=Saketh |last3=Fitter |first3=Khurshed |last4=Kulkarni |first4=Akshay |last5=Chiddarwar |first5=Shital |chapter=A Review of Video Generation Approaches |date=2020-12-17 |title=2020 International Conference on Power, Instrumentation, Control and Computing (PICC) |chapter-url=https://ieeexplore.ieee.org/document/9362485 |publisher=IEEE |pages=1–5 |doi=10.1109/PICC51425.2020.9362485 |isbn=978-1-7281-7590-4}} Transformer models are an alternative to these. Generative adversarial networks (GANs), variational autoencoders (VAEs), which can aid in the prediction of human motion,{{Cite book |last1=Kim |first1=Taehoon |last2=Kang |first2=ChanHee |last3=Park |first3=JaeHyuk |last4=Jeong |first4=Daun |last5=Yang |first5=ChangHee |last6=Kang |first6=Suk-Ju |last7=Kong |first7=Kyeongbo |chapter=Human Motion Aware Text-to-Video Generation with Explicit Camera Control |date=2024-01-03 |title=2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) |chapter-url=https://ieeexplore.ieee.org/document/10484108 |publisher=IEEE |pages=5069–5078 |doi=10.1109/WACV57701.2024.00500 |isbn=979-8-3503-1892-0}} and diffusion models have also been used to develop the image generation aspects of the model.{{Cite book |last=Singh |first=Aditi |chapter=A Survey of AI Text-to-Image and AI Text-to-Video Generators |date=2023-05-09 |title=2023 4th International Conference on Artificial Intelligence, Robotics and Control (AIRC) |chapter-url=https://ieeexplore.ieee.org/document/10303174 |publisher=IEEE |pages=32–36 |doi=10.1109/AIRC57904.2023.10303174 |isbn=979-8-3503-4824-8|arxiv=2311.06329 }}
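As a concrete illustration of the diffusion component, the sketch below implements the standard closed-form forward (noising) process that a diffusion model is trained to invert. The schedule values are illustrative and do not correspond to any particular video model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear noise schedule over T diffusion steps (illustrative values).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

def add_noise(x0, t):
    """Forward diffusion: corrupt a clean frame x0 to step t in closed form."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1 - alphas_cumprod[t]) * noise

frame = rng.standard_normal((32, 32, 3))  # stands in for a clean video frame
noisy = add_noise(frame, t=T - 1)         # nearly pure noise at the final step

# The trained network learns the reverse direction: predicting the added
# noise so that sampling can walk from pure noise back to a frame.
assert noisy.shape == frame.shape
```

The cumulative product `alphas_cumprod` shrinks toward zero as `t` grows, so the clean signal's contribution fades and the Gaussian noise dominates, which is exactly the behavior the reverse (denoising) pass undoes.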

Text-video datasets used to train models include, but are not limited to, WebVid-10M, HDVILA-100M, CCV, ActivityNet, and Panda-70M.{{cite arXiv |last1=Miao |first1=Yibo |title=T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models |date=2024-09-08 |eprint=2407.05965 |last2=Zhu |first2=Yifan |last3=Dong |first3=Yinpeng |last4=Yu |first4=Lijia |last5=Zhu |first5=Jun |last6=Gao |first6=Xiao-Shan|class=cs.CV }}{{Cite book |last1=Zhang |first1=Ji |last2=Mei |first2=Kuizhi |last3=Wang |first3=Xiao |last4=Zheng |first4=Yu |last5=Fan |first5=Jianping |chapter=From Text to Video: Exploiting Mid-Level Semantics for Large-Scale Video Classification |date=August 2018 |title=2018 24th International Conference on Pattern Recognition (ICPR) |chapter-url=https://ieeexplore.ieee.org/document/8545513 |publisher=IEEE |pages=1695–1700 |doi=10.1109/ICPR.2018.8545513 |isbn=978-1-5386-3788-3}} These datasets contain millions of original videos of interest, generated videos, captioned videos, and textual information that help train models for accuracy. Text-prompt datasets used to train models include, but are not limited to, PromptSource, DiffusionDB, and VidProM. These datasets provide the range of text inputs needed to teach models how to interpret a variety of textual prompts.

The video generation process involves synchronizing the text inputs with video frames, ensuring alignment and consistency throughout the sequence. Output quality tends to decline as video length increases, owing to resource limitations.
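Keeping every frame aligned with the text input is commonly achieved by letting each frame attend to one shared prompt encoding. The toy example below is a minimal single-head cross-attention sketch with made-up dimensions, not the mechanism of any specific model.

```python
import numpy as np

def cross_attend(frame_queries, text_keys, text_values):
    """Single-head cross-attention: each frame patch attends to the prompt tokens."""
    scores = frame_queries @ text_keys.T / np.sqrt(text_keys.shape[1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ text_values

rng = np.random.default_rng(1)
d = 16                                      # embedding width (made up)
text_tokens = rng.standard_normal((5, d))   # encoded prompt, 5 tokens
frames = rng.standard_normal((8, 10, d))    # 8 frames x 10 patches each

# Every frame attends to the SAME prompt encoding, which keeps the
# generated frames aligned with the text throughout the sequence.
conditioned = np.stack([cross_attend(f, text_tokens, text_tokens) for f in frames])
assert conditioned.shape == frames.shape
```

Because the prompt encoding is fixed across the clip while the per-frame features vary, the conditioning signal stays consistent from frame to frame even as content changes.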

==Limitations==

Despite rapid improvements in the performance of text-to-video models, a primary limitation is that they are very computationally heavy, which limits their capacity to provide high-quality and lengthy outputs.{{Cite book |last1=Bhagwatkar |first1=Rishika |last2=Bachu |first2=Saketh |last3=Fitter |first3=Khurshed |last4=Kulkarni |first4=Akshay |last5=Chiddarwar |first5=Shital |chapter=A Review of Video Generation Approaches |date=2020-12-17 |title=2020 International Conference on Power, Instrumentation, Control and Computing (PICC) |chapter-url=https://ieeexplore.ieee.org/document/9362485 |publisher=IEEE |pages=1–5 |doi=10.1109/PICC51425.2020.9362485 |isbn=978-1-7281-7590-4}}{{Cite book |last=Singh |first=Aditi |chapter=A Survey of AI Text-to-Image and AI Text-to-Video Generators |date=2023-05-09 |title=2023 4th International Conference on Artificial Intelligence, Robotics and Control (AIRC) |chapter-url=https://ieeexplore.ieee.org/document/10303174 |publisher=IEEE |pages=32–36 |doi=10.1109/AIRC57904.2023.10303174 |isbn=979-8-3503-4824-8|arxiv=2311.06329 }} Additionally, these models require large amounts of specific training data to generate high-quality and coherent outputs, which raises issues of accessibility.

Moreover, models may misinterpret textual prompts, resulting in video outputs that deviate from the intended meaning. This can occur due to limitations in capturing semantic context embedded in text, which affects the model’s ability to align generated video with the user’s intended message. Various models, including Make-A-Video, Imagen Video, Phenaki, CogVideo, GODIVA, and NUWA, are currently being tested and refined to enhance their alignment capabilities and overall performance in text-to-video generation.

Another issue with the outputs is that text or fine details in AI-generated videos often appear garbled, a problem that stable diffusion models also struggle with. Examples include distorted hands and unreadable text.

==Ethics==

{{One source section|date=December 2024}}

The deployment of Text-to-Video models raises ethical considerations related to content generation. These models have the potential to create inappropriate or unauthorized content, including explicit material, graphic violence, misinformation, and likenesses of real individuals without consent.{{cite arXiv |last1=Miao |first1=Yibo |title=T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models |date=2024-09-08 |eprint=2407.05965 |last2=Zhu |first2=Yifan |last3=Dong |first3=Yinpeng |last4=Yu |first4=Lijia |last5=Zhu |first5=Jun |last6=Gao |first6=Xiao-Shan|class=cs.CV }} Ensuring that AI-generated content complies with established standards for safe and ethical usage is essential, as content generated by these models may not always be easily identified as harmful or misleading. The ability of AI to recognize and filter out NSFW or copyrighted content remains an ongoing challenge, with implications for both creators and audiences.

==Impacts and applications==

{{One source section|date=December 2024}}

Text-to-video models offer a broad range of applications that may benefit various fields, from educational and promotional to creative industries. These models can streamline content creation for training videos, movie previews, gaming assets, and visualizations, making it easier to generate content.{{Cite book |last=Singh |first=Aditi |chapter=A Survey of AI Text-to-Image and AI Text-to-Video Generators |date=2023-05-09 |title=2023 4th International Conference on Artificial Intelligence, Robotics and Control (AIRC) |chapter-url=https://ieeexplore.ieee.org/document/10303174 |publisher=IEEE |pages=32–36 |doi=10.1109/AIRC57904.2023.10303174 |isbn=979-8-3503-4824-8|arxiv=2311.06329 }}

==Comparison of existing models==

{| class="wikitable sortable"
|+
!Model/Product
!Company
!Year released
!Status
!class="unsortable" | Key features
!class="unsortable" | Capabilities
!class="unsortable" | Pricing
!class="unsortable" | Video length
!class="unsortable" | Supported languages
|-
|Synthesia
|Synthesia
|2019
|Released
|AI avatars, multilingual support for 60+ languages, customization options{{Cite web |title=Top AI Video Generation Models of 2024 |url=https://deepgram.com/learn/top-ai-video-generation-models-of-2024 |access-date=2024-08-30 |website=Deepgram |language=en}}
|Specialized in realistic AI avatars for corporate training and marketing
|Subscription-based, starting around $30/month
|Varies based on subscription
|60+
|-
|InVideo AI
|InVideo
|2021
|Released
|AI-powered video creation, large stock library, AI talking avatars
|Tailored for social media content with platform-specific templates
|Free plan available, paid plans starting at $16/month
|Varies depending on content type
|Multiple (not specified)
|-
|Fliki
|Fliki AI
|2022
|Released
|Text-to-video with AI avatars and voices, extensive language and voice support
|Supports 65+ AI avatars and 2,000+ voices in 70 languages
|Free plan available, paid plans starting at $30/month
|Varies based on subscription
|70+
|-
|Runway Gen-2
|Runway AI
|2023
|Released
|Multimodal video generation from text, images, or videos{{Cite web |title=Runway Research {{!}} Gen-2: Generate novel videos with text, images or video clips |url=https://runwayml.com/research/gen-2 |access-date=2024-08-30 |website=runwayml.com |language=en}}
|High-quality visuals, various modes like stylization and storyboard
|Free trial, paid plans (details not specified)
|Up to 16 seconds
|Multiple (not specified)
|-
|Pika Labs
|Pika Labs
|2024
|Beta
|Dynamic video generation, camera and motion customization{{Cite web |last=Sharma |first=Shubham |date=2023-12-26 |title=Pika Labs' text-to-video AI platform opens to all: Here's how to use it |url=https://venturebeat.com/ai/pika-labs-text-to-video-ai-platform-opens-to-all-heres-how-to-use-it/ |access-date=2024-08-30 |website=VentureBeat |language=en-US}}
|User-friendly, focused on natural dynamic generation
|Currently free during beta
|Flexible, supports longer videos with frame continuation
|Multiple (not specified)
|-
|Runway Gen-3 Alpha
|Runway AI
|2024
|Alpha
|Enhanced visual fidelity, photorealistic humans, fine-grained temporal control{{Cite web |title=Runway Research {{!}} Introducing Gen-3 Alpha: A New Frontier for Video Generation |url=https://runwayml.com/research/introducing-gen-3-alpha |access-date=2024-08-30 |website=runwayml.com |language=en}}
|Ultra-realistic video generation with precise key-framing and industry-level customization
|Free trial available, custom pricing for enterprises
|Up to 10 seconds per clip, extendable
|Multiple (not specified)
|-
|OpenAI Sora
|OpenAI
|2024
|Alpha
|Deep language understanding, high-quality cinematic visuals, multi-shot videos{{Cite web |title=Sora {{!}} OpenAI |url=https://openai.com/index/sora/ |access-date=2024-08-30 |website=openai.com}}
|Capable of creating detailed, dynamic, and emotionally expressive videos; still under development with safety measures
|Pricing not yet disclosed
|Expected to generate longer videos; duration specifics TBD
|Multiple (not specified)
|}

==See also==

==References==
{{Reflist}}