Generative AI for Video from Runway! | Turtles AI

Generative AI for Video from Runway!
DukeRem, 8 February 2023
Runway, the AI startup that helped create Stable Diffusion, last year's highly acclaimed text-to-image model, has unveiled a new AI tool that can transform existing videos into entirely new ones, applying any style specified by a text prompt or a reference image. In a demo video posted on its official website, Runway shows how Gen-1, its new software, can convert footage of people on a street into clay-animation figures, or turn a stack of books on a table into a night-time cityscape. Runway anticipates that Gen-1 will have the same impact on the video industry that Stable Diffusion had on image generation. Runway CEO Cristóbal Valenzuela believes 2023 will be the year of video, following the huge surge in image-generation models.

Founded in 2018, Runway specializes in AI-powered video-editing software. Its tools are used by a wide range of content creators, from TikTokers and YouTubers to mainstream movie and TV production companies. One notable example of Runway's software in action is the graphics editing of "The Late Show with Stephen Colbert"; another is the visual effects of the movie "Everything Everywhere All at Once", where its technology helped bring specific scenes to life.

In 2021, Runway collaborated with researchers at the University of Munich to create the first version of Stable Diffusion. The model was later backed by Stability AI, a UK-based startup, which covered the cost of training it on a larger dataset. In 2022, Stability AI helped bring Stable Diffusion to the masses, turning it into a global phenomenon.