Runway, an AI startup, has announced its new AI video model, Gen-4, which aims to improve the consistency of scenes and characters across multiple shots. The company claims that this advancement in AI-generated video technology will provide users with enhanced “continuity and control” in storytelling, addressing a common challenge in the field.
The model is currently being rolled out to subscribers and enterprise users. Gen-4 can generate characters and objects from a single reference image: users specify the desired composition, and the model produces consistent visuals of those subjects from various angles.
To showcase its capabilities, Runway released a video demonstrating a woman who maintains her visual characteristics throughout different scenes and lighting scenarios.
This announcement comes less than a year after the introduction of Runway’s Gen-3 Alpha video generator. While that model enabled longer video generations, it drew criticism for reportedly being trained on a vast array of content, including numerous scraped YouTube videos and pirated films.