Runway unveiled its latest artificial intelligence (AI) video generation model, Aleph, on Friday. The video-to-video model can edit multiple elements within existing videos, supporting a wide range of changes that include adding, removing, and transforming objects, generating new camera angles, and altering environments, seasons, and styles. The New York City-based AI firm said Aleph will soon be made available to its enterprise and creative clientele, along with general users of the platform.
Runway’s Aleph AI Model Can Edit Videos
AI video generation has advanced significantly, evolving from simple animated scenes to fully realized videos complete with narratives and audio. Runway continues to lead in this area, with its tools already employed by major production companies such as Netflix, Amazon, and Walt Disney.
The newly introduced Aleph model builds on this momentum, enabling seamless video edits through a sophisticated video-to-video framework. In a post on X (formerly Twitter), the company described Aleph as a cutting-edge in-context video model capable of transforming input videos using simple descriptive prompts.
Introducing Runway Aleph, a new way to edit, transform and generate video.
Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing… pic.twitter.com/zGdWYedMqM
— Runway (@runwayml) July 25, 2025
In a blog post, Runway detailed some of the features Aleph will launch with. The model will initially be offered to enterprise and creative customers, followed by a wider rollout to all platform users in the following weeks. However, the announcement did not specify whether users on the free tier will get access to the model or whether it will be limited to paid subscribers.
Aleph can generate new perspectives on existing scenes. Users can request different camera shots, such as a reverse low angle, an extreme close-up from the right side, or a wide shot. The system can also produce subsequent footage based on the original clip and user prompts.
One of Aleph's standout features is its ability to alter settings and conditions such as seasons, locations, and times of day. For example, a video of a sunny park scene can be transformed to show rain, a sandstorm, or a wintry night while keeping all other elements intact.
Moreover, Aleph can introduce new objects, remove elements such as reflections or buildings, modify materials, alter character appearances, and change the colors of objects. It can even replicate a specific motion from one video, such as a flying drone sequence, and apply it to a different environment.
So far, Runway has not disclosed technical specifications for Aleph, including the maximum length for input videos, compatible aspect ratios, or any potential charges for the application programming interface (API). Further details are expected to be released closer to the official launch of the model.
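For illustration only: since Runway has not published developer documentation for Aleph, the endpoint path, request fields, model name, and response format in the Python sketch below are hypothetical assumptions, not Runway's actual API. The sketch only shows, in general terms, what a prompt-driven video-to-video edit request and polling loop could look like once such an API exists.

import time
import requests

# Hypothetical sketch only: Runway has not published an Aleph API.
# The base URL, endpoint path, field names, and response shape below
# are placeholder assumptions for illustration.
API_BASE = "https://api.example-video-host.dev/v1"  # placeholder host
API_KEY = "YOUR_API_KEY"                            # placeholder credential

def request_video_edit(video_url: str, prompt: str) -> str:
    """Submit a prompt-driven edit job and return a (hypothetical) task ID."""
    response = requests.post(
        f"{API_BASE}/video_to_video",               # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "aleph",                       # hypothetical model name
            "video_url": video_url,                 # source clip to edit
            "prompt": prompt,                       # descriptive edit instruction
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["task_id"]

def wait_for_result(task_id: str, poll_seconds: int = 10) -> str:
    """Poll the (hypothetical) task endpoint until the edited video is ready."""
    while True:
        status = requests.get(
            f"{API_BASE}/tasks/{task_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        ).json()
        if status["status"] == "succeeded":
            return status["output_url"]             # URL of the edited clip
        if status["status"] == "failed":
            raise RuntimeError(status.get("error", "edit failed"))
        time.sleep(poll_seconds)

if __name__ == "__main__":
    task = request_video_edit(
        video_url="https://example.com/park.mp4",
        prompt="Change the sunny afternoon to a snowy night, keep everything else the same",
    )
    print("Edited video:", wait_for_result(task))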