Runway unveiled its latest artificial intelligence (AI) video generation model on Friday, one aimed at enhancing the editing of existing videos. Named Aleph, the video-to-video AI model lets users add, remove, and transform elements within input videos. Users can modify environmental factors such as the season and time of day, as well as alter camera angles and styles. The New York City-based company is set to make the model available to its enterprise and creative customers shortly, followed by a broader rollout to its platform users.
Runway’s Aleph AI Model Can Edit Videos
The evolution of AI video generation technology has progressed significantly, moving from simple animated snippets to comprehensive video content complete with narratives and audio. Runway has played a pivotal role in this advancement, providing tools that are now instrumental to major production companies like Netflix, Amazon, and Walt Disney.
The introduction of Aleph marks a significant step forward, enabling extensive manipulation and generation of elements in input videos. In a post on X (formerly Twitter), the company touted Aleph as a cutting-edge in-context video model capable of transforming videos using straightforward descriptive prompts.
Introducing Runway Aleph, a new way to edit, transform and generate video.
Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing… pic.twitter.com/zGdWYedMqM
— Runway (@runwayml) July 25, 2025
In a recent blog post, Runway highlighted some of the features Aleph will offer at release. The AI model will initially be accessible to enterprise and creative clients, with a wider rollout expected in the following weeks. Specifics regarding access for free-tier users remain unclear, however, raising the question of whether the model will be exclusive to paid subscribers.
Among its capabilities, Aleph allows users to generate new perspectives of the same scene from an input video. This includes the option for different shot types, such as low angles, extreme close-ups, or wide shots. Additionally, it can use the original video as a reference to produce subsequent frames guided by user prompts.
Aleph’s ability to transform various factors within the original video is particularly noteworthy. For example, a video of a sunny park can be edited to simulate rain, snow, or even nighttime, all while retaining the other visual elements.
Furthermore, the AI model can insert objects, remove visual distractions like reflections or buildings, completely change objects and materials, and modify character appearances and colors. Notably, it can also replicate the motion of an element in one scene, such as a drone's flight pattern, and apply it to a different context.
As of now, Runway has not disclosed specific technical details about Aleph, such as the maximum length of supported input videos, aspect ratio compatibility, or API pricing. Further information is likely to be revealed when the model officially launches.