Google has introduced a new generative AI tool aimed at helping developers turn initial UI concepts into functional app designs quickly. The “Stitch” experiment, powered by Gemini 2.5 Pro, is now accessible through Google Labs. It converts text prompts and reference images into detailed UI designs and frontend code within minutes, sparing developers the task of manually crafting design components and then programming them separately.
The tool creates a visual interface from chosen themes and natural-language input, currently available in English only. Developers can specify what they want in the final product, including color schemes and user-experience features. Users can also upload visual references to guide Stitch’s output, such as wireframes, rough sketches, or examples of existing UI designs.
Stitch can produce “multiple variations” of a user interface, making it easy to experiment with different aesthetics and layouts. It generates not only UI assets but also working frontend code, which can be incorporated directly into applications or exported to Figma for further refinement, enabling collaboration with designers within existing workflows.
The Figma export plays to that platform’s well-established role in product design, where it remains better suited to fine-tuning visual details. At the same time, Stitch’s automatic code generation may compete with Figma’s recently announced Make UI-building app. Google appears to be positioning Stitch as a way to retain designers who have been using Gemini’s Code Assist tool and to offer them new capabilities.