On Tuesday, Meta unveiled a standalone application for its artificial intelligence chatbot, known as Meta AI. The app is available for download on Android through the Play Store and on iOS via the App Store. The Menlo Park-based company has integrated social features, allowing users to view other users’ posts and images in a Discover feed. A voice mode has also been introduced, enabling users to hold two-way voice conversations with the chatbot, although this feature is currently limited to select countries.
In an official newsroom announcement, Meta discussed the app’s launch and outlined its various functionalities. CEO Mark Zuckerberg had previously indicated that the company was focused on developing a standalone AI solution. The app is powered by the Llama 4 AI model and competes with other AI chatbots such as ChatGPT, Gemini, Grok, and Claude.
The design of the Meta AI app emphasizes a social experience. Its Discover feed allows users to share prompts and responses generated by the chatbot, along with AI-created images. Users can engage with the content by liking, commenting, and remixing posts from others. Importantly, content does not appear in the feed unless shared by the user.
To use the app, users need a Meta account, which can be linked to either Instagram or Facebook. When a user signs in, the app accesses their account information, which may include profile details, content interactions, and previous conversations with Meta AI. This access allows the chatbot to deliver a tailored user experience, a capability currently offered in the United States and Canada.
In addition to integrating information from social media accounts, Meta AI has introduced a memory feature that retains contextual details from user interactions, enhancing its personalized responses.
The app also supports hands-free usage, allowing users to have verbal conversations with the chatbot in a fluid, natural tone. Meta said it adjusted the Llama 4 model to enable this functionality. Users can even ask the AI to generate or edit images through voice commands.
The company is also testing a full-duplex speech technology, which allows the AI to generate speech directly rather than converting text into speech. According to Meta, this mode provides a more human-like communication experience. However, this feature currently lacks access to the internet and real-time information, and it is available only in Australia, Canada, New Zealand, and the United States.
The Meta AI app will also merge with the Meta View companion app, designed for Ray-Ban Meta glasses. This integration will require users to connect their smart glasses to the app to access specific functionalities, such as managing their gallery, editing images, or reviewing conversation history.
In select regions, users will have the option to switch from the AI app to the glasses without losing the context of their conversation. Currently, conversations can be started on the Ray-Ban Meta glasses and continued in the Meta AI app or on the website. Notably, the Meta AI app is free, with no current plans for paid subscriptions.