On Monday, Anthropic introduced a much-anticipated memory feature for its Claude chatbot. The company showcased the new capability in a YouTube demonstration in which a user asked Claude about the conversations they'd had before going on vacation. Claude retrieved and summarized the past chats, then asked whether the user wanted to pick the project back up.
“Never lose track of your work again,” Anthropic stated. “Claude now remembers your past conversations, allowing you to seamlessly continue projects, refer to earlier discussions, and expand on your ideas without having to begin anew each time.”
The memory feature works across web, desktop, and mobile, and it keeps conversations from different projects and workspaces separate. It began rolling out the same day to subscribers on Claude's Max, Team, and Enterprise tiers; users can turn it on by opening "Settings" under their profile and toggling the "Search and reference chats" option. Anthropic says it plans to extend access to additional subscription plans soon.
Notably, this is not a persistent memory feature like the one in OpenAI's ChatGPT. According to Anthropic spokesperson Ryan Donegan, Claude only retrieves past conversations when the user asks it to, and it does not build a profile of the user.
The rivalry between Anthropic and OpenAI has intensified, with both companies rapidly shipping competing features and upgrades, including voice modes, larger context windows, and new subscription tiers, while pursuing substantial funding. OpenAI released GPT-5 just last week, and Anthropic is working to secure investments that could push its valuation to nearly $170 billion.
Memory features are another way leading AI companies are trying to keep users engaged with their chatbots. They have also stirred debate online, with users expressing both admiration and unease about how ChatGPT references previous interactions. Some people have controversially used the chatbot as a stand-in for therapy, while others have reported mental health harms they attribute to their interactions with it, fueling discussion of "ChatGPT psychosis."