San Francisco-based AI startup Anthropic has announced a new set of productivity-focused features for its Claude AI platform, bringing memory capabilities to teams on its Team plan ($30 per person per month for standard seats, or $150 for premium) and Enterprise plan (custom pricing).

Starting today, users can enable Claude to remember project details, team preferences, and work processes, with the aim of reducing repetitive context-setting and streamlining complex collaboration across chats.

They can also download their memories on a project-by-project basis and move them to other chatbots such as OpenAI's ChatGPT and Google's Gemini, according to the company's documentation.

While memory imports are currently experimental, the feature represents a step toward interoperability among AI systems. However, Claude prioritizes work-related context, so imported personal details that are unrelated to professional use may not be retained.
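Anthropic hasn't detailed the export format here, but as a rough illustration, the sketch below shows how a team might fold a downloaded memory file into the opening prompt of another chatbot. The file name, its plain-text contents, and the build_primer helper are all assumptions made for this example, not Anthropic's tooling.

```python
from pathlib import Path

# Write a stand-in export so the sketch runs end to end; in practice this
# file would come from Claude's per-project memory download (format assumed).
Path("claude_memory_export.txt").write_text(
    "Team prefers weekly status docs; launch target is Q3.", encoding="utf-8"
)

def build_primer(memory_path: str, question: str) -> str:
    """Prepend exported project memory to a new conversation's opening prompt."""
    memory = Path(memory_path).read_text(encoding="utf-8").strip()
    return (
        "Context carried over from another assistant's project memory:\n"
        f"{memory}\n\n"
        f"With that context in mind: {question}"
    )

print(build_primer("claude_memory_export.txt", "What were we working on last week?"))
```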

To get started, users can enable memory in settings and choose to generate memory from past conversations. Claude can then respond to queries such as "what were we working on last week?" using saved memory and chat history.

Memory designed for workplaces

This new memory feature is designed specifically for professional settings.

Claude can now retain information about ongoing projects, client needs, and team workflows.

The system is structured around project-based memory, meaning users can maintain separate memory contexts for distinct initiatives.

For example, a product team planning a launch can keep that context siloed from client service discussions or internal operations. Anthropic says this helps maintain boundaries between unrelated conversations and protects sensitive data from being inadvertently shared across contexts.
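For illustration only, here is one way project-scoped memory can be modeled so that facts recorded for one initiative are never readable from another. The ProjectMemory class and its method names are hypothetical, not Anthropic's implementation.

```python
from collections import defaultdict

class ProjectMemory:
    """Hypothetical sketch of memory siloed by project."""

    def __init__(self):
        # Each project name keys its own isolated list of remembered facts.
        self._stores: dict[str, list[str]] = defaultdict(list)

    def remember(self, project: str, fact: str) -> None:
        self._stores[project].append(fact)

    def recall(self, project: str) -> list[str]:
        # Lookups are scoped to a single project; there is no cross-project read.
        return list(self._stores.get(project, []))

memory = ProjectMemory()
memory.remember("product-launch", "Launch date moved to Q3.")
memory.remember("client-services", "Acme renewal call scheduled Friday.")
print(memory.recall("product-launch"))  # only launch facts, no client data
```

The isolation here is structural: because every lookup is keyed to a single project, one context can't surface another's facts, mirroring the boundary-keeping behavior Anthropic describes.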

Building on individual user memory

This launch for workplace teams builds on an earlier version of memory introduced to individual users on Max, Team, and Enterprise plans in August 2025.

As reported by TechRadar, that update enabled Claude to recall past conversations only when prompted, providing continuity without automatic personalization.

Unlike competitors like ChatGPT and Google Gemini — which proactively store and integrate past conversations — Claude’s memory was designed from the start as an opt-in, user-controlled tool.

The same is true today: for organizations concerned about data control, Anthropic has made memory optional. Users have full control over whether memory is enabled, and enterprise administrators can disable the feature at the organizational level. This opt-in approach reflects Anthropic’s stated commitment to rolling out memory features with a focus on safety and responsible use.

When that earlier version launched, Anthropic emphasized that memory would only be activated upon request, framing this approach as a privacy-focused alternative to persistent background tracking.

Users could ask Claude to surface past discussions, but the AI would otherwise maintain a generic persona. This boundary-first philosophy continues in the workplace rollout, with added tools to view, edit, and control memory directly.

Transparency and user control

Users can view and manage what Claude remembers through a memory summary interface.

This summary, available via settings, offers a transparent look into what the AI has retained from past chats. Users can make edits directly or update the summary by prompting Claude in a conversation. Claude adjusts what it remembers and references based on this feedback.

Introducing Incognito chat for private, memory-free conversations

The company has also introduced Incognito chat, a new mode available to all Claude users, regardless of plan.

Incognito chat sessions do not appear in conversation history and do not contribute to Claude’s memory. This mode is intended for situations where confidentiality or a fresh, context-free exchange is needed — such as brainstorming sessions or sensitive strategic discussions.

However, a spokesperson told us that, as with rival OpenAI, incognito chats aren't immediately deleted. They are still stored for a "minimum of 30 days for safety purposes and to comply with legal requirements"; they are simply excluded from memory and from users' chat histories. Whether the messages are ever fully erased after that window is unclear.

Conversations held in Incognito mode do not alter existing memory or history, and for Team and Enterprise customers, standard data retention policies continue to apply.
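As a rough sketch of the behavior described above, the following hypothetical example shows how an incognito flag could gate what a memory pipeline ingests. The ChatSession type and ingest_for_memory function are invented for illustration and are not Anthropic's code.

```python
from dataclasses import dataclass

@dataclass
class ChatSession:
    transcript: str
    incognito: bool = False

def ingest_for_memory(sessions: list[ChatSession]) -> list[str]:
    # Incognito sessions never reach the memory store, even though (per the
    # article) they may still be retained server-side for at least 30 days.
    return [s.transcript for s in sessions if not s.incognito]

sessions = [
    ChatSession("Q3 roadmap planning"),
    ChatSession("Sensitive strategy brainstorm", incognito=True),
]
print(ingest_for_memory(sessions))  # only the non-incognito transcript
```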

Building long-term, persistent context for teams

With this update, Anthropic positions Claude as a more context-aware collaborator for teams managing multiple projects and workflows. By combining memory management, privacy controls, and portability, the new capabilities aim to improve continuity and reduce friction in workplace AI interactions.