It dropped in the midst of a chaotic and tragic news day yesterday, but OpenAI made a significant upgrade to ChatGPT that's worth further consideration among software developers: the company added support for the emerging Model Context Protocol (MCP) standard directly into ChatGPT's new developer mode, allowing third-party developers to connect their own external, MCP-compatible servers and tools directly to their ChatGPT accounts.
This provides a huge advantage to third-party developers who want to access and modify their own websites, products and services directly within ChatGPT's web interface.
Instead of logging into separate apps, clicking through menus, or juggling multiple dashboards, a developer with ChatGPT dev mode switched on (and a Plus or Pro plan, at $20 or $200 per month, respectively) can ask ChatGPT a natural-language question and get a direct answer from their own service, or even make changes to it, all from within a single chat.
The company cautions that while the feature is powerful, it comes with risks.
In OpenAI’s own words: “it's powerful but dangerous, and is intended for developers who understand how to safely configure and test connectors. When using developer mode, watch for prompt injections and other risks, model mistakes on write actions that could destroy data, and malicious MCPs that attempt to steal information."
Why OpenAI's move is so significant and helps further enshrine MCP as an AI industry standard
MCP itself is an open standard, first introduced by Anthropic in November 2024, that provides a consistent way to connect AI assistants to external systems such as content repositories, enterprise software, or developer tools.
Anthropic likens MCP to a kind of USB-C port for AI applications: just as USB-C simplifies hardware connections, MCP standardizes how AI models communicate with external data and services.
Since its release, MCP has gained rapid adoption across the industry. Early adopters include Block, Apollo, Cloudflare, MongoDB, and PayPal, all of which have made it possible for developers to connect AI chatbots and other gen AI tools to their services and retrieve information from them. There is even a public directory website of MCP servers to which AI developers can now connect large language models.
The purpose of ChatGPT's new developer mode with MCP support, then, is to give developers a standardized, relatively simple way to connect their systems, tools, or services directly to ChatGPT and maintain that connection going forward, retrieving information or executing operations on the connected MCP server from within their own ChatGPT interface.
Instead of building custom integrations or relying on OpenAI’s old plugin system, developers can now host an MCP server that exposes specific functions — such as checking inventory, updating records, or processing payments — and then execute them from within their own ChatGPT Plus or Pro accounts.
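Under the hood, MCP uses JSON-RPC 2.0 messages between the client (here, ChatGPT) and the developer's server. A rough sketch of what a tool invocation looks like on the wire, using an illustrative `check_inventory` tool that is not from OpenAI's demos:

```python
import json

# Hypothetical example: the JSON-RPC 2.0 request an MCP client such as
# ChatGPT would send to invoke a tool exposed by a developer's MCP server.
# The tool name and arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_inventory",
        "arguments": {"sku": "WIDGET-42"},
    },
}

# A conforming server replies with a result whose content blocks the model
# can read and incorporate into its answer.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "12 units in stock"}],
        "isError": False,
    },
}

print(json.dumps(request))
```

The standardized request/response shape is what lets one connector serve any MCP-compatible client, not just ChatGPT.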
Industry observers and participants argue that MCP is fast becoming a common language for enterprise AI. A recent VentureBeat article noted that despite only launching months earlier, MCP appears to be the leading candidate for interoperability in the agentic AI ecosystem.
Unlike traditional application programming interfaces (APIs), MCP allows for more granular control and security. Enterprises can define what tools are exposed, require authentication from agents, and enforce rules about what models can or cannot access. This fine-grained control makes MCP particularly appealing in corporate environments where security and compliance are top priorities.
For developers working with ChatGPT’s new developer mode, this means the connectors they create may not just serve one-off integrations — they could be building into a broader ecosystem standard. MCP was created as an open protocol so that one connector can work across different AI ecosystems, not just ChatGPT.
How it works
Enabling developer mode requires users to navigate to Settings → Connectors → Advanced → Developer mode.
Once active, the option to add connectors appears in conversations. Developers can link remote MCP servers over supported transports such as SSE and streamable HTTP. Authentication options include OAuth or no authentication at all.
Within the connector settings, developers can toggle individual tools on or off, refresh connectors to pull the latest descriptions, and inspect tool details.
During conversations, ChatGPT can then invoke the selected tools. Developers may also steer tool usage by writing explicit prompts, such as telling the model to only use a specific connector for a given task, to avoid ambiguity with built-in features.
From Stripe to Slack: example use cases
OpenAI posted a video demo on X showing how its new dev mode with MCP support can perform linked actions across different services — once a developer has taken the time to expose and connect the MCP server to ChatGPT.
In one example, ChatGPT used an MCP Stripe connector to check a balance, create a customer invoice, and confirm the write action before generating it. The invoice details were then viewable directly in the chat session.
A second demonstration layered multiple connectors. ChatGPT first processed a refund through Stripe, then used a Zapier connector to send a Slack message notifying the customer.
Other possible integrations include updating Jira tickets via Atlassian’s MCP server or using Cloudflare’s connector to convert web pages to Markdown.
These examples highlight the flexibility of chaining connectors together to automate multi-step processes, with ChatGPT managing the sequencing between tools.
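The chaining pattern described above can be sketched in a few lines. This is an illustrative model of the flow, not OpenAI's implementation; the connector and tool names (Stripe refund, Zapier-to-Slack message) mirror the demo, and the confirmation gate stands in for ChatGPT's in-chat approval prompt:

```python
# Hypothetical sketch: sequence tool calls across connectors, pausing for
# confirmation before any write action, as developer mode does in-chat.

def run_chain(steps, confirm):
    """Execute (connector, tool, args, is_write) steps in order."""
    results = []
    for connector, tool, args, is_write in steps:
        if is_write and not confirm(connector, tool, args):
            results.append((tool, "skipped: not confirmed"))
            continue
        # In ChatGPT, the model would invoke the remote MCP tool here;
        # we just record the call for illustration.
        results.append((tool, f"called {connector}.{tool} with {args}"))
    return results

steps = [
    ("stripe", "create_refund", {"charge": "ch_123"}, True),
    ("zapier", "send_slack_message",
     {"channel": "#support", "text": "Refund issued"}, True),
]
out = run_chain(steps, confirm=lambda connector, tool, args: True)
```

Swapping in a `confirm` callback that inspects the arguments before approving is the same gate OpenAI surfaces to the user before each write.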
Controls and safeguards
To mitigate risks, all write actions require explicit confirmation by default. Before a tool executes a write, the developer can expand the tool call details to inspect the full JSON payload, including both inputs and expected outputs.
Developers may choose to remember an approve or deny choice for the duration of a conversation, but new or refreshed sessions will require confirmation again.
Tools flagged with the readOnlyHint annotation are treated as read-only, while all others are classified as write actions.
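The classification rule is simple enough to state in code. In the MCP tool descriptor (the shape returned by `tools/list`), `readOnlyHint` lives under the tool's `annotations`; the example tools below are hypothetical:

```python
# Sketch of the rule described above: a tool annotated readOnlyHint=True is
# treated as read-only; every other tool is assumed to be a write action.

def is_write_action(tool):
    return not tool.get("annotations", {}).get("readOnlyHint", False)

tools = [
    {"name": "get_balance", "annotations": {"readOnlyHint": True}},
    {"name": "create_invoice", "annotations": {}},
    {"name": "process_payment"},  # no annotations at all
]

writes = [t["name"] for t in tools if is_write_action(t)]
print(writes)  # prints ['create_invoice', 'process_payment']
```

Note the default: a tool with no annotations is treated as a write, which is the conservative choice when a server author forgets to label a read-only tool.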
Guidance for developers
OpenAI provides recommendations for making connectors easier and safer to use.
Tool names and descriptions should be action-oriented and include clear instructions about when to use them, as well as parameter explanations.
This guidance helps the model distinguish between similar tools and avoid defaulting to inappropriate built-in options.
The documentation also advises developers to disallow alternatives when necessary, specify input shapes for tool calls, and define sequencing when multiple steps are required.
For example, one prompt might instruct ChatGPT to read a file from a repository first and then write modified content back, avoiding any intermediate or unintended actions.
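One concrete way to "specify input shapes" is through the JSON Schema that an MCP tool descriptor carries in its `inputSchema` field. The repository tool below is hypothetical, and the validator is a minimal required-fields check rather than a full JSON Schema implementation:

```python
# Hedged sketch: a tool descriptor whose description encodes sequencing
# ("only call after read_file") and whose inputSchema pins down the inputs.

write_file_tool = {
    "name": "write_file",
    "description": (
        "Write modified content back to a file in the repository. "
        "Only call this after read_file has returned the current contents."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Repo-relative file path"},
            "content": {"type": "string", "description": "Full new file contents"},
        },
        "required": ["path", "content"],
    },
}

def validate(args, schema):
    """Minimal required-field check; a real server would use a schema library."""
    return all(k in args for k in schema.get("required", []))

ok = validate({"path": "README.md", "content": "updated"},
              write_file_tool["inputSchema"])
```

A precise schema plus an action-oriented description gives the model less room to guess, which is the point of OpenAI's guidance.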
Connecting to the broader MCP ecosystem
The release of developer mode follows other recent updates to OpenAI’s developer-facing tools, particularly the Responses API, which received support for MCP in May 2025.
Among its features are support for remote MCP servers, integration of GPT-4o’s image generation model, and access to built-in tools such as Code Interpreter and improved file search.
That API was designed as a unified toolbox for building agentic applications and has already processed trillions of tokens since its launch earlier this year.
Building on earlier commitments
The expansion of MCP support across OpenAI’s ecosystem also ties back to comments from CEO and co-founder Sam Altman earlier this year.
In March, Altman wrote on X that “people love MCP and we are excited to add support across our products,” noting that it was already available in the Agents SDK and would soon extend to the ChatGPT desktop app and Responses API.
With developer mode now live, that roadmap is taking clearer shape as the company moves to integrate MCP more deeply into both its APIs and its flagship ChatGPT product.
Comparing OpenAI’s and Anthropic’s MCP guidance
Although both OpenAI and Anthropic are building around the same open MCP standard, their guidance for developers reflects differences in focus and product integration.
OpenAI’s instructions for developer mode are tightly tied to ChatGPT’s interface. The company emphasizes practical prompting techniques for ensuring the right tool is called, such as telling the model to only use a specific connector and to avoid built-in tools.
It also advises developers to specify input formats and sequencing when chaining multiple tool calls. Much of the guidance centers on guardrails and safety: inspecting JSON payloads, confirming write actions, and understanding that model mistakes could delete or alter important data. In other words, OpenAI frames MCP use within ChatGPT as powerful but risky, and stresses developer responsibility in setting up and testing connectors safely.
Anthropic’s approach, by contrast, is more infrastructure-oriented. Its documentation highlights MCP as an open protocol that enables developers to either expose data through servers or build clients that connect to them. Rather than focusing on prompt design or in-chat usage, Anthropic stresses the architecture: MCP servers as connectors to enterprise systems, MCP clients as AI applications, and a growing ecosystem of prebuilt servers for popular platforms like Google Drive, GitHub, and Slack.
Its guidance encourages developers to quickly spin up servers, connect them to Claude or other tools, and contribute to the open-source ecosystem.
Where they converge is in treating MCP as a way to overcome the problem of fragmented integrations. Both also frame it as critical for building agentic systems—AIs that not only generate text but also act on external systems in structured ways.
The differences reflect their respective product strategies. OpenAI’s guidance is tailored to developers who want to experiment inside ChatGPT and its surrounding APIs, where the risks of write actions are immediate and visible to end users. Anthropic, meanwhile, positions MCP as a foundational building block for enterprise infrastructure and developer platforms, encouraging organizations to standardize their tool connections at the protocol level. For developers, this means OpenAI’s focus is on safe in-chat usage, while Anthropic’s is on building scalable systems that can serve entire organizations.
What it all means
For developers already experimenting with MCP servers, the new mode significantly broadens what can be done inside ChatGPT.
Instead of only fetching data, users can now carry out full workflows — updating records, generating invoices, issuing refunds, or coordinating with third-party services like Slack — directly within a conversation. The ability to chain connectors together also opens the door to more complex automations.
At the same time, the emphasis on careful setup and review reflects the dual nature of the update: powerful but potentially risky if misused. By keeping confirmations in place and requiring developers to inspect tool calls, OpenAI appears to be prioritizing responsible use as the feature rolls out.
