Looks similar to most other mid-level remote procedure call protocols, from XMLRPC to CORBA. The usual sync, async, poll, progress, test problems apply. Things I'd expected to see and didn't:
- Client to server: "tell me what you can do". This has always been hard, but in the LLM era it could potentially work, because a textual response would work.
- Similarly, being able to ask "How do I..." might be feasible now. It should be possible to talk to a new server and automatically figure out how to use it.
- "How much is this going to cost me?" Plus some way to set a cost limit on a query.
cost isn't part of MCP in the same way that cost isn't part of HTTP. It wouldn't really make sense to include that in the protocol, just put it in the application layer on top.
It's a little different. These are systems which are explicitly able to achieve better or worse outcomes by tuning the cost, in ways that aren't especially configurable otherwise. For an HTTP API, you can read the docs and use the small image vs large image endpoint or whatever and have a clear idea of what you're getting and for what cost. For LLMs, it would be very nice to be able to communicate about the desired and actual cost breakdowns for each sub-action.
It would also be nice to do that for HTTP for the same reason. You can also read the fine docs for your MCP server, and the LLM can read the docs too.
Especially since the cost in some (most?) cases won't come from the MCP server but from the LLM using it.
Http 402: “my time to shine”
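The cost-limit idea from the thread above could be sketched client-side. Everything here is hypothetical: MCP defines no cost field, so the class name, per-action charges, and dollar amounts are made up for illustration.

```python
class BudgetExceeded(Exception):
    pass


class CostBudget:
    """Hypothetical client-side guard: each sub-action reports an
    estimated cost, and the run aborts once a user-set limit is hit."""

    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, action: str, cost_usd: float) -> None:
        # Refuse the action before spending, rather than after.
        if self.spent + cost_usd > self.limit:
            raise BudgetExceeded(
                f"{action} would exceed the ${self.limit:.2f} budget"
            )
        self.spent += cost_usd


budget = CostBudget(limit_usd=0.10)
budget.charge("summarize_page", 0.04)
budget.charge("generate_image", 0.05)
try:
    budget.charge("generate_image", 0.05)  # would push the total to 0.14
except BudgetExceeded as e:
    print(e)
```

The interesting part would be standardizing how servers report the estimates, which is exactly what the protocol currently leaves out.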
The first one is included: you can ask for available actions as well as MCP server feature support. Is there something else that's missing?
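For concreteness, that discovery step is a JSON-RPC exchange; the `tools/list` method name and the response shape (name, description, input schema per tool) come from the MCP spec, while the concrete weather tool is invented for illustration:

```python
import json

# Client -> server: ask what tools are available (MCP "tools/list").
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A typical server reply: each tool carries a name, a natural-language
# description, and a JSON Schema for its arguments. The tool itself is
# a made-up example.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Return the current weather for a city.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

print(json.dumps(response, indent=2))
```

The natural-language descriptions are what makes this workable in the LLM era: the model reads them the way a human would read docs.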
honestly, it looks like an unnecessary additional protocol on top of a REST API. Couldn't you just add an optional "LLM-description" field to any REST API that provides a JSON description of how to use it? That's what it sounds like, because every LLM will already have its own "idea" of how to use an MCP interface. So why have a totally disparate thing.
Just seems like i+1 syndrome with computing.
Given their anti-trust struggles, if Google for some reason dominates AI, they'd not want people to bring up anti-competitive behavior as a reason for that. Adopting open standards, especially open standards conceived outside Google is good for everyone including Google. They're well placed - from research to hardware to software and data.
They'll also want the industry to rapidly move forward and connect data to AI. MCP has momentum.
To escape the anti-trust struggles, they'll need to provide MCP servers (meaning provide callable tools). Stopping at providing an MCP client (the chatbot that connects to MCP servers) isn't enough.
I'll believe in Google not actively being anti-competitive when I (a paying customer) can access/modify my Gmail, Google Contacts, Google Sheets, plan routes in Google Maps, ... from my local LLM chatbot using MCP.
I mean, people already have MCP wrappers around the Gmail API.
I'm kind of glad that the industry is distracted by vibe-coding, "tools", and MCP.
It's so clearly a dead-end. It gives freethinking developers and innovators time to focus on the next generation of software.
Spot on. Decided to get myself a ThreeJS book instead of vibing my way through. Wrote this post on it: https://willem.com/blog/2025-04-15_vibe-coding/
It doesn't really matter what it is as there are many equally good implementations, but whoever sets up the framework first and cements usage is likely to guarantee dominance for the foreseeable future. Probably into AGI and post.
Model Context Protocol seems good enough to me.
The idea behind smolagents is better.
I agree the idea seems much better - and I think it's what a lot of big-shops are doing internally too. An earlier article [1] showed that internally, gemini has a python sandbox it uses to call other google services.
I'm guessing the main limitation is that it's harder to orchestrate, especially on clients.
1. https://news.ycombinator.com/item?id=43508418
Related, discussion on A2A from the other day:
https://news.ycombinator.com/item?id=43631381
"The Agent2Agent Protocol (A2A)", 279 comments
I hope Gemini gets a desktop app where MCP servers are more useful, but wonder if Google's security posture allows it.
Google owns 14% of Anthropic, author of MCP.
No public information whether Google's investment in Anthropic leads to voting power though.
I think we can assume that even if it comes with any voting power, it's far less than 14%. No startup growing like that would give up shares with the same voting rights as the founders'.
I hope they also improve their JSONSchema support for structured output and tool calling. Currently it has many limitations compared to OpenAI’s, for example it doesn’t support “additionalProperties” which eliminates an entire class of use cases and makes it immediately incompatible with many MCP servers.
Marketing the API as OpenAI-compatible and then me getting 400s when I switch to Gemini leaves a sour taste in the mouth, and doesn’t make me confident about their MCP support.
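To make the complaint concrete, here is the kind of schema that trips this up. It is standard JSON Schema, where `additionalProperties` is the only way to describe an open-ended map; a provider that rejects it (as the comment says Gemini did at the time) forces you to enumerate every key up front:

```python
# Standard JSON Schema for an object with an open-ended string-to-string
# "metadata" map -- e.g. arbitrary user-supplied key/value tags.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "metadata": {
            "type": "object",
            # Without additionalProperties there is no way to say
            # "any keys, all values strings"; you'd have to list keys.
            "additionalProperties": {"type": "string"},
        },
    },
    "required": ["name"],
}

print("additionalProperties" in schema["properties"]["metadata"])
```

Since MCP tool inputs are declared as JSON Schema, any server whose tools use this construct breaks against such a backend, which is the incompatibility being described.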
Does MCP solve authentication on user's behalf which stifled OpenAI's GPTs?
Tools often need access to data sources but I don't want to hard code passwords.
It depends on whether you're using stdio or HTTP: the former gets credentials from the environment, and the latter uses OAuth.
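A minimal sketch of the stdio pattern: the host launches the server as a child process and hands it secrets through the environment, so nothing is hard-coded in tool definitions. The variable name and value here are placeholders, and the child is a stand-in for a real MCP server speaking JSON-RPC over stdin/stdout:

```python
import os
import subprocess
import sys

# Hypothetical: in a real setup this value would come from a secret store.
env = dict(os.environ)
env["DB_PASSWORD"] = "read-from-your-secret-store"

# Stand-in for launching a stdio MCP server; we just echo the variable
# back to show it arrived in the child's environment.
proc = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    env=env,
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())
```

The HTTP transport can't rely on a shared process environment, which is why it needs OAuth instead.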
check out arcade.dev for this!
Every time I see MCP I think of the Unisys mainframe OS.
It runs on x86 processors (under emulation), so it'd make some sense if Google offered it as an option in Google Cloud. Maybe they could offer OS 2200, GCOS, and GECOS as well.
Meanwhile I think Tron...
I think of https://en.wikipedia.org/wiki/Metacarpophalangeal_joint
They have a chance to come up with a user-friendly framework on top of MCP and make a big difference in accelerating adoption. The cherry on the cake would be if they built a UI on top of it to build/monitor/visualize. Hosted by them with a generous free tier, i.e. more private data to munch on for ads (only half joking).
Are they going to release a Gemini desktop app with MCP support so normal people can use it?
Well, Google is one of the major investors in Anthropic, so I'm not surprised.
interested to see if Agent-to-Agent protocol duplicates the MCP functionality eventually
I guess they could both expand into the other's current domain, but right now they're solving pretty different problems.
Master Control Program?
It's 2025's ROT13 cipher for API. /s
It's also "Model Context Protocol", a protocol for LLMs to interact with third-party services.
The ROT13 cipher for API is NVK. NVidia Knows
isn't that exactly how the Master Control Program started?
[dead]
It’s terribly insecure as-is [1]. But so was HTTP. The spec isn’t final, so hopefully it will improve.
[1] https://blog.sshh.io/p/everything-wrong-with-mcp
> MCP initially didn’t define an auth spec and now that they have people don’t like it.
Just wrap it in an SSH tunnel or a HTTPS websocket
> MCP servers can run (malicious code) locally.
Just run it in a Docker container
>> MCP initially didn’t define an auth spec and now that they have people don’t like it.
> Just wrap it in an SSH tunnel or a HTTPS websocket
I assume this is sarcasm, but if not (and for people that take it at face value), it fundamentally misunderstands what auth is used for.
> Just run it in a Docker container
You should probably read the original article in the footnotes of OP's article: https://equixly.com/blog/2025/03/29/mcp-server-new-security-...
While a container will surely protect you from those, it will also prevent you from using the features implemented by those MCP servers.
Containers are usually considered pretty weak security at best, especially since you don't always control what the user does with them (Docker vs rootless Podman, etc.).
Anyone else wish Google would just stay away from MCP? They manage to ruin everything.
I hate to say it but Embrace, Extend, Extinguish.
Phase one is adopting it (you are here). Phase two is somehow turning it into a Web standard deeply integrated with Chrome, where they have no real competition and it takes billions of dollars just to keep pace.
Not sure about Extinguish to be honest, Google just wants the monopoly and they already have it.
Is there a good place to read on what the benefit of MCP is? I'm behind the curve on this agentic AI shit and am not quite sure where to look
I'd start with the source and see if you think there's any benefits: https://modelcontextprotocol.io
search on HN for MCP, limit to last week -- there are a few articles there
it's wild to me how rapidly this has exploded in popularity. there's even a twitter account/site dedicated to news updates - https://x.com/getMCPilled and mcpilled.com
[dead]
Didn't Google introduce A2A just a few days ago? Why isn't Google itself heavily invested in its own protocol?
Smells like a new project in Killed By Google graveyard.
A2A - "Agent to Agent", MCP - "Model Context Protocol", they're different things solving different problems.
No
They did.
We're observing a response to takes that A2A meant they weren't going to support MCP.
Everyone's got a take and a response these days, it's a nice little infinite loop of complaints and that keeps PR kicking.