Google launches managed MCP servers for Maps, BigQuery, GKE, and Compute Engine
Google has launched managed Model Context Protocol servers for Maps, BigQuery, Compute Engine, and Google Kubernetes Engine. That matters more than the product name suggests.
For the past year, most “agentic” demos have run on custom glue. Teams connect a model to an internal API, burn time on auth and tool schemas, and then hit the real problem: getting reliable access to live systems without creating a security mess or a pile of brittle integrations.
Google’s pitch is straightforward. Point an MCP client at a Google-run endpoint, authenticate with Google Cloud identity, and let the agent call the approved tools that server exposes. No self-hosted connector layer. No custom wrapper for every API. Potentially a lot less duct tape.
That matters because the bottleneck has moved. Model quality still counts, but plenty of teams are now stuck on tool access, governance, and the cost of keeping integrations from falling apart.
Why this matters now
Google introduced Gemini 3 less than a month ago. This fits neatly alongside it. Better reasoning helps. Better reasoning with dependable access to BigQuery or GKE is what gets a project out of a demo and into somebody’s actual workflow.
Google Cloud product management director Steren Giannini summed up the company’s position with “agent-ready by design.”
That would be easy to dismiss if it only worked inside Google’s own stack. The more interesting part is interoperability. Google says the managed MCP servers work with Gemini CLI and AI Studio, and early tests show they also work with Claude and ChatGPT as MCP clients. That’s the whole point of MCP. The protocol defines how clients discover tools, inspect schemas, and invoke them without every vendor inventing its own plugin format.
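Concretely, MCP tool discovery is a JSON-RPC 2.0 exchange. The sketch below builds a `tools/list` request and reads tool names out of a mocked response; the response contents are hypothetical, but the request shape and the name/description/inputSchema fields follow the MCP spec.

```python
def make_tools_list_request(request_id: int) -> dict:
    # MCP is JSON-RPC 2.0; tool discovery uses the "tools/list" method.
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

# Hypothetical response a managed BigQuery MCP server might return.
mock_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query.execute",
                "description": "Run a SQL query against a scoped dataset.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

def tool_names(response: dict) -> list[str]:
    # A client inspects the advertised tools before invoking any of them.
    return [t["name"] for t in response["result"]["tools"]]

print(tool_names(mock_response))  # ['query.execute']
```

Once a client holds that list, it never has to guess at a tool's calling convention; the schema travels with the tool.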
MCP started with Anthropic and has since moved into a Linux Foundation standardization effort. Google backing it this directly gives the protocol real weight. It also forces every other cloud vendor to decide whether to support a shared standard or keep pushing users toward a proprietary agent stack.
What Google is shipping
At launch, Google is running managed remote MCP servers for four services:
- Maps
- BigQuery
- Compute Engine
- Google Kubernetes Engine
Each server exposes service-specific tools over MCP. Instead of asking a model to improvise from training data, you let it call a real endpoint with a defined schema.
The examples are pretty obvious:
- A Maps server can expose things like `places.search`, `routes.compute`, or `geocode.lookup`
- A BigQuery server can expose `query.execute`, `dataset.list`, or `table.schema`
- A Compute Engine server can offer `instance.start`, `instance.stop`, or `image.list`
- A GKE server can provide `cluster.describe`, `deployment.rollout`, or `pod.logs`
Obvious is fine. This solves a real problem. A lot of agent stacks still depend on handwritten tool definitions, prompt instructions about when to use them, and output parsing that breaks as soon as an API changes shape. MCP standardizes tool discovery and metadata, often with JSON Schema, so the client can see what’s available and how to call it.
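For example, a client can sanity-check arguments against a tool's advertised `inputSchema` before invoking it. This is a minimal stdlib sketch that handles only required keys and a few primitive types; a real client would use a full JSON Schema validator.

```python
def validate_args(args: dict, schema: dict) -> list[str]:
    """Minimal check of tool arguments against an MCP inputSchema.

    Covers required keys and primitive types only; everything else is
    deliberately out of scope for this sketch.
    """
    type_map = {"string": str, "integer": int, "number": (int, float),
                "boolean": bool, "object": dict, "array": list}
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"), object)
        if key in args and not isinstance(args[key], expected):
            errors.append(f"wrong type for {key}: expected {spec['type']}")
    return errors

# Hypothetical schema for a BigQuery-style query tool.
schema = {"type": "object",
          "properties": {"sql": {"type": "string"},
                         "dry_run": {"type": "boolean"}},
          "required": ["sql"]}

print(validate_args({"sql": "SELECT 1"}, schema))  # []
print(validate_args({"dry_run": True}, schema))    # ['missing required argument: sql']
```

The point is that validation logic lives against a published schema instead of against whatever the prompt happened to describe.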
It’s boring infrastructure. That’s a compliment.
The security model matters most
The protocol is useful. The managed governance layer is what enterprises are actually going to care about.
Google says these MCP servers plug into Cloud IAM for authorization, use Model Armor to defend against prompt injection and data exfiltration, and produce audit logs for observability and compliance. That combination is the product.
A decent agent setup needs three things:
- A clear list of tools it can access
- Hard permission boundaries around those tools
- A record of what it actually did
Google is trying to make all three the default.
If an agent authenticates with a service account and only has read access to a specific BigQuery dataset, that policy should hold no matter what prompt a user throws at it. Same for a GKE assistant that can inspect logs but can’t roll out a deployment without explicit approval. That’s the line between a demo people clap for and a system security will actually sign off on.
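The key property is that the decision lives outside the prompt. A deny-by-default check at the tool boundary might look like the sketch below; the identity name and grants table are illustrative, not a real IAM API.

```python
# Hypothetical deny-by-default authorization table. The identity and
# scope values are made up for illustration.
GRANTS = {
    "agent-sa@project": {
        ("bigquery", "query.execute"): {"dataset": "analytics_readonly"},
    }
}

def authorize(identity: str, service: str, tool: str, dataset: str) -> bool:
    # The decision depends only on identity and granted scope,
    # never on anything the user typed into the prompt.
    scope = GRANTS.get(identity, {}).get((service, tool))
    return scope is not None and scope["dataset"] == dataset

print(authorize("agent-sa@project", "bigquery", "query.execute",
                "analytics_readonly"))  # True
print(authorize("agent-sa@project", "bigquery", "query.execute",
                "hr_salaries"))         # False
```

No prompt-injection payload can change the answer, because the check never reads the prompt.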
Model Armor is also worth watching. Google frames it as protection against agentic threats such as prompt injection and exfiltration. Fair enough, but nobody should kid themselves. Guardrails help. They do not solve prompt injection. They narrow the blast radius.
That’s still useful. Plenty of companies would happily settle for governed, logged, scoped, rate-limited, and harder to abuse.
Where this helps first
BigQuery and GKE are the strongest launch examples.
A BigQuery MCP server gives data teams a sane way to let an agent run live queries without stuffing raw credentials into an app or building a private SQL wrapper from scratch. You can scope access to a project or dataset, audit every call, and let the model answer from current data instead of stale embeddings.
There are obvious catches. Query latency can add up quickly in multi-step agent flows. Costs can get ugly if an agent loops or writes bad SQL. Dry runs, quotas, and caching will matter. So will guardrails around which functions and datasets an agent can touch.
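A pre-flight budget check is one way to cap the cost risk. The sketch below assumes a dry-run byte estimate (in BigQuery terms, submitting the job as a dry run and reading the bytes it would process) and an illustrative on-demand price; the helper and the numbers are placeholders, not real API calls.

```python
BUDGET_USD = 5.00
PRICE_PER_TIB_USD = 6.25  # illustrative on-demand rate; check current pricing

def estimate_bytes(sql: str) -> int:
    # Placeholder for a real dry run, which would return the bytes the
    # query would scan without actually executing it.
    return 2 * 1024**4 if "big_table" in sql else 10 * 1024**2

def within_budget(sql: str, spent_usd: float) -> bool:
    # Refuse to run any query that would push the session past its budget.
    cost = estimate_bytes(sql) / 1024**4 * PRICE_PER_TIB_USD
    return spent_usd + cost <= BUDGET_USD

print(within_budget("SELECT * FROM big_table", 0.0))  # False (~$12.50 estimated)
print(within_budget("SELECT 1", 0.0))                 # True (a few bytes)
```

A looping agent hits the budget wall instead of the invoice.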
GKE and Compute Engine raise the stakes further. An SRE assistant that can inspect pods, fetch logs, describe clusters, or restart a noncritical instance could save time. One with broad write access could do expensive damage fast. Teams that handle this well will start read-only, put approval gates in front of destructive actions, and log everything with agent identity and session context.
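That pattern is easy to sketch: read-only tools execute directly, destructive tools park until approved, and everything else is denied. The tool names echo the hypothetical examples earlier, not a published catalog.

```python
# Illustrative dispatch gate for an SRE assistant. Tool names are
# hypothetical examples, not a real tool list.
READ_ONLY = {"cluster.describe", "pod.logs", "instance.list"}
NEEDS_APPROVAL = {"instance.stop", "deployment.rollout"}

def dispatch(tool: str, approved: bool = False) -> str:
    if tool in READ_ONLY:
        return "executed"
    if tool in NEEDS_APPROVAL:
        # Destructive actions wait for an explicit human approval.
        return "executed" if approved else "pending_approval"
    return "denied"  # deny-by-default for anything unlisted

print(dispatch("pod.logs"))                           # executed
print(dispatch("deployment.rollout"))                 # pending_approval
print(dispatch("deployment.rollout", approved=True))  # executed
```

The interesting design choice is the default: unknown tools are denied, not allowed, so adding a new tool is an explicit decision.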
Maps is less dramatic, but it’s one of the cleanest examples of why tool calling beats model recall. If an agent needs current routes, locations, or geocoding, it should call Maps. A surprising amount of agent design comes down to stopping the model from bluffing when a live system should answer instead.
Apigee may matter more than the launch list
Google’s own service endpoints are useful, but Apigee could be the bigger story for large enterprises.
Google says Apigee can translate existing APIs into MCP servers while keeping policy controls such as quotas, keys, allowlists, and analytics in place. That gives companies a path to expose internal or partner APIs to agents without rebuilding their API layer around some brand-new agent framework.
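The translation itself is mechanical: roughly, one API operation maps onto one MCP tool definition. The sketch below derives a tool from a single OpenAPI-style operation; the endpoint is made up, but the field names follow OpenAPI 3 and the MCP tool shape.

```python
# Hypothetical OpenAPI operation for an existing internal API.
openapi_op = {
    "operationId": "listOrders",
    "summary": "List orders for a customer",
    "parameters": [
        {"name": "customerId", "schema": {"type": "string"}, "required": True},
        {"name": "limit", "schema": {"type": "integer"}, "required": False},
    ],
}

def to_mcp_tool(op: dict) -> dict:
    # Parameters become inputSchema properties; required flags carry over.
    props = {p["name"]: p["schema"] for p in op["parameters"]}
    required = [p["name"] for p in op["parameters"] if p.get("required")]
    return {"name": op["operationId"],
            "description": op["summary"],
            "inputSchema": {"type": "object",
                            "properties": props,
                            "required": required}}

print(to_mcp_tool(openapi_op)["name"])  # listOrders
```

Quotas, keys, and allowlists would keep applying at the gateway underneath; only the discovery surface changes.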
That’s a sensible move. Most businesses do not need another orchestration stack. They need a way to present existing services as discoverable tools with stable schemas and existing governance.
If this works, a product catalog API, order system, or internal incident platform can show up as MCP tools with all the usual enterprise controls still attached. That’s much closer to how real companies buy infrastructure. They wrap what already exists.
The standards fight is still there
Google’s support gives MCP more legitimacy, but it also sharpens the tension in this market.
Every major platform wants to own the control plane for agents. OpenAI has its own action and tool ecosystem. Microsoft has Graph and Copilot integrations. AWS has Bedrock Agents and a growing stack of service connectors. None of them are eager to hand over strategic control.
Still, developers are tired of one-off plugin formats. If MCP becomes the common contract for tool discovery and invocation, buyers get some portability across models and clients. That doesn’t remove lock-in. Data, IAM, quotas, billing, and surrounding cloud services still create plenty of gravity. But it does reduce switching costs at the tool layer, and that’s meaningful.
Google’s position here is pragmatic. Support the open protocol, make it easy to use from rival model clients, and still route the actual work through Google services.
What teams should watch before adopting it
The upside is real. So are the failure modes.
A few things matter immediately:
- Use dedicated agent identities. Don’t run this through a human user’s credentials.
- Scope aggressively. BigQuery read-only access to one dataset is very different from project-wide admin.
- Add approval steps for write actions. `instance.stop` and `deployment.rollout` should not be casually autonomous.
- Expect latency. Tool chains that look clean in a diagram can feel slow in production.
- Set quotas and budgets. Agents can hit expensive systems repeatedly without much awareness of cost.
- Feed audit logs into your SIEM. If an agent goes sideways, you’ll want a trace tied to a service account, session, and prompt flow.
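A workable audit record does not need to be fancy; one JSON line per tool call, keyed by identity and session, is enough for a SIEM to reconstruct the flow. The field names below are an assumption for illustration, not Google's log format.

```python
import json
from datetime import datetime, timezone

def audit_record(identity: str, session: str, tool: str,
                 args: dict, outcome: str) -> str:
    # One JSON line per tool call, tying the action to a service
    # account and session so it can be traced end to end.
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "identity": identity,
        "session": session,
        "tool": tool,
        "args": args,
        "outcome": outcome,
    })

line = audit_record("agent-sa@project", "sess-42", "query.execute",
                    {"sql": "SELECT 1"}, "success")
print(json.loads(line)["tool"])  # query.execute
```

Structured lines like this drop straight into any log pipeline without custom parsing.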
The larger limitation is basic but important. Managed MCP servers solve access and governance. They do not fix bad planning, weak prompts, poor retry logic, or models that misuse tools. Teams still need evaluation, fallback behavior, and sane workflow design. The protocol can standardize the socket. It can’t make the application good.
Still, this is one of the cleaner agent infrastructure moves from a major cloud vendor in months. Google picked the right problem. Tool access has been the messy middle layer, and most teams don’t want to keep owning it by hand.
If these servers are reliable, and if Google expands the catalog quickly into storage, logging, monitoring, databases, and security as promised, they could become part of the default stack for enterprise agents. Not because they’re flashy. Because they remove work teams are tired of doing.