Google launches managed MCP servers that let AI agents securely access Maps, BigQuery, Compute and Kubernetes services.
Photo Credit: Google
MCP or Model Context Protocol was developed and open-sourced by Anthropic
Google, on Wednesday, announced official support for the Model Context Protocol (MCP) across its portfolio of services and Google Cloud products. The company said these fully managed remote servers will make it easier for developers to connect artificial intelligence (AI) agents, such as those powered by Gemini or other AI models, with real-world tools, data sources, and enterprise systems. This will allow both developers and enterprises to connect their AI agents to a wide range of third-party data sources. The Mountain View-based tech giant highlighted that the support will be rolled out to all of its services incrementally.
In a blog post, the tech giant announced that during the initial phase of the rollout, four of its services will support MCP servers: Google Maps, BigQuery, Google Compute Engine (GCE), and Google Kubernetes Engine (GKE). This means AI agents can now call these platforms and, with the appropriate permissions, access their data to perform tasks in real-world scenarios.
Highlighting an example, the company said a BigQuery MCP server will allow an agent to interpret table schemas, run queries directly on enterprise data, and get insights without having to move data into the AI's internal memory. Similarly, the MCP server for Maps will provide grounding in real-world location information, including weather, routes, and points of interest. This will enable AI agents to answer travel-planning questions with reliable, up-to-date data.
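To make the idea concrete, here is a rough, hypothetical sketch of what such a tool call could look like at the protocol level. MCP messages use JSON-RPC 2.0, and remote servers are reached over HTTP; the endpoint URL, tool name, project ID, and query below are illustrative placeholders rather than Google's published values, and a real session would begin with an `initialize` handshake before any tool calls.

```python
# Illustrative sketch only: the endpoint URL, tool name, and project ID are
# hypothetical placeholders, not Google's published values. It shows the shape
# of an MCP "tools/call" request (JSON-RPC 2.0) that an agent runtime might send
# to a remote MCP server over HTTP. A real session starts with an "initialize"
# handshake, omitted here for brevity.
import requests

MCP_ENDPOINT = "https://bigquery.example.googleapis.com/mcp"  # hypothetical URL
ACCESS_TOKEN = "ya29.example-oauth-token"                     # obtained via IAM/OAuth

# A tools/call request asking a (hypothetical) query tool to run SQL on enterprise data.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_sql",                      # hypothetical tool name
        "arguments": {
            "project_id": "my-enterprise-project",  # placeholder project
            "query": "SELECT region, SUM(revenue) FROM sales.orders GROUP BY region",
        },
    },
}

response = requests.post(
    MCP_ENDPOINT,
    json=payload,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        # Remote MCP servers may reply with plain JSON or an SSE stream.
        "Accept": "application/json, text/event-stream",
    },
    timeout=30,
)
print(response.json())  # the tool result the agent feeds back to the model
```

The key point is that the agent never ingests the underlying tables; it sends a structured tool call and receives only the query result.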
Google isn't stopping with its own services. The company is extending MCP support to Apigee, its API management platform, which many enterprises use to expose and govern their internal data and workflows. With this integration, organisations can turn their existing APIs into MCP-compatible tools without rewriting or rearchitecting them. This means agents can use a company's own bespoke systems, such as customer databases, workflow systems, and business logic, as if they were native MCP tools, all while applying enterprise governance and security policies.
On security, the tech giant says it has implemented multiple measures to ensure that these servers are protected from cyberattacks. For enterprises, administrators can manage access through Google Cloud's Identity and Access Management (IAM), use audit logging to track agent interactions, and apply “Model Armor” protection to help mitigate threats such as indirect prompt injection.
At its core, MCP is an open-standard protocol developed by Anthropic. It is often described in the industry as something like a “USB-C port for AI,” defining how AI models and their hosting applications connect to data systems, application programming interfaces (APIs), tools and services via a common format and workflow.
Previously, developers had to build custom connectors for each API or data source, a task that was time-consuming and fragile. With MCP, AI clients such as Gemini CLI, AI Studio or other agent runtimes can call a remote MCP server to discover, authenticate and use external resources in a standard way.
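As a rough illustration of that standard workflow, the sketch below uses the open-source MCP Python SDK (the `mcp` package) to connect to a remote server, perform the handshake, and discover available tools. The server URL is a hypothetical placeholder, and the exact client module and function names may differ between SDK versions; what matters is the sequence of connect, initialize, then list tools.

```python
# Minimal discovery sketch against a remote MCP server, assuming the open-source
# MCP Python SDK ("pip install mcp"). The URL is a placeholder, not a real Google
# endpoint, and module/function names may vary across SDK versions.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://maps.example.googleapis.com/mcp"  # hypothetical endpoint


async def main() -> None:
    # Open the HTTP transport to the remote server.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            # Standard MCP handshake, then tool discovery via "tools/list".
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())
```

Because every MCP server exposes tools through this same discovery and call flow, an agent built against one server can work with any other without bespoke connector code.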
Until now, Google's support for MCP usually required community-built servers or open-source projects that developers had to install and manage themselves. With the newly announced managed, remote MCP servers, the tech giant takes care of that heavy lifting. Developers can leverage this to let their Gemini-powered AI agents connect to a globally consistent, enterprise-ready endpoint for Google and Google Cloud services.