
Google Search Console + Claude Code: Querying SEO Data in Natural Language with MCP

Claudio Novaglio

Search Console has a structural problem. It shows data, but doesn't answer questions.

Anyone working with Google Search Console daily knows it: the data is there, but extracting it usefully requires a ritual of filters, CSV exports, pivot tables, and hours of manual work. Want to know which keywords with average position between 8 and 15 have below-average CTR? That takes at least 20 minutes and a spreadsheet. Want to cross that data with indexing issues? Add another 20 minutes and a second tab.

Anthropic's MCP protocol (Model Context Protocol) changes this paradigm. With a dedicated Search Console MCP server, Claude Code doesn't just read your data: it queries it, crosses it, and returns operational insights in natural language. The dashboard becomes a conversation.

In this article I analyze the MCP server for Google Search Console, the 12 tools it provides, and show how I'd use it on a real site—with real data, real problems, and workflows that transform hours of analysis into seconds of dialogue with AI.

What's an MCP server for Search Console and why it changes the game

MCP is an open standard created by Anthropic that lets AI models interact directly with external tools—databases, APIs, cloud services—through a standardized interface. I've already discussed this protocol in my article on Screaming Frog MCP: the principle is the same, but the application is radically different.

If Screaming Frog MCP tells you what happens on your site, the Search Console MCP server tells you what happens in the SERP.

The difference compared to traditional Search Console API usage is substantial. The GSC API is powerful but raw: it requires OAuth authentication, queries structured as JSON, pagination management, and, above all, the ability to formulate the right question in the right format in advance. The MCP server eliminates this friction.

With the MCP server, you tell Claude: "which keywords am I losing CTR on compared to last month?" and the model knows which tool to call, how to filter the data, and how to present the result. It's not a cosmetic simplification. It's an interface shift: from programmatic to conversational.
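
To make the "programmatic" side of that contrast concrete, here is a sketch of the boilerplate a raw Search Console API call involves. The request-body fields follow the `searchanalytics.query` schema; the actual API call is commented out because it needs an OAuth-authenticated service object (e.g. via google-api-python-client), and the site URL is a placeholder.

```python
# Sketch of the "raw" API side the MCP server hides: building the request
# body that searchanalytics.query expects, before any analysis can happen.

def build_query_body(start_date: str, end_date: str, row_limit: int = 1000) -> dict:
    """Request body for a query-level Search Analytics report."""
    return {
        "startDate": start_date,   # ISO dates, e.g. "2026-01-01"
        "endDate": end_date,
        "dimensions": ["query"],   # group result rows by search query
        "rowLimit": row_limit,
        "startRow": 0,             # pagination is manual: advance by row_limit
    }

body = build_query_body("2026-01-01", "2026-01-31")

# With credentials configured, the call would then look roughly like:
# service = googleapiclient.discovery.build("searchconsole", "v1", credentials=creds)
# response = service.searchanalytics().query(
#     siteUrl="sc-domain:example.com", body=body).execute()
```

With MCP, all of this (plus the filtering that follows) is what Claude assembles for you from a plain-language question.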

In my experience, the real bottleneck in SEO analysis has never been data access. Data exists, often in abundance. The bottleneck is the ability to ask the right questions at the right time, without having to build the infrastructure first to get the answer. The MCP server solves exactly this problem.

The 12 available tools: complete feature map

The MCP server for Google Search Console (version 2, remote edition) exposes 12 tools that cover the entire spectrum of operations available through the GSC API. I've grouped them into three functional blocks.

Performance and keyword analysis

| Tool | Function | Typical use case |
| --- | --- | --- |
| list_properties | List all connected GSC properties | Multi-site management, property selection |
| get_search_analytics | Search metrics with advanced filters | Keyword analysis, CTR trends, period comparisons |
| get_performance_overview | Synthetic performance overview | Quick reports, current-situation snapshots |
| find_keyword_opportunities | Identify keywords with untapped potential | Low-hanging fruit, keywords in position 8-20 |
| get_top_pages | Best-performing pages | Identify winning content and success patterns |

Indexing and sitemap

| Tool | Function | Typical use case |
| --- | --- | --- |
| inspect_url | Indexing status of a specific URL | Diagnose unindexed pages, verify fixes |
| get_sitemaps | List of submitted sitemaps | Audit sitemap structure, verify coverage |
| submit_sitemap | Submit a new sitemap | Deploy new sections, structure updates |
| request_indexing | Request indexing of a URL | Accelerate indexing of new or updated content |

Segmentation and export

| Tool | Function | Typical use case |
| --- | --- | --- |
| get_device_comparison | Mobile vs desktop comparison | Mobile-first priorities, device-specific diagnosis |
| get_country_breakdown | Traffic distribution by country | International SEO, geographic markets |
| export_analytics | Export data as CSV/JSON | Reporting, integration with other tools |

The real power isn't in the individual tools, but in chaining them together. Claude can call get_search_analytics, filter by position, cross-reference the results with inspect_url to verify the indexing status of the pages it found, and return a prioritized report in a single conversational flow.

Setup: from Google Cloud project to first connection

Setup requires three components: Google Cloud OAuth credentials, the deployed MCP server, and configuration in Claude Desktop or Claude Code. It's not complex, but each step must be done correctly.

Requirements

| Component | Detail | Cost |
| --- | --- | --- |
| Claude Code Max | Subscription with MCP access | $100/month (or $200/month for teams) |
| Google Cloud Console | Project with OAuth 2.0 configured | Free |
| MCP server | Deployed on Railway, Docker, or a VPS | Variable (Railway from ~$5/month) |
| GSC property | At least one verified property | Free |

Key steps

  1. Create a Google Cloud Console project and enable the Search Console API.
  2. Configure OAuth 2.0 credentials (type "web application") with the correct redirect URI.
  3. Deploy the MCP server—Railway is the recommended option for simplicity. One-click deploy with environment variables GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, and GOOGLE_REDIRECT_URI.
  4. Authenticate via browser: the server generates a Google login link. You authorize access to Search Console and receive a personal API key.
  5. Configure Claude Desktop or Claude Code by adding the MCP server with the URL and your API key in the MCP configuration file.
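
For step 5, the MCP configuration entry typically looks something like the snippet below. The server name, deployment URL, and API key are placeholders, and the exact schema varies between Claude Desktop and Claude Code versions, so treat this as an orientation sketch and check the repository's documentation for the current format.

```json
{
  "mcpServers": {
    "search-console": {
      "url": "https://your-deployment.up.railway.app/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_PERSONAL_API_KEY"
      }
    }
  }
}
```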

The entire procedure takes 15-20 minutes the first time. Subsequent authentications are automatic. The reference repository (AminForou/google-search-console-mcp-v2 on GitHub) includes detailed documentation for each deployment option.

Real case: Roxmir, computer hardware store

To show what becomes possible with this setup, I'll use real data from a site I manage: Roxmir, a computer hardware store. Numbers from the last 3 months in Search Console:

| Metric | Value |
| --- | --- |
| Total clicks | 40,500 |
| Total impressions | 1,660,000 |
| Average CTR | 2.4% |
| Average position | 8.1 |

The site has an interesting query profile dominated by informational and commercial keywords in the smartphone, tablet, and PC-component categories. But it also has significant structural issues: 106 pages with redirects, 61 pages returning 404, 37 pages crawled but not indexed, and Core Web Vitals issues on mobile (LCP over 2.5 seconds on 158 URLs).

It's the type of site where data quantity exceeds manual analysis capacity. And it's exactly where the Search Console MCP server delivers maximum value.

Workflow 1: performance analysis in natural language

The first workflow I test on every site is performance analysis. In traditional Search Console, this means: open the Performance report, add filters one at a time, export to CSV, open a spreadsheet, sort, filter again, and finally have an answer. With the MCP server, the question becomes literally a question.

Example: high-potential keywords with low CTR

I ask Claude: "Which keywords have an average position under 10 but a CTR below 2%?" The model calls get_search_analytics with the appropriate filters and returns the results. On Roxmir's data, interesting patterns emerge immediately, including keywords whose CTR is low relative to their position, not just in absolute terms.

The keyword "smartphone 2026", for example, generates 2,412 impressions and 280 clicks at an average position close to 1. That's a CTR of roughly 11.6%, well below what a top-ranking result should earn. Why? Maybe the title tag isn't optimized, maybe a featured snippet steals clicks, maybe the meta description doesn't convince. In traditional Search Console, reaching this observation takes at least three manual steps. With the MCP server, it's the answer to a single question.
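
To illustrate the filtering logic behind a question like "position under 10 but CTR below 2%", here is a plain-Python sketch applied to GSC-style rows. The sample data is invented, not Roxmir's export, and this is my reconstruction of the filter, not the server's actual code.

```python
# Illustration of the filter behind the question above, over rows shaped
# like the GSC API's output (clicks, impressions, position per query).

sample_rows = [
    {"query": "gaming laptop deals", "clicks": 15, "impressions": 1200, "position": 6.5},
    {"query": "rtx 5060 benchmark", "clicks": 90, "impressions": 1000, "position": 3.1},
    {"query": "cheap ssd 2tb", "clicks": 10, "impressions": 400, "position": 14.2},
]

def low_ctr_page_one(rows, max_position=10.0, max_ctr=0.02):
    """Queries ranking on page 1 whose CTR is still below the threshold."""
    hits = []
    for row in rows:
        ctr = row["clicks"] / row["impressions"] if row["impressions"] else 0.0
        if row["position"] < max_position and ctr < max_ctr:
            hits.append({**row, "ctr": round(ctr, 4)})
    # Highest-impression opportunities first
    return sorted(hits, key=lambda r: r["impressions"], reverse=True)

for hit in low_ctr_page_one(sample_rows):
    print(f'{hit["query"]}: pos {hit["position"]}, CTR {hit["ctr"]:.1%}')
```

In the sample data, only "gaming laptop deals" passes both filters: page-1 position but a CTR of 1.25%.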

Example: temporal trend on specific keywords

Second scenario: "How has traffic for RAM price-related keywords evolved over the last month?" The MCP server queries get_search_analytics with time and keyword filters. Roxmir has several keywords in this area—"RAM prices" (239 clicks), "RAM price increase" (223 clicks), "RAM prices skyrocketing" (207 clicks)—and the aggregated trend tells a story that individual data points don't show.

The real advantage isn't speed (though it's huge). It's the ability to ask questions you wouldn't have formulated if you had to manually construct the filter for each hypothesis.

Workflow 2: identify keyword opportunities in 30 seconds

The find_keyword_opportunities tool is perhaps the most valuable in the entire set. It identifies keywords where the site has visibility but isn't capitalizing—the classic low-hanging fruit every SEO specialist searches for.

The logic is simple: keywords with average position between 8 and 20 (page 1-2 of Google), significant impressions, but low clicks relative to potential. These are keywords where improving the title tag, meta description, or content could move the needle significantly.
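
One plausible version of that logic in code looks like the sketch below. This is a heuristic reconstruction, not the tool's real scoring: target_ctr is an assumed "achievable" CTR for a well-optimized page-1/2 result, and the sample rows are invented.

```python
# Heuristic sketch of the position 8-20 opportunity logic described above.
# NOT find_keyword_opportunities' actual implementation; it just makes the
# idea concrete: estimate the clicks each mid-ranking keyword leaves behind.

def keyword_opportunities(rows, min_pos=8.0, max_pos=20.0,
                          min_impressions=500, target_ctr=0.05):
    opportunities = []
    for row in rows:
        if not (min_pos <= row["position"] <= max_pos):
            continue  # only page 1-2 positions qualify
        if row["impressions"] < min_impressions:
            continue  # ignore keywords with negligible visibility
        # Clicks the keyword "should" earn minus what it actually earns
        missed = row["impressions"] * target_ctr - row["clicks"]
        if missed > 0:
            opportunities.append({**row, "missed_clicks": round(missed)})
    return sorted(opportunities, key=lambda r: r["missed_clicks"], reverse=True)

rows = [
    {"query": "mechanical keyboard guide", "clicks": 12, "impressions": 2400, "position": 12.5},
    {"query": "ram prices", "clicks": 239, "impressions": 3100, "position": 4.2},
    {"query": "usb-c hub 8k", "clicks": 4, "impressions": 300, "position": 15.8},
]
```

On these invented rows, only "mechanical keyboard guide" qualifies: "ram prices" already ranks above position 8, and "usb-c hub 8k" has too few impressions to matter yet.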

On Roxmir's data, this tool would have immediately identified keywords like "best tablet 2026"—1,596 impressions but only 182 clicks, with growing search volume. Or "redmi note 15 5g review"—long-tail keyword with high purchase intent and relatively low competition.

Doing this analysis manually on a site with hundreds of keywords requires 30 to 60 minutes of spreadsheet work. With the MCP server, it's a single request that returns prioritized, actionable results.

As Will Scott wrote on Search Engine Land about a similar setup for paid-organic analysis: the same analysis that took an entire afternoon gets completed in under two minutes. The acceleration factor isn't 2x or 5x. It's an order of magnitude.

Workflow 3: indexing diagnosis and coverage

This is the workflow where the MCP server delivers maximum added value compared to the traditional dashboard. Search Console indexing problems are notoriously hard to diagnose: the dashboard tells you what happened, but rarely why or how to prioritize fixes.

Roxmir has a typical picture of a growing commercial site:

| Problem | Source | Pages |
| --- | --- | --- |
| Redirect (301/302) | Website | 106 |
| Not found (404) | Website | 61 |
| Excluded by noindex | Website | 20 |
| Alternate canonical | Website | 11 |
| Crawled, not indexed | Google systems | 37 |
| Duplicate without canonical | Website | 2 |
| Discovered, not indexed | Google systems | 1 |

In traditional Search Console, each row is a link that opens a URL list. To understand each one's impact, you have to manually cross with performance data. Those 106 redirects? Some are zero-traffic legacy pages, others might be pages with external links losing equity at every hop. Without cross-reference, you can't prioritize.

With the MCP server, the question becomes: "Which of the redirected pages received impressions in the last 3 months?"

Claude calls inspect_url to verify the status of each flagged page, then crosses with get_search_analytics to understand if those pages still generate traffic. The result is a prioritized list: first the redirects losing traffic, then those with external links, then the rest.
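
Reduced to plain Python, the prioritization step of that cross-reference looks roughly like this. URLs and numbers are placeholders, and this is a sketch of the logic, not the MCP server's code.

```python
# Rank redirected URLs by the impressions they still earned in Search
# Analytics, so high-visibility redirects get fixed first. A URL absent
# from the performance data counts as zero impressions.

redirected_urls = [
    "https://example.com/old-gpu-guide",
    "https://example.com/old-landing",
    "https://example.com/legacy-promo",
]

# Page-level rows, as returned by a query with dimension "page"
page_rows = [
    {"page": "https://example.com/old-gpu-guide", "impressions": 5400, "clicks": 120},
    {"page": "https://example.com/old-landing", "impressions": 30, "clicks": 0},
]

def prioritize_redirects(urls, rows):
    impressions = {r["page"]: r["impressions"] for r in rows}
    return sorted(
        ((url, impressions.get(url, 0)) for url in urls),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

Here the old GPU guide, still earning thousands of impressions, surfaces at the top of the fix list, while the zero-traffic legacy promo drops to the bottom.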

The same approach applies to the 37 crawled-but-unindexed pages. Why did Google crawl them but decide not to index them? Duplicate content? Insufficient quality? A technical issue? The MCP server can't answer these questions directly (GSC data doesn't allow it), but it can tell you which of those pages were previously indexed, whether they had traffic, and when they were de-indexed: information that in the dashboard would require hours of manual navigation.

Workflow 4: device comparison and mobile-first decisions

The get_device_comparison tool segments performance data by device type. In a mobile-first world, this segmentation is critical for operational decisions.

Roxmir has a Core Web Vitals problem on mobile: LCP over 2.5 seconds on 158 URLs and CLS over 0.1 on 86 URLs. But what's the impact on organic performance? Traditional Search Console shows CWV in one report and performance in another. The cross-reference is manual.

With the MCP server, I ask Claude: "Compare CTR and average position mobile vs desktop for the top 20 keywords." The model calls get_device_comparison and returns a view that in the dashboard would require two separate exports and a merge in a spreadsheet.

The pattern that most often emerges in the commercial sites I manage is predictable but undervalued: navigational keywords perform equally on mobile and desktop, but commercial keywords (comparisons, reviews) have a 15-30% mobile CTR gap. This gap translates directly to lost revenue.
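
The kind of per-query comparison get_device_comparison enables can be reduced to plain Python over (query, device) rows, as in the sketch below. The sample numbers are invented, and the device labels mirror the GSC API's convention; the gap computation itself is my illustration.

```python
# Relative mobile-vs-desktop CTR gap per query, computed from rows grouped
# by (query, device). A negative gap means mobile underperforms desktop.

rows = [
    {"query": "best gaming mouse", "device": "DESKTOP", "clicks": 60, "impressions": 1000},
    {"query": "best gaming mouse", "device": "MOBILE", "clicks": 55, "impressions": 1300},
]

def ctr_gap_by_query(rows):
    ctr = {}
    for r in rows:
        ctr[(r["query"], r["device"])] = r["clicks"] / r["impressions"]
    gaps = {}
    for (query, device), mobile_ctr in ctr.items():
        if device != "MOBILE":
            continue
        desktop_ctr = ctr.get((query, "DESKTOP"))
        if desktop_ctr:
            gaps[query] = (mobile_ctr - desktop_ctr) / desktop_ctr
    return gaps
```

On the sample rows, the query's mobile CTR trails desktop by about 29%, squarely inside the 15-30% gap range described above.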

The MCP server makes this type of analysis immediate. It's no longer a quarterly analysis you do when you have time. It becomes a weekly check you do in 30 seconds. And analysis frequency changes decision quality.

The comparison: traditional Search Console vs Search Console + MCP

I've used Search Console for over a decade. It's an exceptional tool, but its limitation is the interface: it's designed to show data, not answer questions. The MCP server doesn't replace it—it makes it queryable.

| Dimension | Traditional GSC | GSC + MCP |
| --- | --- | --- |
| Time to insight | 15-30 minutes (export + analysis) | 30-90 seconds (direct question) |
| Cross-referencing | Manual (multiple exports + spreadsheet) | Automatic (tool chaining) |
| Keyword opportunity analysis | Export → filters → manual sorting | find_keyword_opportunities + natural-language filters |
| Indexing diagnosis | URL list → one-by-one inspection | Batch inspect_url + performance cross-reference |
| Device segmentation | Two separate reports, manual merge | get_device_comparison directly |
| Automation | None (manual interface) | Repeatable, schedulable workflows |
| Learning curve | Low (intuitive GUI) | Medium (requires setup and MCP familiarity) |
| Additional cost | None | Claude Code Max + MCP server hosting |

The point isn't that one is better than the other. It's that they operate on different timescales. Traditional Search Console excels at a quick check or exploring a single data point. The MCP server is superior when you need to analyze patterns, cross dimensions, or process hundreds of keywords.

The complete system: Search Console MCP + Screaming Frog MCP

In my article on Screaming Frog MCP and Claude Code I described how to automate on-site technical audits. The MCP server for Search Console is the complementary piece that completes the picture.

The logic is simple but powerful:

  • Search Console MCP: tells you what happens in the SERP—clicks, impressions, positions, indexing
  • Screaming Frog MCP: tells you what happens on the site—structure, technical errors, content, internal links
  • Claude Code: crosses the two datasets and produces insights that neither tool alone could generate

Concrete example: you have 37 crawled but unindexed pages (GSC data). Why? Screaming Frog can tell you 20 of those pages have thin content (under 300 words), 10 have canonical pointing elsewhere, and 7 are orphaned (no internal links). With both MCP servers active, Claude does this cross automatically.
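
As a sketch of that cross-reference, the snippet below labels each crawled-but-unindexed URL (from GSC) with a likely on-site cause (from crawl data). The field names, thresholds, and URLs are illustrative assumptions, not either tool's actual schema.

```python
# Join GSC's "crawled, not indexed" list with crawl data (word count,
# canonical target, internal inlinks) to assign a likely cause per URL.

def classify_unindexed(urls, crawl_data):
    causes = {}
    for url in urls:
        page = crawl_data.get(url, {})
        if page.get("word_count", 0) < 300:
            causes[url] = "thin content"
        elif page.get("canonical") and page["canonical"] != url:
            causes[url] = "canonicalized elsewhere"
        elif page.get("inlinks", 0) == 0:
            causes[url] = "orphan page"
        else:
            causes[url] = "needs manual review"
    return causes

crawl_data = {
    "https://example.com/a": {"word_count": 120, "canonical": "https://example.com/a", "inlinks": 3},
    "https://example.com/b": {"word_count": 800, "canonical": "https://example.com/hub", "inlinks": 5},
    "https://example.com/c": {"word_count": 600, "canonical": "https://example.com/c", "inlinks": 0},
}

causes = classify_unindexed(list(crawl_data), crawl_data)
```

With both MCP servers active, Claude performs this kind of join conversationally instead of you writing it by hand.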

Will Scott on Search Engine Land called this type of setup an "SEO command center." It's an apt definition, but I'd take it further: it's a nervous system for SEO. Data flows from different sources, gets processed contextually, and produces actions—not just reports.

The vision is a closed loop: GSC identifies the problem in the SERP → Screaming Frog diagnoses the on-site cause → Claude suggests (or applies) the fix → GSC verifies impact after recrawl. Continuous audit, not point-in-time.

Limitations, risks, and when not to use this setup

Every tool has limitations. Being transparent about what this setup can't do is as important as showing what it can do.

Technical limitations

  • Data latency: Search Console data has a 24-48 hour lag. It's not a real-time monitoring tool.
  • API rate limits: the GSC API has call limits. On very large sites (100K+ pages) some complex queries may require pagination and more time.
  • Data granularity: the API doesn't expose all data visible in the dashboard. Some reports (like the Links report) aren't available via API.
  • OAuth maintenance: Google tokens expire. The MCP server handles refresh automatically, but occasionally may require re-authentication.
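
On the pagination point: the Search Analytics API caps each response (historically at 25,000 rows per request), so large pulls loop over startRow. A minimal version of that loop, with the API call stubbed out so the logic is self-contained, might look like this:

```python
# Minimal pagination loop for a row-capped API. query_page is any callable
# wrapping searchanalytics.query for a given start row; here it's stubbed
# with an in-memory list so the loop itself can run standalone.

def fetch_all_rows(query_page, row_limit=25000):
    """Call query_page(start_row, row_limit) until a short (or empty) page."""
    rows, start = [], 0
    while True:
        page = query_page(start, row_limit)
        rows.extend(page)
        if len(page) < row_limit:  # last page reached
            return rows
        start += row_limit

def fake_page(start, limit):
    data = list(range(7))  # pretend the full dataset has 7 rows
    return data[start:start + limit]

all_rows = fetch_all_rows(fake_page, row_limit=3)
```

The stub returns pages of three rows, so the loop makes three calls before the short final page stops it.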

Operational limitations

  • Cost: Claude Code Max ($100/month) plus MCP server hosting. Not justified if you manage a single small site.
  • Learning curve: initial setup requires familiarity with Google Cloud, OAuth, and MCP. Not a plug-and-play tool.
  • Verification needed: Claude's output should always be verified. The AI can cross data at light speed, but strategic interpretation remains your responsibility.

When not to use it

Don't use it if your site has 20 pages and 500 impressions per month: traditional Search Console is more than sufficient. Don't use it if you need real-time data; use other tools for that. And don't use it if you lack the skills to interpret the results: automating data extraction doesn't automate strategy.

Who it's ideal for and competency requirements

This setup isn't for everyone, and it doesn't pretend to be. Here's the profile of those who benefit most.

Ideal profiles

  • SEO specialists with multi-site portfolios: query chaining across different properties is where time savings become dramatic.
  • Agencies with recurring reporting: repeatable workflows across dozens of clients transform hours of work into minutes.
  • Technical teams integrating SEO into CI/CD: the ability to query GSC programmatically enables automatic pre- and post-deploy monitoring.
  • Consultants working on large sites: ecommerce with thousands of pages and keywords where manual analysis doesn't scale.

Prerequisites

  • Search Console familiarity: you need to know what you're looking for before automating the search.
  • SEO metrics understanding: CTR, average position, impressions, coverage. The tool speeds up analysis; it doesn't replace it.
  • Terminal comfort: Claude Code is a CLI environment. There's no GUI.
  • Google Cloud and OAuth basics: initial setup requires configuring a GCP project.

If your current workflow already includes regular Search Console exports, spreadsheet analysis, and manual reports, this setup is the natural next step. If instead you check Search Console once a month for a quick look, the investment probably isn't worth it.

From dashboard to conversation: the future of SEO analysis

The evolution is clear: SEO tools are shifting from passive graphical interfaces to active conversational systems. It's not a fad. It's a paradigm shift in how humans interact with data.

Search Console remains an exceptional tool. But its interface was designed in an era when the only way to query data was filter by filter, tab by tab. The MCP server doesn't replace it—it frees it from interface constraints and makes it accessible at the speed of thought.

With Screaming Frog MCP analyzing the site, Search Console MCP monitoring the SERP, and Claude Code crossing data and suggesting actions, the SEO professional can finally focus on what really matters: strategy, interpretation, and high-impact decisions.

Those who master these tools today have an enormous competitive advantage. Not because AI replaces professional judgment—it doesn't. But because it eliminates the mechanical hours of work that separate the question from the answer.

If you want to explore how to integrate these tools into your SEO workflow, or need support with setup, reach out. I'm one of the first professionals in Italy working with this stack and can help you implement it in your specific context.

Frequently Asked Questions

What is the MCP server for Google Search Console?

It's a server implementing Anthropic's MCP protocol that allows Claude Code to interact directly with Google Search Console data through 12 specialized tools: performance analysis, keyword opportunities, indexing status, device comparison, and data export.

Do I need a Google Cloud project to use it?

Yes: you need to create a project in Google Cloud Console and configure OAuth 2.0 credentials. The Google Cloud project itself is free. The OAuth credentials let the MCP server authenticate with the Search Console API on your behalf.

Is the data real-time?

No. Google Search Console data natively has a 24-48 hour lag, regardless of access method (dashboard, API, or MCP). This is a Google limitation, not an MCP limitation. For real-time monitoring, use different tools.

Can I query multiple properties?

Yes, the list_properties tool shows all the GSC properties you have access to. You can query any of them in the same session, which makes this setup particularly valuable for agencies and consultants managing dozens of sites.

How is this different from using the GSC API directly?

The GSC API requires programmatic authentication, structured JSON queries, and manual pagination management. The MCP server adds an abstraction layer: Claude interprets your natural-language questions, chooses the appropriate tool, configures the parameters, and presents the results readably. The underlying data is the same, but access is radically faster.

Can I use it together with the Screaming Frog MCP server?

Yes, both MCP servers can be active simultaneously in Claude Code. This lets you cross-reference SERP data (from Search Console) with on-site data (from Screaming Frog) in a single conversational flow, creating a complete SEO analysis system.

About the author

Claudio Novaglio

SEO Specialist, AI Specialist, and Data Analyst with over 10 years of experience in digital marketing. I work with companies and professionals in Brescia and across Italy to increase organic visibility, optimize advertising campaigns, and build data-driven measurement systems. Specialized in technical SEO, local SEO, Google Analytics 4, and the integration of artificial intelligence into marketing processes.
