SEO + AI Lab

DataForSEO + Claude Code MCP: Market SEO Data Directly in Your Terminal

Claudio Novaglio

DataForSEO via MCP inside Claude Code: market data, keywords, backlinks, and SERPs available on demand – no scripts, no dashboard, no open tabs.

Over the past months I've documented how I use Screaming Frog via MCP for technical audits and how Claude Code skills make workflows repeatable. The missing piece was market data. Search volumes, SERP rankings, backlink profiles, competitive analysis – all the work that normally requires Semrush, Ahrefs, or similar tools.

DataForSEO is a pay-as-you-go API provider that exposes virtually all the SEO data you need. No 200€/month subscriptions for features you use 10% of the time. You pay for what you consume. And with their MCP server, this data becomes accessible directly from Claude Code – same workflow, same conversation, zero context switching.

In this article I explain how I'd integrate DataForSEO MCP into my stack, what skills I'd build to orchestrate the flow, and most importantly: I show concrete use cases where this combination radically changes how I work.

DataForSEO MCP: what it is and what it offers

DataForSEO released an official MCP server that exposes their APIs as tools usable by AI models. In practice: instead of writing Python code to call the /v3/serp/google/organic/live endpoint, you ask Claude "check the rankings for these keywords" and Claude calls the API, receives the data, analyzes it, and returns insights.

The server installs with a single command: npx dataforseo-mcp-server. It supports three communication protocols – stdio for local use with Claude Code, HTTP for web integrations, and SSE for legacy client compatibility. There's also a remote endpoint at mcp.dataforseo.com for those who don't want to install anything locally.

The 12 available APIs

DataForSEO's full toolkit covers virtually every aspect of SEO analysis. You don't need all of them – in fact, filtering them down is critical to avoid saturating Claude's context window. Here are the main ones with their practical use.

| API | What it does | When I use it |
| --- | --- | --- |
| SERP API | Real-time SERPs from Google, Bing, Yahoo | Rank monitoring, intent analysis, SERP features |
| Keywords Data | Search volumes, CPC, keyword suggestions | Keyword research, content planning, gap analysis |
| DataForSEO Labs | Proprietary database: keywords per domain, domains per keyword, intersections | Deep competitive analysis |
| Backlinks | Link profile, referring domains, anchor text | Backlink audit, link building strategy |
| OnPage | Crawl and technical analysis via API | Technical audit, alternative/complement to SF |
| Domain Analytics | Traffic estimates, technology stack, whois | Quick competitive overview |
| Business Data | Google My Business, reviews, local data | Local SEO, reputation analysis |
| Content Analysis | Brand monitoring, sentiment | Brand monitoring, digital PR |
| AI Optimization | Benchmark LLM responses on specific queries | Optimization for AI search (GEO/AEO) |
| Merchant | E-commerce data, prices, products | E-commerce SEO, pricing analysis |
| App Data | ASO, app store rankings | Mobile app optimization |
| Content Generation | Text generation via AI | I don't use this – I prefer generating internally |

Pay-as-you-go: the economic model that makes sense

DataForSEO's strength versus Semrush or Ahrefs isn't data quality (they're comparable). It's the pricing model. With Semrush I spend 120-230€/month for a subscription that includes thousands of features I don't touch. With DataForSEO I pay per API call: a SERP query costs roughly 0.002€. A keyword analysis with volumes and CPC costs 0.004€ per keyword.

For a consultant running 4-5 projects per month, DataForSEO's monthly cost lands around 30-50€ versus 120-230€ for a tool with a dashboard. If that consultant already uses Claude Code Max ($100/month), the MCP integration has no additional infrastructure costs – the MCP server is free and open source.

Setup: how I configure DataForSEO MCP in Claude Code

Configuration follows the same pattern as the Screaming Frog MCP server I documented in the first article of this series. You add a block to the .mcp.json file in your project root.

Base configuration

The server launches via npx, passing API credentials as environment variables. DataForSEO credentials are separate from your dashboard login โ€” you generate them from the API section of your control panel.

The most important part of configuration is the ENABLED_MODULES parameter. DataForSEO exposes hundreds of tools โ€” if you load them all, they consume a significant slice of Claude's context window and slow down every interaction. Filtering to only the modules you actually use is essential.

In my case I'd activate five modules: serp, keywords_data, dataforseo_labs, backlinks, and business_data. This covers 95% of an SEO consultant's use cases without wasting context on APIs I won't use (merchant, app data, content generation).
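As a sketch, the .mcp.json block could look like the following. Treat the environment variable names and the module identifiers as assumptions to verify against the dataforseo-mcp-server README – the exact spelling and casing may differ between versions:

```json
{
  "mcpServers": {
    "dataforseo": {
      "command": "npx",
      "args": ["-y", "dataforseo-mcp-server"],
      "env": {
        "DATAFORSEO_USERNAME": "your_api_login",
        "DATAFORSEO_PASSWORD": "your_api_password",
        "ENABLED_MODULES": "SERP,KEYWORDS_DATA,DATAFORSEO_LABS,BACKLINKS,BUSINESS_DATA"
      }
    }
  }
}
```

The credentials go in env rather than args so they never appear in process listings, and ENABLED_MODULES does the context-window filtering described above.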

Permissions in settings.local.json

Like any MCP, you need to authorize the tools in your project's permissions file. Each tool goes on the allow list โ€” this is a Claude Code security mechanism that prevents MCP servers from running operations you haven't explicitly approved.

In practice: the first time Claude tries to use a DataForSEO tool, you get asked for confirmation. You can approve it once or add it permanently to permissions. After the initial setup phase, the flow is transparent.
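A hedged example of what the permissions entry can look like. The pattern mcp__&lt;server&gt;__&lt;tool&gt; is Claude Code's convention for MCP tools, but the specific tool names below are illustrative – copy the real ones from the confirmation prompts you get on first use:

```json
{
  "permissions": {
    "allow": [
      "mcp__dataforseo__serp_organic_live_advanced",
      "mcp__dataforseo__keywords_data_google_ads_search_volume"
    ]
  }
}
```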

Use case 1: keyword research for a new content cluster

Concrete scenario: I'm planning a new section of the site on a specific topic โ€” say "SEO for real estate in Brescia". Before writing any page, I need data: which keywords have volume, what intent they signal, who ranks already.

The workflow without DataForSEO MCP

  1. I open Semrush or Ubersuggest.
  2. I search "SEO real estate Brescia" and variations.
  3. I export results as CSV.
  4. I open the CSV in a spreadsheet.
  5. I filter by volume, KD, intent.
  6. I manually group keywords into clusters.
  7. I open Claude (web) and paste the data to help with content planning.
  8. I go back to the editor to start writing.

Eight steps, three different tools, at least 40 minutes. Every time I need additional data (another volume check, a SERP, a competitor), I start over from step one.

The workflow with DataForSEO MCP

In Claude Code with the MCP server active, the workflow becomes a conversation.

I ask: "Run keyword research for real estate SEO in Brescia. Find keywords with volume, group by intent, show me who ranks in the top 10."

Claude executes sequentially: calls Keywords Data for seed keyword volumes, then keyword_suggestions to expand the list, then SERP API on the main keywords to understand who ranks. All without me lifting a finger.

The result comes back as structured analysis: keyword clusters grouped by intent (informational, transactional, navigational) with aggregated volumes, estimated difficulty, and a map of competitors with their ranking pages.

From that point, I'm already inside Claude Code – I can move directly to content creation, editorial briefing, or page creation in the CMS. No context switching, no copy-pasting between tools.

Why it matters: keyword research as commodity vs. strategic analysis

Keyword research as a list of keywords with volumes is a commodity. The value is in interpretation: which clusters make sense for my site, where there's a gap I can fill, which existing pages I should optimize instead of creating new ones. With the data available in conversation, Claude can cross-reference keyword research with the current site structure and suggest where to intervene – something no standalone keyword research tool does.

Use case 2: competitive analysis with real data

Typical example: a client asks me "why does competitor X rank better than us on these keywords?". The answer requires data from at least three sources: SERP rankings, backlink profile, and content structure.

The integrated workflow

With DataForSEO MCP, I do it all from Claude Code.

Step 1 – SERP positions: I ask Claude to compare my site's and the competitor's rankings on a target keyword set. DataForSEO Labs has a specific endpoint for domain intersections – it shows keywords where both rank and where only one is present.

Step 2 – Backlink gap: The Backlinks API returns both domains' link profiles. Claude compares: number of referring domains, DR distribution, dominant anchor text, dofollow vs nofollow ratio. It identifies sites linking to the competitor but not to me.

Step 3 – Content analysis: Using SERP data, Claude analyzes the competitor's ranking pages – length, heading structure, keyword density, SERP features captured (featured snippet, people also ask, FAQ). This comes from the SERP itself, not the API directly.

Step 4 – Actionable summary: Claude produces a document that's not a data dump but analysis with priorities. "Your competitor has 3x your referring domains – focus on links. You have better content on these 5 keywords but less authority. You have no dedicated page for these 3 keywords – create one."

All of this happens in a single conversation. In a traditional workflow, the same insights require access to 2-3 different tools, multiple exports, and at least 2-3 hours of manual analysis.

Use case 3: complete audit combining Screaming Frog and DataForSEO

This is the scenario where having two MCPs in the same Claude Code session becomes truly powerful. Screaming Frog gives technical data (crawl, status codes, meta tags, canonical). DataForSEO gives market data (rankings, keywords, backlinks). Together, they cover the full spectrum of a professional SEO audit.

The combined workflow

Technical phase (Screaming Frog MCP): site crawl, export of main tabs, analysis of technical issues – missing titles, descriptions out of range, empty alt text, redirect chains, wrong canonicals. I already do this today; it's documented in the dedicated article.

Market data phase (DataForSEO MCP): for each important page, I verify current ranking on target keywords, volume for those keywords, direct competitors in SERP. I identify pages with growth potential (rankings 4-20) and pages at risk of decline.

Data crossover: this is where value explodes. Claude combines technical data with market data. Example: a page has a too-long title (technical) AND ranks 6th on a keyword with 500 monthly searches (market). The fix priority rises because it has real traffic impact. Conversely, a too-long title on a page that ranks for no relevant keywords has low priority.

Impact-prioritized report: the final report doesn't order issues by type (titles first, then descriptions, then images) but by estimated impact. "Fix this title because you'd gain 2 positions on an 800-search/month keyword" beats "fix these 15 empty alts on pages no one searches for".
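The crossover logic above can be sketched in a few lines of Python. This is an illustration, not the MCP output format – the field names, pages, and thresholds here are hypothetical:

```python
from typing import Optional

# Hypothetical crossover prioritization: combine a technical issue
# with the page's market data to estimate fix priority.

def fix_priority(position: Optional[int], monthly_volume: int) -> str:
    """Rank a technical fix by its likely traffic impact."""
    if position is None or position > 50:
        return "low"        # the page ranks for nothing relevant
    if position <= 10 and monthly_volume >= 100:
        return "maximum"    # defending real traffic
    if position <= 20 and monthly_volume >= 100:
        return "high"       # a fix could push it onto page one
    return "medium"

# Illustrative data: one issue per page, with market context attached.
issues = [
    {"page": "/servizi-seo", "issue": "title too long", "position": 6,    "volume": 500},
    {"page": "/old-page",    "issue": "empty alt",      "position": None, "volume": 0},
    {"page": "/seo-locale",  "issue": "missing meta",   "position": 14,   "volume": 300},
]

for i in issues:
    print(i["page"], "->", fix_priority(i["position"], i["volume"]))
```

The point is that the same technical issue gets a different priority depending on the market data attached to the page.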

Concrete example: prioritizing fixes on claudio-novaglio.com

On my site, a technical audit with Screaming Frog might find 10 technical issues – all classified as "fix these" in a traditional workflow. But with DataForSEO market data, I discover:

  • 3 issues affect pages ranking in top 10 on volume keywords โ†’ maximum priority.
  • 4 issues affect pages with no organic traffic โ†’ low priority; I can wait.
  • 2 issues affect pages with potential (rankings 11-20) โ†’ high priority; the fix could push them to first page.
  • 1 issue affects a page I decided to remove โ†’ I ignore it entirely.

Without market data, I'd treat all 10 issues equally. With DataForSEO, I invest time where it has the most impact.

Use case 4: rank monitoring after publication

When I publish a new page – a blog post, landing page, or new service page – I want to know how it ranks. Not a month from now when I check Search Console, but in the days right after publication.

The workflow

After publishing, I create a prompt (or better, a skill) that:

  1. Takes the target keyword list for the newly published page.
  2. Every 3-5 days, queries the DataForSEO SERP API for those keywords.
  3. Records my domain's position (if it appears in the top 100).
  4. Compares to the previous position and flags significant movements.
  5. If the page enters the top 20, suggests optimizations to push into top 10.
  6. If the page doesn't appear after 2 weeks, flags that content analysis might be needed.

The cost of this monitoring is minimal: one SERP query per keyword costs 0.002€. Monitoring 20 keywords with 4 checks over 30 days costs roughly 0.16€. For comparison, a dedicated rank tracking tool starts at 30-50€/month.
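The arithmetic is simple enough to sketch. The per-query price below is this article's rough figure, not an official DataForSEO rate:

```python
# Back-of-the-envelope cost estimate for SERP rank monitoring.
# The per-query price is the article's approximation, in EUR.
SERP_QUERY_COST_EUR = 0.002  # per keyword, per check

def monitoring_cost(keywords: int, checks: int) -> float:
    """Total cost of checking `keywords` rankings `checks` times."""
    return round(keywords * checks * SERP_QUERY_COST_EUR, 4)

print(monitoring_cost(20, 4))  # 20 keywords, 4 checks -> 0.16
```

The same function doubles as the basis for a budget cap: estimate first, execute only if the total stays under your threshold.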

Real value: fast feedback loop

Immediate post-publication monitoring isn't vanity. It's a feedback loop that lets me intervene while the page is still "fresh" for Google. If I see the page ranking 15th after a week, I can optimize the title, enrich the content, or add internal links while Google's still deciding where to place it. Waiting a month means missing that window of opportunity.

Use case 5: local SEO data for city pages

I have a network of geo-targeted pages on my site for municipalities in Brescia province. Each page targets keywords like "SEO consultant + city" or "SEO service + city". For these pages, DataForSEO's Business Data and localized SERPs are particularly useful.

What I'd do with local data

  • Localized SERPs: DataForSEO lets you specify location in SERP queries. I can see how my Brescia page ranks when searching from Brescia, and how my Desenzano page ranks when searching from Desenzano. Results can be very different.
  • Google Maps / Business Data: to understand local competition level. How many SEO consultants are in the area? Which have reviews? This helps me calibrate the content of city pages.
  • Keyword volume by location: the volume for "SEO consultant Brescia" differs from "SEO consultant Desenzano". DataForSEO provides localized volumes that let me prioritize which cities to target first.

Combined with Screaming Frog's technical crawl, I can verify each city page is technically perfect (SF) and strategically aligned with local search volume (DataForSEO).

Use case 6: AI search optimization

This is the most forward-looking use case. DataForSEO's AI Optimization API lets you test how major LLMs (ChatGPT, Claude, Gemini, Perplexity) respond to specific queries. In practice: I can ask "when a user asks ChatGPT who's the best SEO consultant in Brescia, what does it say?".

Why I care

AI search is growing. More users rely on ChatGPT, Perplexity, or Google SGE to find professional services. If my name doesn't appear in these models' responses, I'm losing visibility on an emerging channel.

With the AI Optimization API I can:

  • Monitor whether I'm cited in LLM responses for my target keywords.
  • Understand which sources the LLMs use to build their responses.
  • Identify content patterns that favor citation (structure, authority, data).
  • Compare my presence to competitors' presence in AI-generated responses.

It's still an evolving area, but having data now lets me build a baseline and measure progress over time.

The skill I'd build to orchestrate DataForSEO MCP

As I explained in my article on Claude Code skills for SEO, a skill is structured documentation that makes Claude's behavior repeatable and consistent across sessions. Without a skill, Claude is capable but unpredictable: each session produces slightly different results. With the skill, the process is identical every time.

Structure of the dataforseo-analysis skill

Here's how I'd structure the skill for competitive analysis with DataForSEO. This isn't theoretical – it follows the same pattern I use for the nano-banana skill (image generation) and the Screaming Frog audit skill.

Frontmatter: the trigger is crucial. I don't write "analyze SEO data with DataForSEO" (that would be a workflow summary, and Claude would use it as a shortcut). I write: "Use when performing competitive analysis, keyword research, backlink audits, or SERP monitoring. Triggers: competitive analysis, keyword research, backlink audit, rank monitoring."
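As a sketch, that frontmatter could look like this. The name/description frontmatter format is Claude Code's skill convention; the description text is the illustrative trigger wording from above:

```yaml
---
name: dataforseo-analysis
description: >
  Use when performing competitive analysis, keyword research, backlink
  audits, or SERP monitoring. Triggers: competitive analysis, keyword
  research, backlink audit, rank monitoring.
---
```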

Active modules: the skill specifies which DataForSEO modules to use for each analysis type. Keyword research → keywords_data + dataforseo_labs. Competitive analysis → dataforseo_labs + backlinks + serp. Rank monitoring → serp. This prevents Claude from wasting API calls on irrelevant modules.

Thresholds and decision criteria: this is the differentiator. Keyword volume: >100/month = relevant, >500/month = priority, >1000/month = strategic. SERP position: 1-3 = defend, 4-10 = optimize, 11-20 = push, 21-50 = evaluate, >50 = ignore. Backlink ratio: if competitor has >3x your referring domains, link building is priority #1. Without these numbers, Claude improvises each time.
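Encoded as code, those thresholds become unambiguous – a sketch in Python, where the numbers come straight from the skill text and only the function names are mine:

```python
def volume_tier(monthly_volume: int) -> str:
    """Classify keyword volume: >100 relevant, >500 priority, >1000 strategic."""
    if monthly_volume > 1000:
        return "strategic"
    if monthly_volume > 500:
        return "priority"
    if monthly_volume > 100:
        return "relevant"
    return "below threshold"

def serp_action(position: int) -> str:
    """Map a SERP position to an action: 1-3 defend, 4-10 optimize,
    11-20 push, 21-50 evaluate, >50 ignore."""
    if position <= 3:
        return "defend"
    if position <= 10:
        return "optimize"
    if position <= 20:
        return "push"
    if position <= 50:
        return "evaluate"
    return "ignore"
```

Whether the thresholds live in the skill's prose or in a helper like this, the effect is the same: Claude stops interpreting and starts applying.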

Output template: the skill defines the exact report format. Fixed sections, data order, detail level. "Executive summary (3 lines) → keyword gap table → backlink comparison → recommended actions with priority → DataForSEO cost estimate." Same format every time; readable even for clients.

Cost management: the skill includes a budget rule. "Before executing, estimate API call costs and show the quote. If the cost exceeds 5€, ask for confirmation." This prevents surprises on your DataForSEO bill – analyzing 500 keywords could cost 10-15€ if uncontrolled.

The key: thresholds, not adjectives

The pattern that works for all my skills is the same: concrete numbers, never adjectives. "High volume" means nothing – Claude interprets 100, 500, or 5000 differently depending on mood. "Volume >500/month" is unambiguous. Every threshold must be a number, every action must have a precise trigger, every output must follow a fixed template.

This is why skills transform Claude from "brilliant but inconsistent assistant" to "rigorous analyst who follows a process". The difference is obvious when you run 10 analyses for 10 different clients and the report format is identical every time.

The complete picture: three MCPs orchestrated by skills

Let's look at the complete picture of my Claude Code stack for SEO. Three MCP servers, each with its own domain, orchestrated by dedicated skills.

| MCP Server | Domain | Data provided | Associated skill |
| --- | --- | --- | --- |
| Screaming Frog | Technical audit | Crawl, status codes, meta tags, images, canonical, redirects | seo-audit |
| DataForSEO | Market data | SERPs, keywords, backlinks, competitors, local SEO | dataforseo-analysis |
| Nano Banana | Visual assets | Cover images, hero images, branded images | nano-banana |

Each MCP is specialized. None does everything, and they shouldn't. Power comes from combination: Screaming Frog tells me how the site is doing technically, DataForSEO tells me how it's doing on the market, Nano Banana generates visual assets when I create new pages. Claude Code is the brain orchestrating them.

How skills talk to each other

Skills aren't siloed. Here's a real flow where three skills chain together.

  1. A technical audit (seo-audit skill + SF MCP) finds 5 pages needing title optimization.
  2. The dataforseo-analysis skill steps in to check those 5 pages' rankings – we need to know if the title fix has real potential impact.
  3. Claude discovers that 2 of those pages rank 6-8 on keywords with 300+ monthly searches. Maximum priority.
  4. For those 2 pages, Claude rewrites the title and meta description.
  5. If either page lacks a proper cover image, the nano-banana skill generates a cover image aligned with the brand.
  6. Re-crawl with Screaming Frog to verify the technical fixes are live.
  7. After a few days, monitor SERP rankings via DataForSEO to measure impact.

In the traditional paradigm, this workflow would involve 4-5 different tools, just as many dashboards, and probably a full day of work. With three MCPs and the right skills, it's done in an hour.

Cost comparison: MCP stack vs. traditional stack

Let's put the numbers side by side. An SEO consultant running 5 projects per month.

| Item | Traditional stack | MCP stack + Claude Code |
| --- | --- | --- |
| Keyword/backlink tool | Semrush/Ahrefs: 120-230€/month | DataForSEO: ~30-50€/month (pay-as-you-go) |
| Technical crawler | Screaming Frog: ~22€/month (259€/year) | Same: ~22€/month |
| AI assistant | ChatGPT Plus: 20€/month (manual copy-paste) | Claude Code Max: ~92€/month (native integration) |
| Rank tracker | SE Ranking/AccuRanker: 30-50€/month | DataForSEO SERP API: ~5€/month |
| Image generation | Canva Pro/Midjourney: 12-30€/month | Nano Banana via OpenRouter: ~2€/month |
| Total monthly | 204-352€/month | ~151-171€/month |
| Manual work hours/month | 40-60 hours | 10-15 hours |

Direct subscription savings are meaningful but not huge. The real savings are in hours: 30-45 hours of manual work per month freed up. For a consultant billing 50-100€/hour, that's 1,500-4,500€/month in recovered capacity.

Limits and practical considerations

It's not all roses. There are aspects to think through seriously before adopting this stack.

Context window and complexity

The most concrete limit is Claude's context window. Every active MCP consumes space with tool definitions. With three MCPs active (SF + DataForSEO + Nano Banana), a significant chunk of context is already occupied before you start working. DataForSEO's ENABLED_MODULES parameter isn't optional – it's a necessity.

Data quality

DataForSEO's data is estimated, just like Semrush's and Ahrefs'. Search volumes are approximations; SERP positions vary based on geolocation and personalization. They're not absolute truth – they're useful indicators for informed decisions. For real performance data (impressions, clicks, CTR), you still need Google Search Console.

API costs on large volumes

Pay-as-you-go is advantageous when volumes are contained. But on a project with 50,000 keywords to monitor and competitive analysis across 20 domains, costs climb quickly. The skill with budget cap becomes essential to prevent billing surprises. For very large projects, a traditional tool's flat fee might still make economic sense.

Learning curve

Configuring MCP, writing skills, managing permissions, orchestrating multi-tool workflows: this isn't a 5-minute setup. It requires familiarity with Claude Code, the terminal, Git, and underlying SEO concepts. ROI is high, but the initial investment in time and expertise is real.

When it doesn't make sense

  • If you audit occasionally and that's it โ†’ traditional workflow is sufficient.
  • If you don't have access to the site source code you're analyzing โ†’ you lose the closed loop.
  • If you work with a non-technical team โ†’ the stack's complexity could be a bottleneck.
  • If your clients want dashboards and charts โ†’ DataForSEO has no UI; you need additional tools for visual reporting.

Conclusion: the SEO consultant as system integrator

What I'm building – and documenting across this three-article series – isn't a replacement for traditional SEO tools. It's an evolution in how you use them.

Screaming Frog remains the most complete crawler. DataForSEO has data comparable to Semrush. Claude Code has reasoning capabilities that no standalone SEO tool integrates natively. MCP is the glue that makes them work together frictionlessly. Skills are the operational instructions that make it all repeatable.

The SEO consultant's role in this paradigm shifts. It's no longer opening 5 tabs, exporting CSVs, and filling in reports. It's designing workflows, writing skills, configuring MCPs, and intervening on the strategic decisions that require human judgment. Repetitive manual work – crawling, exporting, baseline analysis, standard reporting – gets delegated to a system that does it better, faster, and without error.

This isn't science fiction. It's the setup I use daily, built piece by piece over the past year. And every new MCP I add to the stack makes the system a bit more complete and a bit more autonomous.

To see the first piece of the puzzle, read how I use Screaming Frog MCP for automated technical audits.

To understand how skills make all this repeatable, read my article on building SEO skills for Claude Code.

To discuss how this approach could work for your project, get in touch. I work with companies and professionals who want to take SEO to the next level.

Frequently Asked Questions

What is DataForSEO MCP and how do I install it?

DataForSEO MCP is a Model Context Protocol server that exposes DataForSEO's APIs as tools usable by Claude Code. Install it with "npx dataforseo-mcp-server" and configure it in your .mcp.json file with API credentials. It supports stdio, HTTP, and SSE protocols.

How much does DataForSEO cost compared to Semrush or Ahrefs?

DataForSEO uses a pay-as-you-go model: a SERP query costs roughly 0.002€, a keyword analysis costs 0.004€ per keyword. For a consultant with 4-5 monthly projects, monthly costs run 30-50€ versus 120-230€/month for a Semrush or Ahrefs subscription.

Can I use Screaming Frog MCP and DataForSEO MCP together?

Yes. Claude Code supports multiple MCP servers simultaneously. Screaming Frog provides technical data (crawl, status, meta), DataForSEO provides market data (rankings, keywords, backlinks). Combined in the same session, you get a complete SEO audit with impact-based prioritization.

Do I need a paid subscription to use DataForSEO MCP?

Yes. You need Claude Code Max ($100/month) and a DataForSEO account with API credentials. The MCP server itself is free and open source.

How do I keep API costs under control?

Best practice is to create a Claude Code skill that includes a budget cap. The skill requires Claude to estimate API call costs and show a quote before execution. If the cost exceeds a threshold (e.g. 5€), confirmation is requested. This prevents billing surprises on large-volume keyword analysis.

Which DataForSEO modules should I enable?

For most consultants, five modules suffice: serp (rankings), keywords_data (volumes/suggestions), dataforseo_labs (competitive analysis), backlinks (link profile), and business_data (local SEO). Filtering modules with ENABLED_MODULES is critical to avoid saturating Claude's context window.

Does DataForSEO replace Google Search Console?

No. DataForSEO provides estimated data (volumes, approximate rankings, backlinks). Google Search Console provides real performance data (impressions, clicks, CTR, actual positions). They complement each other: DataForSEO for research and competitive analysis; Search Console for measuring actual performance.

About the author

Claudio Novaglio


SEO Specialist, AI Specialist, and Data Analyst with over 10 years of experience in digital marketing. I work with companies and professionals in Brescia and across Italy to grow organic visibility, optimize advertising campaigns, and build data-driven measurement systems. Specialized in technical SEO, local SEO, Google Analytics 4, and the integration of artificial intelligence into marketing processes.

Want to improve your online results?

Let's talk about your project. The first consultation is free, no commitment.