AI agentic browsers and SEO: how Atlas, Comet, and Dia change the click pattern

Comet by Perplexity passed 10 million monthly active users in 2025 and reached a 1.9% global browser market share, per Perplexity's own numbers. OpenAI's Atlas launched on October 21, 2025. Dia by The Browser Company was acquired by Atlassian for $610 million in September 2025. Three products, one shared premise: the visitor on your site is no longer always human.
When I plan an SEO strategy, I always start with the same question. Who lands on the page, and what do they want to take away? For fifteen years, the answer was "a person who clicked on Google." Today, that person can be an AI agent running a task on its own, opening ten tabs in twenty seconds, reading the accessibility tree without looking at the rendered pixels. The AI browser changes the contract between site and visitor.
In short: AI agents read content in their sidebar instead of sending the user to your page. They lower classic organic CTR but raise the relative value of cited sites. SEO doesn't die, it redistributes. This article maps what Atlas, Comet, and Dia are, what really changes for SEO and GEO practitioners, and which concrete actions make sense today on the sites I work on. No predictions, no "death of SEO." Just verified data, an honest comparison, and a list of limits I declare upfront.
AI agentic browsers: what they are and why they matter
An AI agentic browser is a traditional browser (Chromium engine rendering HTML pages) plus an autonomous execution layer that lets a language model read, decide, and act on behalf of the user. Comet and Dia are confirmed Chromium forks. Atlas is Chromium-based, but OpenAI hasn't published full architectural details. All three integrate an AI assistant that sees page content, has memory of previous sessions, and can run sequences of actions: clicking, filling forms, navigating between sites, synthesizing the result.
The difference from AI chatbots with web access is the point of view. ChatGPT browsing reads a site through its dedicated crawlers (OAI-SearchBot, ChatGPT-User), which present an identifiable user-agent. Atlas in agent mode uses a browser session indistinguishable from any Chrome and triggers a sequence of actions on your domain the way a real user would. The site often doesn't know there's an agent on the other side.
Cloudflare's 2026 reports on AI traffic split non-human traffic into three categories: training crawlers (around 89% of AI traffic), search crawlers (about 8%), and agentic bots (just over 2%, the emerging category). Agentic browsers fall into the third bucket, and that's the one growing fastest. The scale signal that matters: OpenAI bots alone account for roughly 69% of all AI-driven traffic by volume.
Atlas, Comet, and Dia: the three agentic browsers compared
Launch status, business model, and agentic capabilities of the three products differ enough that a comparison helps before talking SEO impact. Data updated as of April 29, 2026.
| Browser | Launch | Status | Base pricing | Differentiator |
|---|---|---|---|---|
| Atlas (OpenAI) | Oct 21, 2025 | GA on macOS, Windows beta | Free + Plus/Pro $20/mo | Agent mode + ChatGPT integration |
| Comet (Perplexity) | Jul 9, 2025 | GA Windows/macOS/iOS/Android | Free + Pro $20 + Max | Background Assistants and Model Council |
| Dia (The Browser Co) | Jun 2025 beta, Oct 2025 GA | macOS only | Free + Pro $20/mo | Inherits Arc features, Atlassian integrations incoming |
Comet has the most complete public numbers. Perplexity has stated at least 10 million MAU in 2025, with monthly growth faster than Firefox and a 30-day retention around 48%. These are vendor self-reported numbers, not third-party audited. Atlas and Dia have never published official download or MAU numbers, so any estimate beyond "we don't know" should be taken with skepticism.
Atlas has a detail that changes the picture for advertisers. Several analyses published after launch flagged the risk that the browser drains Google Ads budgets: when the agent navigates and clicks for the user, it generates traffic that looks human but doesn't convert like a human would. The risk is double: ad budget burned plus attribution models polluted by false retargeting audiences.
Dia is the most peculiar case. The Browser Company built Arc, a niche but beloved browser among power users, then froze it in May 2025 to focus on Dia. In September 2025, Atlassian acquired the company for $610M. From November 2025, Dia started absorbing Arc's most-loved features (vertical sidebar, custom shortcuts). The strategic direction suggests it will become the default AI browser of the Atlassian ecosystem (Jira, Confluence, Trello), a B2B use case distinct from the consumer angle of Atlas and Comet.
How AI agentic browsers change click patterns
The first measurable effect is a redistribution of organic traffic, not its disappearance. Data from major tracking providers tells a two-act story: sites cited in AI answers gain higher-quality clicks, sites in the top 10 not cited lose substantial traffic. CTR is still a valid metric, but the main driver now is the citation, not the position.
Pew Research, on a sample of 68,000 monitored queries in 2025, measured a 46.7% relative drop in organic clicks when an AI Overview appears. Other industry data shows organic CTR falling from 1.76% to 0.61% on affected queries, a roughly 65% decline. Position 1 loses about 18% of CTR, position 2 loses 39%. These numbers are specific to Google's AI Overviews, but the principle extends to Atlas and Comet behavior: the agent reads the answer in its sidebar instead of sending you the user.
The flip side, per BrightEdge research relayed by Position Digital, is that sites actually cited in AI responses see a 35% click increase and a conversion rate up to five times higher than uncited top-10 sites. The data has obvious selection bias (cited sites tend to already be stronger brand authorities), but the signal direction is consistent with other independent sources. The point is clear: passive top-10 visibility is worth less than before, active citation visibility is worth a lot more.
On the data side, I've written about why Google's bounce-click theory doesn't explain the CTR collapse, with Similarweb numbers and an estimate of the impact on Italian SMBs.
The operational point: total traffic from AI search engines, per Similarweb, dropped about 15% between October 2025 and January 2026, after a fast climb in the months before. ChatGPT alone accounts for around 87% of all AI referral traffic. Translation: LLM referral traffic stays under 1% of total organic traffic, and it isn't growing fast. The CTR crisis isn't the "death of SEO," it's a metamorphosis that rewards cited sites and penalizes uncited ones.
AI browsers and user-agents: the identification problem
Here's the technical part that's less comfortable to explain to clients. Atlas and Comet, when navigating in agent mode for the user, send the same user-agent string as any Chrome on the user's OS. There's no identifiable "Atlas" or "Comet" token server-side. It's a vendor design choice, motivated by the agent "acting as the user," not as an independent crawler. On the site side, that means user-agent based defenses don't work. Behavioral fingerprinting (navigation patterns, action sequences, absence of mouse movement, execution speed) remains an alternative detection layer that Cloudflare Bot Management and DataDome use in production, but it's not within reach for a small site without an evolved CDN.
OpenAI tried to mitigate the problem by adopting HTTP Message Signatures (RFC 9421, standardized by the IETF in February 2024), which let a server cryptographically verify that a request really comes from an authenticated OpenAI agent. It's elegant but adopted by few sites, because it requires server-side infrastructure and only applies to user-agents declared as ChatGPT-User: it doesn't solve the masked-as-Chrome agent problem. Back in 2024, Robb Knight had already documented a public case in which Perplexity fetched pages with user-agents different from its declared ones, sparking controversy in the SEO industry about the reliability of vendor declarations.
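For reference, a signed request under RFC 9421 carries two extra header fields, as sketched below. The key id and signature bytes are placeholders, and the exact list of components OpenAI signs is not assumed here; this only shows the shape of the mechanism.

```text
GET /article HTTP/1.1
Host: example.com
Signature-Input: sig1=("@method" "@authority" "@path");created=1735689600;keyid="placeholder-key"
Signature: sig1=:cGxhY2Vob2xkZXItc2lnbmF0dXJlLWJ5dGVz:

The server looks up the vendor's public key for "placeholder-key" and verifies
the signature over the listed components; an unverifiable signature means the
request is not from the claimed agent.
```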
Crawlers dedicated to training and search are identifiable. The table below is the updated mapping I use in client audits.
| User-agent | Vendor | Function | Respects robots.txt |
|---|---|---|---|
| GPTBot | OpenAI | Training | Yes |
| OAI-SearchBot | OpenAI | Live search | Yes |
| ChatGPT-User | OpenAI | User agent (agentic) | No (user-initiated) |
| PerplexityBot | Perplexity | Live search | Yes |
| Perplexity-User | Perplexity | User agent | No (user-initiated) |
| ClaudeBot | Anthropic | Training | Yes |
| Google-Extended | Google | Training (Gemini) | Yes (separate from Googlebot) |
| Google-Agent | Google | Agent (Gemini) | No |
The "training" vs "user-initiated" user-agent distinction is the key point. ChatGPT-User and Perplexity-User ignore robots.txt because they formally "act on behalf of a user who typed the request," and OpenAI states this openly in its public publisher documentation. Awkward consequence: if you blocked GPTBot thinking you kept OpenAI out, ChatGPT-User passes anyway. To actually block it you need a firewall rule on official OpenAI IPs, not a robots.txt directive.
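As an illustration of that firewall rule, here is an nginx-style deny list. The CIDR ranges below are placeholders (documentation ranges, not real OpenAI addresses); the real ones come from OpenAI's published IP lists and change over time, so they should be refreshed programmatically, not hardcoded once.

```text
# nginx: block user-initiated agent traffic at the IP level,
# since ChatGPT-User ignores robots.txt by design.
location / {
    deny 203.0.113.0/24;    # placeholder range - substitute OpenAI's published CIDRs
    deny 198.51.100.0/24;   # placeholder range
    allow all;
}
```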
The Reddit vs. Perplexity case, filed in New York federal court on October 22, 2025, brought this exact issue to court. Reddit used "marked bills," content deliberately exposed only to Google's index, to prove Perplexity reproduced them in its responses while bypassing blocks. The case is still open but it crystallized the position of major platforms: robots.txt is a signal, not a binding contract, and real enforcement requires lawyers.
For a deeper dive into the layers of an AI-aware audit, I've recently written a complete technical SEO audit guide beyond Googlebot with five concrete layers and a 30-second curl test to simulate GPTBot on your site.
What to do on your site when visitors are AI agents
Three categories of intervention, in increasing order of cost. I apply them in client projects following the effort-to-return ratio, because optimizing for AI agents without first solving the technical basics remains an exercise in style.
Zero-cost interventions: robots.txt cleanup and GA4 segments
Open robots.txt and check there are explicit rules for every user-agent in the table above. The most common error I see is a generic Disallow: / for User-agent: * overridden by more permissive specific rules below it. If you want to be visible in ChatGPT Search but not feed GPT-5's training, block GPTBot and let OAI-SearchBot and ChatGPT-User through. If you want to opt out of Gemini training but stay in Google, block Google-Extended and keep Googlebot.
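As a sketch, a robots.txt covering both scenarios above: visible in ChatGPT Search but out of GPT training, and out of Gemini training while staying in Google. Remember that ChatGPT-User passes regardless of what you write here.

```text
# Block OpenAI training, allow OpenAI live search
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Allow: /

# Opt out of Gemini training without touching Googlebot
User-agent: Google-Extended
Disallow: /

# Everything else keeps normal access
User-agent: *
Allow: /
```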
In GA4, create dedicated segments for referral traffic from chat.openai.com, perplexity.ai, claude.ai, copilot.microsoft.com, and gemini.google.com. It's the most direct, least noisy way to measure whether AI users actually land on your site. On the Italian sites I monitor, this source weighs 1-3% of total traffic, with a conversion rate often higher than the organic channel. The relative value already exceeds the absolute volume.
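For the segment condition, a "session source matches regex" pattern covering those five referrers might look like the line below (RE2 syntax, which GA4 uses; dots escaped so they match literally):

```text
chat\.openai\.com|perplexity\.ai|claude\.ai|copilot\.microsoft\.com|gemini\.google\.com
```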
Low-cost interventions: schema markup and semantic HTML
JSON-LD schema remains the LLM-preferred format because it's a contiguous, parseable block. BrightEdge research from 2025 found sites with valid FAQ schema and structured data had 44% more AI citations, though the data has clear selection bias (better-curated sites tend to already be more authoritative). An independent late-2024 study found no correlation between schema coverage and citation rate, so the evidence is mixed. What's certain: schema is cheap to implement and does no harm.
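A minimal FAQPage JSON-LD block of the kind the research refers to, as one contiguous parseable unit. The question and answer text are placeholders to adapt to your own content.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do AI browsers respect robots.txt?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Dedicated crawlers do; user-initiated agents like ChatGPT-User do not."
    }
  }]
}
```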
Semantic HTML is equally important. Agents like Atlas and Comet read the accessibility tree, not the pixel rendering. A <div onclick> isn't a button to them, it's a text block. Replacing generic divs with <button>, <nav>, <main>, <article> and adding descriptive alt text to images does more for AI SEO than many expensive tweaks. WebAIM's Million Report from February 2026 found an average of 56.1 accessibility errors per page across the top million sites: there's margin everywhere.
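A minimal before/after of the pattern just described, with a hypothetical handler name:

```html
<!-- Before: to the accessibility tree this is plain text, not an action -->
<div onclick="addToCart()">Add to cart</div>

<!-- After: exposed as a real button an agent can find and press -->
<button type="button" onclick="addToCart()">Add to cart</button>
```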
Technical interventions: rendering, performance, entity-rich schema
If your site is a client-side single-page application, the first HTML the server sends is an empty shell. GPTBot and ClaudeBot don't execute JavaScript, so they see nothing. The fix is server-side rendering or static site generation: in Next.js both can be enabled with a few hours of work, in Nuxt it's the default, and in SvelteKit too. The 30-second test is a curl with a GPTBot user-agent on your homepage: if the response doesn't contain the main text of your articles or offers, you have a structural problem no surface fix solves.
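A sketch of that 30-second test. The URL is a placeholder for your own homepage, and the user-agent string is a simplified form of OpenAI's published GPTBot token (the real one is longer):

```shell
#!/bin/sh
# Fetch a page the way a no-JavaScript crawler sees it: raw server HTML only.
URL="${1:-https://example.com/}"            # placeholder: substitute your homepage
UA="GPTBot/1.1 (+https://openai.com/gptbot)"  # simplified token
# Pull out heading and paragraph tags; an empty result means the crawler
# gets an empty shell and your content needs SSR or SSG.
curl -sA "$UA" "$URL" | grep -oE '<(h1|h2|p)[^>]*>' | head -10
```

If the output is empty while the same page looks full in your browser, the content is being assembled client-side and never reaches the crawler.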
On performance, AI agents work with tight retrieval timeouts. The shared engineering estimate in the field is 1-5 seconds per request, though vendors don't publish official numbers. A site that takes three seconds to load critical content gets read poorly or not at all by an agent running ten tasks in parallel. Core Web Vitals stay relevant, and per Web Almanac 2025 data, 52% of mobile sites fail at least one core metric in field data. Improving LCP and INP isn't optional.
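A quick way to see whether your critical content arrives inside that window, again with a placeholder URL. Time-to-first-byte and total time are the two numbers to compare against a 1-5 second retrieval budget:

```shell
#!/bin/sh
# Measure time-to-first-byte and total load time with curl's write-out variables.
URL="${1:-https://example.com/}"   # placeholder: substitute your own pages
curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' "$URL" \
  || echo "request failed"
```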
On the content strategy side, I've written a piece that complements this one on the text angle: my GEO 2026 content strategy, with up-to-date data, explains how to write to be cited in AI responses.
Limits of this analysis and what we still don't know
I want to be clear on what this article doesn't tell you, because I've seen too many consultants sell "agentic browser optimization" as a magic pill. Five things declared upfront, not after.
First limit: adoption data for Atlas and Dia is opaque. OpenAI and Atlassian haven't published MAU, active downloads, or market share. Anything you read about Atlas having "X million users" comes from press releases or indirect extrapolations. Comet is the only case with verifiable public data. Any SEO impact estimate based on adoption numbers rather than behavioral data should be treated with skepticism.
Second limit: AI referral traffic is still small. Similarweb measures 0.5-1% of total organic traffic as attributable to AI platforms, and the number is slightly declining in Q1 2026. Optimizing for AI browsers while 99% of traffic still comes from classic Google is a bet on future positioning, not today's imperative. Don't neglect the standard Google side.
Third limit: the llms.txt standard lives only on paper. No LLM vendor has publicly confirmed consuming it from external sites, and John Mueller has explicitly stated Google doesn't use it. SE Ranking, on 300,000 domains, found no correlation between llms.txt presence and AI citations. You add it because it costs zero, not because it moves the needle.
Fourth limit: "agentic capability" benchmarks are all vendor-driven. There's no independent report comparing Atlas, Comet, and Dia on task success rate, speed, reliability in a replicable way. Anything you read on "agentic capabilities" comes from journalistic reviews or consultant blog posts, rarely from systematic testing.
Fifth limit: the regulatory frame is moving. The Reddit vs. Perplexity case, W3C proposals for an AI access manifest, the European AI Act, and moves by Italy's Privacy Authority on generated content could change the rules in the next eighteen months. What's a voluntary best practice today could become a compliance requirement.
Where to start if you have an hour today
If you have a free hour and you manage your site, this is the order I run things in client projects to get the most with the least investment.
- Minutes 0-15: user-agent audit: open server logs or your CDN dashboard. Identify the three most frequent AI user-agents in the last 4 weeks (GPTBot, ClaudeBot, PerplexityBot typically). Compare them with robots.txt. Decide consciously what to block and what to let through, based on your content value.
- Minutes 15-30: rendering test with curl: simulate GPTBot with a curl command on the homepage and three main pages. If the response doesn't contain article text or offers, you have a server-side rendering problem that beats everything else.
- Minutes 30-45: quick schema audit: open Google Rich Results Test on five representative pages. Check Organization, Article, Product, FAQPage. List missing properties, prioritize the most visible ones (datePublished, author, sameAs).
- Minutes 45-55: GA4 AI segments: create the "AI Traffic" segment with sources chat.openai.com, perplexity.ai, claude.ai, copilot.microsoft.com, gemini.google.com. Compare conversion rate and average time on site with the organic channel. AI referral often converts better than organic even at lower volumes.
- Minutes 55-60: action plan: rank the three fixes with the best impact-to-cost ratio. Schema markup is almost always first. SSR is second if you have a SPA. Robots.txt is free, but it's a strategic call: make it deliberately.
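The minutes 0-15 step can be sketched as a one-file script. It assumes a combined-format access log at a placeholder path; point it at your own server or CDN export.

```shell
#!/bin/sh
# Count hits per AI crawler token in an access log over the audit window.
LOG="${1:-access.log}"   # placeholder path: substitute your log file
for bot in GPTBot OAI-SearchBot ClaudeBot PerplexityBot CCBot Google-Extended; do
  count=$(grep -c "$bot" "$LOG" 2>/dev/null)   # grep -c prints 0 when nothing matches
  printf '%-16s %s\n' "$bot" "${count:-0}"
done
```

The output is a simple frequency table you can compare against the directives currently in your robots.txt.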
This isn't the full roadmap, it's the first hour. A serious audit takes weeks of work on logs, monitoring, systematic tests. But in sixty minutes you understand where your site stands and you stop chasing ideas like llms.txt while your homepage is invisible to GPTBot.
What I'm watching in the coming months
Three fronts I'm tracking that I don't yet have enough data on to tell you how to act, but that will shift the picture by the end of 2026. First, Dia's integration with Atlassian could create the first natively B2B agentic browser, with crawling characteristics distinct from consumer ones. Second, OpenAI has announced general availability of Atlas on Windows for Q2 2026, which will widen the audience beyond the Mac segment that dominates today. Third, the Reddit vs. Perplexity case, whatever the outcome, will set precedent on the legitimacy of anti-scraping blocks in Western jurisdictions.
On ChatGPT Search and Bing visibility specifically I've written a focused guide on how to appear in AI search results with practical steps to be cited even if you're not a major brand.
If you manage a site and want to know where you stand on these three browsers, the SEO consulting services page describes how I work and how I structure engagements. You can write me from the contact page to start, the first twenty minutes are free.
Frequently Asked Questions
How do AI agentic browsers change SEO?
Click patterns change, and so does the type of non-human reader consuming your site. Agents read the page in their sidebar and often don't send the user to your landing page, lowering classic organic CTR (Pew Research measured a 46.7% relative drop when an AI Overview appears). Sites cited in AI responses, however, gain higher-quality clicks with conversion rates up to 5x higher than uncited top-10 sites. SEO doesn't die, it shifts from "being first" to "being cited."
Do AI browsers respect robots.txt?
OpenAI's dedicated crawlers (GPTBot, OAI-SearchBot) and Perplexity's (PerplexityBot) respect robots.txt. But when Atlas or Comet operate in agent mode, they send the same user-agent string as any Chrome, indistinguishable from a human visitor: robots.txt doesn't apply. ChatGPT-User and Perplexity-User, the declared user-agents for user-initiated requests, explicitly ignore robots.txt per OpenAI's official policy. To actually block an agent you need a firewall block on the vendor's official IPs, not a robots.txt directive.
Do Atlas, Comet, and Dia execute JavaScript?
Yes, because they're Chromium-based and use the standard Blink+V8 engine: they render pages the same way Chrome does. It's different for training crawlers (GPTBot, ClaudeBot, CCBot), which, per Cloudflare and the 500 million GPTBot fetches analyzed by Passionfruit in 2025, read only the initial HTML with no JS execution. For agents like Atlas and Comet, what matters is not JavaScript but the accessibility tree: they read the page's semantic structure (headings, buttons, links, forms), not the pixels.
Can you track AI browsers in GA4?
GA4 doesn't yet distinguish agentic browser user-agents from standard Chrome, because vendors mask them deliberately. What you can track is referral traffic from AI platforms: chat.openai.com, perplexity.ai, claude.ai, copilot.microsoft.com, gemini.google.com. Create a dedicated GA4 segment with these sources and compare conversion rate and average time on site with the organic channel. On the Italian sites I monitor, this source weighs 1-3% of total traffic, often with a conversion rate higher than organic.
Is classic SEO still worth the investment?
Yes, and I say this with data. Similarweb measures total traffic from AI platforms below 1% of global organic traffic in early 2026, slightly declining from the October 2025 peak. Google remains the main traffic driver for 95% of sites, and ChatGPT alone accounts for around 87% of all AI referral traffic. Classic SEO (Core Web Vitals, indexing, internal linking, schema) stays the foundation. Optimizing for AI browsers without solving the basics is a bet on future positioning, not a substitute for the present.
Can Atlas drain Google Ads budgets?
It's a real risk flagged after the Atlas launch. When the agent navigates and clicks on paid results for the user, it generates traffic that looks like a human visitor on the Google Ads side but doesn't convert like one. Side effect: smart bidding models and retargeting audiences get "polluted" with non-human signals, and the system optimizes toward an audience that isn't really your target. Mitigation goes through placement exclusions, click-to-conversion ratio monitoring, and evaluating solutions like Vercel BotID or Cloudflare Bot Management on sites where ad spend matters.
About the author
Claudio Novaglio
SEO Specialist, AI Specialist, and Data Analyst with over 10 years of experience in digital marketing. I work with companies and professionals in Brescia and across Italy to grow organic visibility, optimize advertising campaigns, and build data-driven measurement systems. Specialized in technical SEO, local SEO, Google Analytics 4, and integrating artificial intelligence into marketing processes.
Want to improve your online results?
Let's talk about your project. The first consultation is free, no commitment.