
LLM Traffic Is a Blind Spot in Your Analytics. Here's Why.

LLM traffic attribution testing results across ChatGPT, Gemini, and Claude on mobile apps, mobile browsers, Mac, and Windows

Your AI Channel Is Growing. Your Analytics Aren’t Keeping Up.

ChatGPT, Gemini, Claude, Perplexity, and other LLMs are generating more traffic than you think. The volume is not trickling in; it is accelerating at a pace that is starting to rival established channels. Some analysts estimate AI referral traffic could surpass traditional search referrals by 2028. We think that, on some sites, it is already happening.

This article explains why, using hands-on testing across mobile and desktop devices, browser debug modes, server-side request logging, and other techniques across three leading LLM platforms: ChatGPT, Gemini, and Claude.


First, Understand Where Your Users Are

Before getting into what breaks and when, it helps to understand the scale of the problem by device type.

Mobile is the dominant platform for web traffic. According to Cloudflare Radar, which tracks HTTP requests across a network handling more than 20% of all global web traffic, mobile devices account for roughly 50% of all web requests. That number is best treated as a floor: Cloudflare’s network carries a significant volume of API traffic, B2B services, and developer tooling, all of which skews heavily desktop. Consumer-facing web traffic runs considerably higher on mobile. Across the GA4 accounts we work with day to day, mobile typically accounts for somewhere between 70% and 90% of total sessions. Desktop traffic is real, but secondary.

  • ~50% of global web requests come from mobile devices (Cloudflare Radar)
  • 70–90% of consumer-facing site sessions are mobile (GA4 accounts, WISLR client data)
  • 20%+ of global web traffic passes through Cloudflare's network (Cloudflare Radar)
  • 2028 is the projected year AI referrals could surpass traditional search (analyst estimates)

Mobile is where LLM attribution fails most completely. The device most of your audience uses is the device where measurement is most broken.


What We Tested

To understand exactly where tracking breaks, we ran hands-on tests across mobile and desktop devices using the native apps and web browsers for ChatGPT, Gemini, and Claude. Testing was conducted on an iPhone running the latest version of iOS, an Android device running the latest version of Android, a Mac running the latest version of macOS, and a Windows PC running the latest version of Windows.

For each scenario, we looked at whether UTM parameters were present, whether a referrer was passed to the destination site, and whether session identity persisted when a user moved from the LLM interface into a standard browser.

Here is what we found.


The Results at a Glance

| Platform | Device | Interface | UTMs Present | Referrer Present | Trackable |
|----------|--------|-----------|--------------|------------------|-----------|
| ChatGPT | Mobile | App | Yes (utm_source=chatgpt.com) | No | Partial |
| ChatGPT | Mobile | Browser | Yes | Yes | Yes |
| ChatGPT | Mac / Windows | App | Yes (utm_source=chatgpt.com) | No | Yes |
| ChatGPT | Mac / Windows | Browser | No | Yes | Yes |
| Gemini | Mobile | App | No | No | No |
| Gemini | Mobile | Browser | No | No | Yes |
| Gemini | Mac / Windows | Browser | No | Yes | Yes |
| Claude | Mobile | App | No | No | No |
| Claude | Mobile | Browser | No | No | Yes |
| Claude | Mac / Windows | App | No | No | No |
| Claude | Mac / Windows | Browser | No | No | No |
Note 01 · The "partial" flag

The ChatGPT mobile app row is marked as partial because, even though a UTM is present, the realistic user journey (discovering something in the app and converting later in a device browser) means the UTM rarely survives to conversion. We estimate it does only 10% to 20% of the time.

Note 02 · Why the trackable rows are misleading

Every browser row except Claude on desktop shows as trackable, but the majority of mobile LLM usage happens through native apps, not the browser. The trackable scenarios are the minority. The app rows, where tracking largely fails, are what most of your audience is actually experiencing.


Mobile: Where Most Traffic Is, and Where Tracking Breaks

The table above tells the story clearly: for mobile apps, there is little to no trackable signal. But why?

The answer is the web view, and it is worth understanding what that actually means before getting into the platform specifics.

Key Term
Web view

An in-app browser that opens when a user taps a link inside a mobile app like ChatGPT, Gemini, or Claude. It looks like a browser, but it is isolated from the device's real browser, Safari or Chrome, and does not share cookies, session data, or identity with it.

Why it matters for measurement: any tracking that fires inside a web view stays trapped there. When the user later opens their real browser to continue the journey, they arrive as a brand new visitor with no history. The original LLM referral is gone.

The practical consequence: when a user finishes browsing inside the LLM app and later opens their regular device browser to keep researching or convert, there is no continuity. No shared identity. No handoff. That visit becomes unattributed direct traffic.

ChatGPT on mobile is the most trackable of the options tested. The app passes utm_source=chatgpt.com on outbound links, and technically there is a path to attribution, but it depends on a sequence of events that isn’t very likely. For that UTM to survive all the way to a conversion, the user would need to tap the link inside ChatGPT, get taken into the in-app web view, and then explicitly tap “Open in Browser” to hand the session off to their native browser before continuing. Note that this journey only persists the session on Android devices. On iOS, even this workaround breaks session continuity.

In practice, someone discovers you inside ChatGPT. They read about you, maybe tap through to your site briefly, and move on. Later, maybe an hour later, maybe two days later, they search for you on Google, or just type your URL directly, and that is when they may begin the process of converting. That session carries no trace of the ChatGPT interaction that started the whole thing. The UTM is gone. The referrer is gone. GA4 records it as direct or organic search, and ChatGPT gets no credit. And yes, this is true even if you have tracking features like Google Signals enabled in your GA4 property.

Gemini and Claude on mobile pass neither UTM parameters nor a referrer in any scenario tested. Sessions from these platforms land as pure direct traffic. There is no signal connecting them back to the LLM that drove them.

One nuance worth noting: if a user discovers a site and converts inside the web view, there is internal session continuity within that isolated context. The web view does retain its own cookies across sessions, so a journey completed entirely inside the app is theoretically measurable. Standard cookie tracking limitations still apply, and how often users complete a full conversion loop inside an app web view is itself an open question worth exploring.


Desktop: Better, But Still Inconsistent

Desktop offers more favourable tracking conditions than mobile, but the picture is still fragmented depending on which platform and interface you use.

Unlike mobile, desktop LLM apps do not trap links in isolated web views. They hand outbound links directly to the user’s default browser. So the web view isolation problem does not apply on desktop. What does still apply is whether the platform bothers to pass UTMs or a referrer at all, and on that front, the results are mixed.

ChatGPT on desktop (native app) passes utm_source=chatgpt.com on outbound links, the most reliable desktop scenario of any platform tested. ChatGPT in the browser passes a referrer but no UTMs.

Gemini has no desktop app on Mac or Windows. All desktop Gemini usage goes through the browser. Gemini in the browser passes a referrer but no UTMs, same story as browser-based ChatGPT.

Claude is the weakest performer across all desktop scenarios. The native desktop app passes neither UTMs nor a referrer. Claude in the browser also passes neither. Every session from Claude on desktop lands as completely unattributed direct traffic regardless of how the user got there.


The Compounding Problem: Even Good Attribution Doesn’t Always Make It to Conversion

Assume for a moment that the tracking works. The UTM is populated, the referrer is present, the session is attributable. You might think you are covered.

You are not.

The path from an LLM interaction to a completed conversion is rarely a single session. A user discovers something through ChatGPT on their phone during the day. They click the link, land on the page, browse for a few minutes. They leave without converting. Two days later, they come back on their laptop to follow through. At that point, the original UTM may have expired, the device may be different, the cookies may be gone.

Cookie expiry, cross-device journeys, and multi-session paths each independently break attribution. Together, they ensure that even the fraction of LLM-referred sessions correctly tagged at the first touchpoint rarely receives credit for the eventual conversion.

This is not a problem unique to LLM traffic. But it hits harder here because LLM-assisted discovery tends to happen earlier in the consideration journey: people are researching and exploring, not ready to buy. The gap between that first AI-influenced visit and the eventual conversion is longer than a typical paid click-to-conversion window, which makes the cross-device problem worse.


What This Means for Your Data

Let’s put some rough numbers to this. The goal is not false precision: it is a reasonable range that helps you understand the scale of what you are likely missing.

Start with the device split. Cloudflare Radar puts global mobile web traffic at roughly 50% of all HTTP requests, and that number skews low for consumer-facing audiences because Cloudflare’s network carries a heavy mix of API and developer traffic. In the GA4 accounts we work with, mobile typically runs between 70% and 90% of sessions. We use 70% here to stay conservative.

Then there is the question of how people actually use LLMs on their phones. Not everyone uses the native app. Some access ChatGPT, Gemini, or Claude through their mobile browser, and browser usage is meaningfully more trackable. Two assumptions bracket the range.

Baseline assumptions used in both cases:

  • Mobile/desktop split: 70% mobile, 30% desktop
  • App survival rate: 20% of mobile app clicks successfully pass a usable signal (e.g. an Android user tapping “Open in Browser”, or a conversion happening inside the web view)
  • Conversion path attrition: 25% penalty applied for signal loss from cross-device jumps and expired cookies
  • Google AIO factor: A portion of AI-influenced discovery now happens through Google AI Overviews and AI Mode, which GA4 records as standard organic search with no way to separate it
Case A · Optimistic
Assumes a 50/50 split of mobile users between the LLM app and the mobile browser. Your dashboard shows roughly 40% of your actual LLM traffic; real traffic is ~2.5× higher.
Breakdown: Apps (35% of traffic × 20% signal survival) = 7%. Browsers (35%, fully trackable) = 35%. Desktop (30% × ~50% capture) = 15%. Click-level total: 57%. After 25% attrition: ~43%. AIO misclassification reduces this further.

Case B · Pessimistic
Assumes 80% of mobile users in the LLM app, 20% in the mobile browser. Your dashboard shows roughly 20–25% of your actual LLM traffic; real traffic is ~4–5× higher.
Breakdown: Apps (56% of traffic × 20% signal survival) = 11%. Browsers (14%, fully trackable) = 14%. Desktop (30% × ~50% capture) = 15%. Click-level total: 40%. After 25% attrition: ~30%. AIO misclassification reduces this further.

The range

The assumptions above are estimates; feel free to plug in numbers that better reflect your own audience. Use a different mobile split, a different app-to-browser ratio, a different attrition rate. The model is not meant to be precise. It is meant to show that no matter which reasonable numbers you choose, the conclusion is the same: there is substantial underreporting, and the gap is large enough to matter.
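The back-of-envelope model behind both cases can be written down in a few lines. This is a sketch using the assumptions stated above; swap in your own numbers to bracket your own undercount. The Google AIO factor is not modeled, which is why the article's headline figures run lower than these outputs.

```python
def visible_share(mobile_share: float, app_share: float,
                  app_survival: float = 0.20, desktop_capture: float = 0.50,
                  attrition: float = 0.25) -> float:
    """Fraction of real LLM traffic that survives into your dashboard."""
    app = mobile_share * app_share * app_survival        # app clicks that pass a usable signal
    browser = mobile_share * (1 - app_share)             # mobile browser, fully trackable
    desktop = (1 - mobile_share) * desktop_capture       # mixed desktop results
    click_level = app + browser + desktop
    # Cross-device jumps and expired cookies shave off a further 25% by default.
    # AIO misclassification is not modeled; it reduces both cases further.
    return click_level * (1 - attrition)

# Case A (optimistic): 70% mobile, 50/50 app vs browser -> ~43% visible, ~2.3x undercount
case_a = visible_share(0.70, 0.50)
# Case B (pessimistic): 70% mobile, 80/20 app vs browser -> ~30% visible, ~3.3x undercount
case_b = visible_share(0.70, 0.80)
print(f"Case A: {case_a:.0%} visible ({1 / case_a:.1f}x undercount)")
print(f"Case B: {case_b:.0%} visible ({1 / case_b:.1f}x undercount)")
```

Dividing your dashboard's LLM share by `visible_share(...)` gives a rough estimate of the channel's true size.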

Interactive Model · Estimate your own undercount

(The interactive calculator appears in the web version of this article. Its defaults: mobile share of traffic 70%, share of mobile users in the LLM app 50%, app signal survival rate 20%, desktop signal capture 50%, conversion path attrition 25%. With those defaults, your dashboard shows ~43% of your actual LLM traffic; real traffic is ~2.3× higher. Breakdown: Apps (35% × 20%) = 7%. Browsers (35%) = 35%. Desktop (30% × 50%) = 15%. Click-level: 57%. After 25% attrition: ~43%.)

With our assumptions, the range runs from roughly 2.5x on the optimistic end to 5x on the more pessimistic end, and potentially higher once you factor in LLM-influenced Google organic sessions that are impossible to separate with standard analytics.

If your dashboard shows that 1.5% of your sessions come from LLMs, the real figure is likely somewhere between 4% and 8% of total sessions, and it is growing quickly. That is an entire channel hiding in your Direct and Organic reports.


Can You Fix It?

Partially. There is no complete solution, but there are meaningful improvements available to teams willing to invest in more sophisticated analytics and tracking.

Server-side tracking and request logging can capture signals that client-side JavaScript misses entirely. Device fingerprinting tools can maintain probabilistic identity across web view and browser contexts on the same device, partially bridging the isolation gap that standard cookie-based tracking can’t cross. Neither approach gets you to full measurement, and neither solves the cross-device problem without a logged-in user state to anchor both sessions together.
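As a sketch of the server-side approach, the following WSGI-style middleware records LLM signals at the edge, before client-side JavaScript (or WebKit's cookie restrictions) can lose them. This is a minimal illustration under assumptions: the header names are standard WSGI keys, but the host list and the in-memory log are placeholders for whatever datastore you actually use.

```python
from urllib.parse import parse_qs

# Assumed LLM referrer hosts; extend for your own traffic mix.
LLM_SOURCES = ("chatgpt.com", "gemini.google.com", "claude.ai")

class LLMSignalMiddleware:
    """WSGI middleware sketch: capture utm_source and Referer server-side."""

    def __init__(self, app, log):
        self.app = app
        self.log = log  # stand-in for a first-party datastore

    def __call__(self, environ, start_response):
        utm = parse_qs(environ.get("QUERY_STRING", "")).get("utm_source", [""])[0]
        referrer = environ.get("HTTP_REFERER", "")
        if any(s in utm or s in referrer for s in LLM_SOURCES):
            # Record the signal before the page (and its analytics tag) even renders.
            self.log.append({"path": environ.get("PATH_INFO", "/"),
                             "utm_source": utm, "referrer": referrer})
        return self.app(environ, start_response)
```

Wrapping your WSGI application in this middleware captures the referral signal on the raw request, independent of whatever GA4 later sees, which is why server-side counts typically run higher than client-side ones.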

The honest answer: you will never get full measurement. The architecture of mobile operating systems, the referrer policies of LLM platforms, and the nature of multi-session cross-device journeys all work against it. But that is not the most important thing to understand about this channel.

The goal of measurement is not a perfect count of every conversion: it is directional understanding good enough to make decisions. You do not need to attribute every LLM-assisted conversion to know that this channel is growing, that your share of it is something you can influence, and that the relative changes you make to your AI visibility strategy will show up in your numbers. If your LLM-referred traffic doubles after you restructure your content for better AI citation, that movement is meaningful whether or not you can tie every sale back to a specific chat session.

This article is not an argument for giving up on measurement. It is an argument for not letting imperfect measurement become an excuse for inaction. The teams that treat AI visibility as a serious growth channel right now, not when the attribution is cleaner, not when GA4 catches up, but now, are the ones building a lead that will be hard to close.


Methodology

Testing was conducted on an iPhone running the latest version of iOS, an Android device running the latest version of Android, a Mac running the latest version of macOS, and a Windows PC running the latest version of Windows. Linux was not tested directly but is expected to follow similar patterns. Each LLM platform, ChatGPT, Gemini, and Claude, was tested via its native app (where available) and via the device browser. For each scenario, we looked at whether UTM parameters populated, whether a referrer was passed to the destination site, and whether session identity persisted when navigation moved from the LLM interface to the device’s default browser. Server-side logging was used to supplement client-side observations.

This research reflects platform behavior at the time of testing (April 12, 2026). LLM platforms update their apps and web interfaces frequently, and tracking behavior may change.


The Bottom Line

AI channels are not a future consideration. They are active now, growing, and shaping decisions at scale. The problem is that the infrastructure most teams use to measure these channels was built for a world where a click produced a reliable referrer and a cookie that lasted long enough to see a conversion.

That is not how LLM traffic works.

Treat your AI channel numbers as a significant undercount. The sessions you can see are the ones that happened to survive every layer of tracking loss. The ones that didn’t are far more numerous, and they include people who found you, were influenced by what an AI said about you, and converted without leaving any trace you could follow.

Frequently Asked Questions

Why does my GA4 show almost no traffic from Claude or Gemini?

Because both platforms pass no UTM parameters and no referrer in the vast majority of scenarios we tested. On mobile, links open in isolated in-app web views that never connect to your standard analytics environment. On desktop, Claude passes nothing in either the app or the browser. Gemini passes a referrer in the browser but no UTMs. Without either, most of this traffic lands in GA4 as direct or simply goes uncounted.

ChatGPT shows some traffic in my reports. Does that mean it’s being tracked accurately?

Not quite. ChatGPT is the most trackable of the three platforms we tested. It appends utm_source=chatgpt.com in several scenarios, and its desktop app does this reliably. But on mobile, the path to attribution is narrow and platform-dependent. The UTM can survive if the user taps “Open in Browser” inside the web view, but this only works on Android. On iOS, that handoff does not preserve the session at all. The identity is lost when the user leaves the web view regardless of how they exit. So even in the best case, you need the user to be on Android, using the app, and explicitly tapping “Open in Browser” before navigating further. That is a small slice of your actual ChatGPT mobile traffic. The numbers you see in GA4 are real, but they represent a fraction of what ChatGPT is actually driving.

Does this problem only affect mobile users?

Mobile is where it is most severe, but desktop is not clean either. Claude on desktop passes nothing across all scenarios tested, app or browser. Gemini on desktop passes a referrer but no UTMs, which may or may not be classified correctly depending on your GA4 setup. ChatGPT’s desktop app is the one reliable bright spot. So even if your audience were entirely on desktop, you would still be missing a significant portion of LLM-influenced traffic.

What is a web view and why does it cause tracking problems?

A web view is an in-app browser that opens when you tap a link inside a mobile app. It looks like a browser, but it is isolated from your device’s actual browser. It does not share cookies, session data, or identity with Safari or Chrome. So any tracking that fires inside a web view stays trapped there. When the user later opens their real browser to continue their journey, they look like a brand new visitor with no history.

If the final conversion happens inside the app’s web view, is that tracked?

It can be, but it is not clean. The conversion has to happen inside the web view for the attribution chain to hold. The user does not need to have spent their whole journey there, but that final step does. The catch is that iOS makes up the majority of mobile traffic for most audiences, and on iOS the web view is subject to WebKit’s cookie restrictions, including Intelligent Tracking Prevention. Those restrictions can limit or break the continuity you are relying on. So while this scenario is theoretically trackable, in practice it is the least broken option available rather than a dependable path.

Can I fix this with server-side tracking?

Server-side tracking helps, but it does not solve everything. It can capture signals that client-side JavaScript misses, including referrer data that never makes it into GA4. Device fingerprinting tools can also help by maintaining probabilistic identity across web view and browser contexts on the same device. But neither approach solves the cross-device problem, and neither gives you full measurement. You will get meaningfully closer to the truth, but you will not get all the way there.

How much is LLM-assisted conversion traffic actually underreported?

Based on our analysis, somewhere in the range of 2.5x to 5x, depending on your audience and how they use LLM platforms. The conservative estimate (Case A) assumes half of mobile users access LLMs through the browser, where tracking largely works, producing roughly a 2.5x underreport after accounting for conversion path losses. The more realistic estimate (Case B) assumes 80% of mobile users are on the native app, where tracking essentially fails, producing a 4 to 5x underreport. Both estimates are likely still conservative, because a growing portion of what GA4 classifies as Google organic is actually LLM-influenced traffic arriving through AI Overviews and AI Mode, traffic that is impossible to separate with standard analytics.

Should I be investing more in AI visibility if I can’t measure the results?

Yes, arguably more so, not less. The fact that you cannot measure the full impact of this channel does not mean the impact is not there. It means your current tools are under-qualified to see it. The sessions that do make it through to your reports are already showing meaningful engagement. The ones that don’t are orders of magnitude more numerous. Treating this channel as unimportant because the numbers look small is exactly the wrong conclusion to draw.