You are probably funding your competitors’ recommendations without realizing it.
Your content gets cited. Your competitors get recommended. You don’t. That gap is a ghost citation problem.
What Is a Ghost Citation?
A ghost citation is when your content is trusted enough to be sourced, but your brand isn't known well enough to be mentioned.
Here are two examples of where we see this in our own data:

As you can see, Seer's content is getting cited, but Seer as an agency is getting ignored. In both of these prompts, our content is doing the work, but our competitors are getting the recommendations.
It's basically ghosting your brand. The LLM is saying...
We see you lurking. We're just not trained to talk about you.
That's the gap we're measuring. Not "is your content good enough to earn a citation?" It clearly is. The question is whether the AI knows your brand well enough to say your name when it's actively recommending solutions in your category.
We're basically saying to the LLM…”Say my name, bitch!”
Why the Sequencing Matters
Before we get to the data, I want to address something that comes up every time we present this finding: how does this actually happen inside the model?
There’s a growing body of research exploring how LLMs surface brands and sources. For example, SparkToro recently showed that AI systems are highly inconsistent when recommending brands. This suggests that what gets cited and what gets recommended are not as tightly coupled as many marketers assume.
That aligns with what we’re seeing. But it doesn’t fully explain it.
Here’s what the evidence strongly supports, with an honest caveat about what we cannot prove.
The leading hypothesis - backed by six independent behavioral tests performed by Seer Interactive across 362,188 LLM responses - is that citations are post-hoc.
The LLM generates its answer first, deciding which brands to name from its parametric memory (the knowledge encoded during training). Then, in a retrieval step, it goes looking for sources to support those choices.
The citations are the bibliography, not the brainstorm.
Under this model, here is exactly what happens in a ghost citation:
- A buyer asks "who are the best solutions for [insert category here]?"
- The LLM reaches into its parametric knowledge of brands it associates with that category. It surfaces competitors.
- Retrieval runs. It finds a blog post from your domain that is topically highly relevant.
- Your URL gets appended as a supporting source.
- The response goes out: competitors recommended, your content cited, your brand silent.
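That sequence also suggests a simple detection rule. Here's a minimal Python sketch of how you might flag competitive ghost citations in your own response logs - the field names, brands, and URLs are placeholders, not from any specific tool:

```python
# Hypothetical response record: which URLs an LLM cited and which brands
# it mentioned by name. Flag the ghost-citation pattern described above.

def is_competitive_ghost_citation(response: dict, brand: str,
                                  brand_domain: str,
                                  competitors: set[str]) -> bool:
    """True when the brand's URL is cited, the brand is not mentioned,
    and at least one competitor IS mentioned in the same response."""
    cited = any(brand_domain in url for url in response.get("cited_urls", []))
    mentioned = brand in response.get("mentioned_brands", [])
    competitor_named = bool(competitors & set(response.get("mentioned_brands", [])))
    return cited and not mentioned and competitor_named

# Placeholder example: the brand's blog post is cited, two rivals get named.
response = {
    "cited_urls": ["https://www.example-brand.com/blog/compliance-guide"],
    "mentioned_brands": ["CompetitorA", "CompetitorB"],
}
is_competitive_ghost_citation(response, "ExampleBrand", "example-brand.com",
                              {"CompetitorA", "CompetitorB"})  # True
```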
The data signature that supports this: when a brand IS mentioned in a response, its citation rate jumps to 53.1%. When the brand is NOT mentioned, that same brand's citation rate drops to 10.6%. That is a 5x lift that goes in the wrong direction for anyone arguing citations cause mentions. If retrieval drove recommendations, those numbers would be similar, but they aren't close.
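If you want to reproduce that check on your own data, here's a toy sketch of the conditional comparison - the records and field names are hypothetical, and the counts are illustrative, not Seer's:

```python
# Compare citation rates conditioned on whether the brand was mentioned.
# A large gap between the two rates is the signature described above.

def conditional_citation_rates(responses, brand, domain):
    """Return (rate when brand mentioned, rate when brand not mentioned)."""
    def rate(subset):
        if not subset:
            return 0.0
        cited = sum(any(domain in u for u in r["urls"]) for r in subset)
        return cited / len(subset)
    mentioned = [r for r in responses if brand in r["mentions"]]
    not_mentioned = [r for r in responses if brand not in r["mentions"]]
    return rate(mentioned), rate(not_mentioned)

# Toy records: 2 responses mention BrandX (1 cites it),
# 3 don't mention it (1 still cites it).
toy = [
    {"mentions": ["BrandX"], "urls": ["https://brandx.com/post"]},
    {"mentions": ["BrandX"], "urls": []},
    {"mentions": ["Rival"], "urls": ["https://brandx.com/post"]},
    {"mentions": ["Rival"], "urls": []},
    {"mentions": ["Rival"], "urls": []},
]
conditional_citation_rates(toy, "BrandX", "brandx.com")  # (0.5, ~0.33)
```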
The honest caveat: we do not have access to LLM token generation logs. We cannot observe the sequence of operations inside the model directly. This is strongly supported behavioral evidence, not proven architecture. Some platforms may integrate retrieval differently. The post-hoc model describes the majority case well. We are not claiming it as universal.
Here is what does not change regardless of the sequence: your content cleared the retrieval threshold. Your brand did not clear the mention threshold. Those are two separate systems. The gap between them is exactly where ghost citations live.
How Big Is the Problem?
We analyzed 541,213 LLM responses across 20 brands and found a pattern that should change how every brand thinks about AI visibility. Kinda like seeing a real ghost…once you see ghost citations in your own data, you can't unsee them.
Competitive ghost citations are measurable in every sector we analyzed. Some industries have a manageable problem. Others have a brand crisis hiding inside their citation data.
We grouped our client portfolio into eight industry categories. The range within each reflects real variation among clients in that space, and that variation itself is a signal worth understanding.

The Hospitality & Travel range is the finding that should stop you cold. The gap between the lowest and highest performer in that category is more than 20 percentage points. Both are established brands with strong content programs. What separates them is not the quality of their content - it is the strength of their brand entity signals in the AI's knowledge graph. One brand's name consistently surfaces in recommendation contexts. The other's doesn't. The AI is making that distinction, and it is making it every time a traveler asks for a recommendation.
When your brand name is effectively synonymous with the category in how the broader web talks about it - when years of consistent brand investment have made you the default answer - the AI reflects that back. The competitive ghost citation problem nearly disappears. That is not an accident. It is the result of brand work that predates AI by decades.
The story isn't just that one brand is better than the other; it's that category owners have nearly zero ghost citations across every industry we measured.
By funnel stage, Awareness carries the highest competitive ghost rate at 5.0%. This is the most damaging place for it to happen. Awareness-stage prompts are category-formation moments - "what tools exist for X," "how do companies solve Y." The AI is building its list of brands worth knowing. If your content is informing that conversation and your name is not in the answer, you are funding your competitor's first impression on a buyer who has never heard of either of you.
Claude and Meta are excluded from this analysis. Neither platform returns citation URLs, so competitive ghost citations structurally cannot occur there - not because those brands are better known, but because the platform architecture does not surface citations.
This is Stranded Brand Equity
The business impact is simple: stranded brand equity.
Here is what that means in practice.
Every piece of content your team produces costs money. Research, writing, editing, publishing, distribution. When that content earns an AI citation, you have achieved something real: the AI validated your content as a trustworthy source on a topic your buyers care about.
In a competitive ghost citation, that investment generates a recommendation → for a competitor.
Your URL passed the retrieval threshold. It was relevant enough to surface as a supporting source. But passing retrieval and being recalled by name during the brand selection step are two completely different things. The AI already knew which brands to recommend before it went looking for citations. Your content showed up in the bibliography of an answer that recommended someone else.
That is stranded brand equity. The value was created. The brand did not capture it.
What Brands Should Do About This
First, determine if you have a ghost citation problem.
You’ll know you have a problem if…
- Your content gets cited by LLMs but your brand isn’t mentioned
- Competitors show up consistently in “best” prompts but you don’t
- Your branded search volume is flat while content output is rising
Ghost citations are a brand entity recognition problem, not a content problem. Adding more content will not fix a competitive ghost citation rate. It may make it worse. More retrievable content, same broken mention pattern.
The fix operates at three layers. And before I describe them, I want to be direct about timeline: none of these changes produce results overnight. AI models re-index content on their own schedules. We are actively measuring this with clients right now.
We are actively testing this and will share our findings as they’re uncovered, but here’s where we started…
In one case, we recommended that a risk and compliance software client update a blog post that had been cited more than 100 times over a 25-day baseline period - with zero brand mentions across all citations. Every single citation was a Type 1 footnote: the domain appearing in a URL bracket, used to support a factual claim about a regulatory framework. The brand was never described, discussed, or recommended anywhere in the prose. The AI was using the content as reference material while recommending competitors by name.
We recommended adding explicit brand language - "How [Brand] helps organizations implement these frameworks" - directly into the body of the post, so the AI could not extract the insight without the brand name attached.
The content changes went live on February 20, 2026. Twenty-nine days later, brand mentions on that URL remain at zero. The page is still being cited. The classification has not yet shifted from Type 1 footnote to Type 2 recommendation. Our sigmoid model projects full effect around week eight - mid-April.
This is not a failure. It is the reality of how AI systems work. They do not re-rank content in real time. The models learn from the web they indexed during training, and they update gradually. Changes you make today are investments in how the next model version perceives your brand. Expect weeks to months before measurable movement, not days.
Layer 1: Make your brand name inseparable from your key claims.
Your brand name needs to be the grammatical subject of the ideas the AI is already extracting from your content. Not "there are five approaches to compliance training." Instead: "At [Brand], our approach to compliance training starts with..." Not "whistleblower hotlines require these features." Instead: "[Brand]'s research shows whistleblower hotlines require..."
If your brand name is not in the sentence, the AI can absorb the insight and leave your name behind. The fix is structural - make that extraction impossible.
Layer 2: Build the entity graph the AI reads during brand selection.
The gap that produces ghost citations is between retrieval relevance and parametric brand recall. Your content clears retrieval. Your brand name is not being recalled when the model decides who to recommend. These are different problems.
Fixing entity recognition means building machine-readable signals that connect your brand name to your category expertise across the web:
- Wikidata entries
- Wikipedia presence where achievable
- Organization schema with sameAs markup on every page
- Consistent canonical brand name across all properties
- Author schema linking named experts to the brand organization
- FAQ schema where the brand name appears inside the answer text - not just the question
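As one illustration, here's what a minimal Organization schema with sameAs markup might look like, generated in Python for clarity - every name, URL, and Wikidata ID below is a placeholder you'd swap for your brand's canonical properties:

```python
import json

# Minimal Organization JSON-LD sketch. All values are placeholders;
# the point is the shape: one canonical name, reinforced by sameAs
# links to the same entity on authoritative properties.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleBrand",  # canonical brand name, identical everywhere
    "url": "https://www.example-brand.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",  # placeholder entity ID
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
}
print(json.dumps(org_schema, indent=2))
```

In practice this JSON-LD would sit in a `<script type="application/ld+json">` tag on every page, with the same canonical name used in all of it.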
These are the signals the model relies on during the brand-naming step. If they are absent or inconsistent, the model defaults to the brands it has seen most frequently in recommendation contexts across the web it was trained on. Smaller, newer competitors with cleaner entity graphs routinely outperform larger brands here. Size does not protect you.
Layer 3: Earn third-party brand mentions in recommendation contexts.
The model learned your competitors' names from somewhere. That somewhere is authoritative third-party sources: analyst reports, press coverage, review platforms, partner pages, industry publications. Each of these reinforces the association between your brand name and your category in the training data future model versions will learn from.
PR is directly GEO-relevant in a way it has not been since the early days of PageRank. A mention of your canonical brand name in an H1 on a Gartner report or a TechCrunch article is not just a backlink. It is a training signal. It teaches the model that when someone asks about your category, your name belongs in the answer.
Prioritize coverage that uses your canonical brand name prominently and in recommendation contexts - not buried in a quote attribution. The model needs to encounter "[Brand] is a leading provider of X" on authoritative third-party domains, repeatedly, before it will recall your name at the moment of brand selection.
The Newest GEO Metric to Track
Competitive Ghost Citation Rate is your new GEO brand health KPI. Calculated as: competitive ghost citations as a percentage of total brand citations, tracked monthly, segmented by platform and funnel stage.
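The math is deliberately simple. Here's a sketch of the calculation, with illustrative counts rather than real data:

```python
# Competitive ghost citations as a percentage of total brand citations,
# per the KPI definition above. Counts here are made up for illustration.

def competitive_ghost_citation_rate(ghost_citations: int,
                                    total_citations: int) -> float:
    """Return the rate as a percentage; 0.0 when there are no citations."""
    if total_citations == 0:
        return 0.0
    return 100.0 * ghost_citations / total_citations

# e.g. 50 ghost citations out of 1,000 total brand citations this month
competitive_ghost_citation_rate(50, 1000)  # 5.0
```

Run it per platform and per funnel stage each month, and the trend line becomes the KPI.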
If that rate is trending down, your entity work is paying off. If it is trending up, your content investment is outpacing your brand investment and the gap is growing.
The brands winning in terms of fewest ghost citations → Industrial Services at 0.3%, Financial Services and HR Technology under 2% → got there through years of consistent brand investment that made their names the default answer in their categories. The AI learned the same associations the market already held.
That is the target state. Getting there means treating AI visibility as a brand problem, not a content problem, and accepting that the fix will be measured in model training cycles, not in days.
Your content is already doing the work. The question is whether your brand is getting the credit.
Data: Scrunch AI via Seer's SeerSignals. 20 brands, 541,213 LLM responses, 6 AI platforms, 5 funnel stages. February 2026. Claude and Meta excluded - neither platform returns citation URLs. Competitive ghost citation = brand URL cited AND brand not mentioned AND competitor mentioned in the same response.