
How LLMs Amplify Brand Misconceptions & How to Address Them With GEO

One of the biggest advantages of using AI for research is the elimination of unnecessary friction. Users no longer need to open multiple tabs or sift through dozens of sources - LLMs do that heavy lifting for them. 

According to OpenAI’s latest consumer report, information-seeking usage in ChatGPT grew from 14% to 24% between July 2024 and July 2025.

But while this shift makes research faster, it also introduces a new challenge: AI answers can easily amplify outdated or inaccurate information if brands don’t actively manage their presence.

The New Reality: Research in Generative Engines

LLMs pull together and regenerate information from scattered sources across the web. For brands, that means it’s more important than ever to understand how and where you’re being referenced. Customers may rely on those references in AI summaries to make real-world decisions.

We saw this firsthand when testing Gemini to compare Seer against another potential agency partner for a client.

What caught our attention was a recurring theme we’ve been monitoring for months in responses to the prompt “What is Seer Interactive’s reputation, or how is it perceived within the marketing industry?”

Over a three-month period, the phrase “high account manager turnover” appeared 67 times, making it one of the most common negative attributes mentioned about Seer.

The Risk: Misconceptions from Old or Isolated Data

What we found was that the same handful of review sites were consistently being cited as sources when LLMs pulled information about our brand.

Across five review domains, Clutch.co appeared most frequently, cited in 16% of the branded prompts we’re tracking, followed by AgencySpotter at 6%.

That’s when we uncovered the root cause of the “high account manager turnover” claim: a single, five-year-old client review that’s been duplicated across multiple sites.

Pages containing that one review show up in 38% of our branded prompts. And on three of the five review sites, including AgencySpotter, that review is the only one listed. But when you see how it’s referenced in an LLM, it reads as though multiple clients raised the same issue.


My goal in calling this out isn’t to debate whether that review is “fair.”

Bad reviews happen. But a single, outdated piece of feedback shouldn’t define a brand years later. 

Despite research showing that LLMs tend to favor recent content, this review continues to surface, shaping perception as if it were new.

A human reader might catch that the review is old or isolated - an LLM won’t. It strips out context and presents it as current and representative.

In an AI UX study, Seer found that 45% of AI interactions consisted of a single prompt and a single response.

In the same study, all users said they checked the accuracy of AI responses - but often only through gut checks, and only some actually visited outside sources.

That verification gap was never really an issue in the SEO world, where users clicked through to sources and could judge their age and context for themselves - but in AI search, it is.

The takeaway: small, outdated pieces of information can have an outsized influence on how AI perceives and presents your brand.


Addressing Your Brand’s Misconceptions with GEO

1. Monitor brand reputation

Generative engines can amplify misconceptions quickly if they aren’t monitored.

That’s why it’s helpful to run ongoing branded prompts - not to find the exact searches people are using, but to spot themes in how your brand is being represented.

When I’m tracking branded prompts, I’ll include variations like “What are the weaknesses of [Brand]?” The goal isn’t to assume users are typing that in directly but to understand what information an LLM might surface when a user’s question triggers content about your brand’s weaknesses.

In the first example, we weren’t specifically asking for Seer’s strengths and weaknesses compared to the other agency, but Gemini included that information anyway because it’s likely useful to a user at that decision-making stage.

Cast a wide net and treat these prompts as directional signals. They won’t tell you exactly what users are searching for, but they’ll reveal how AI systems interpret and present your brand in real-world contexts.
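
If you want to operationalize this monitoring, here’s a minimal sketch of what an ongoing branded-prompt check can look like in code. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in your environment; the prompts, model name, and theme keywords are illustrative placeholders, not the exact set we track.

```python
# Minimal sketch: run a set of branded prompts and tally recurring themes.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set.
# Prompts, model, and theme keywords below are illustrative placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()

BRANDED_PROMPTS = [
    "What is Seer Interactive's reputation within the marketing industry?",
    "What are the weaknesses of Seer Interactive?",
    "Seer Interactive vs. other agencies: which should I choose?",
]

# Hypothetical themes to count across responses.
THEMES = ["turnover", "pricing", "communication", "reporting"]


def run_prompt(prompt: str) -> str:
    """Send one branded prompt and return the model's answer text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""


theme_counts: Counter[str] = Counter()
for prompt in BRANDED_PROMPTS:
    answer = run_prompt(prompt).lower()
    for theme in THEMES:
        if theme in answer:
            theme_counts[theme] += 1

# In practice, append these counts to a spreadsheet or dashboard
# and compare them month over month to spot shifting themes.
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} of {len(BRANDED_PROMPTS)} responses")
```

Run it on a schedule (weekly or monthly) and watch the counts over time rather than reading too much into any single response.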

2. Create a review strategy

Brands that want to protect their reputation in AI search need to treat reviews as a living data signal - something that requires ongoing attention, not a one-time effort.

A modern and LLM-relevant review strategy should include:

  • Monitoring which review domains (Clutch, G2, Trustpilot, etc.) appear most often when your brand is referenced in AI tools - a simple tallying sketch follows this list.
  • Encouraging recent, authentic reviews to accurately represent how your brand is perceived by a variety of customers.
  • Refreshing key profiles from vendors and partners regularly so LLMs pull from current, accurate information.
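
For that first bullet, the tallying pass can be as simple as the sketch below. It assumes you’ve already exported the source URLs cited in each tracked AI response (for example, from a prompt-tracking spreadsheet); the sample URLs and domain list are made up for illustration.

```python
# Minimal sketch: count how often review domains appear among cited sources.
# Assumes you've exported the URLs cited in each tracked AI response;
# the sample data and domain list below are illustrative only.
from collections import Counter
from urllib.parse import urlparse

# Each inner list holds the URLs one AI response cited (made-up examples).
cited_urls_per_response = [
    ["https://clutch.co/profile/some-agency", "https://www.example.com/blog"],
    ["https://www.g2.com/products/some-agency/reviews"],
    ["https://clutch.co/profile/some-agency"],
]

REVIEW_DOMAINS = {"clutch.co", "g2.com", "trustpilot.com", "agencyspotter.com"}

domain_hits: Counter[str] = Counter()
for urls in cited_urls_per_response:
    # Deduplicate per response so one answer citing a domain twice counts once.
    domains = {urlparse(u).netloc.removeprefix("www.") for u in urls}
    for domain in domains & REVIEW_DOMAINS:
        domain_hits[domain] += 1

total = len(cited_urls_per_response)
for domain, hits in domain_hits.most_common():
    print(f"{domain}: cited in {hits}/{total} responses ({hits / total:.0%})")
```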

We know brands have different philosophies when it comes to requesting reviews. At Seer, we hate asking for them - in 23 years, we’ve never asked clients for public reviews - but with the changes we’re seeing in AI search, it feels important to adapt.

3. Tackle misconceptions with branded content

While that single review showed up as a source in 38% of 1,152 branded prompt outputs, our own website was cited at a rate of 286% across those same outputs - meaning multiple Seer URLs were often cited in a single response.
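
To make that math concrete, here’s a tiny back-of-envelope sketch. The citation total below is reconstructed from the percentages above rather than pulled directly from our dataset, so treat it as illustrative only.

```python
# Toy illustration: a citation "rate" can exceed 100% when you divide total
# citations by the number of outputs instead of counting outputs that cite
# the domain at least once. Numbers are back-calculated from the percentages
# in this post, not a separate dataset.
outputs = 1152               # branded prompt outputs tracked
seer_url_citations = 3295    # ~286% of 1,152, i.e. roughly 2.9 Seer URLs per output

print(f"{seer_url_citations / outputs:.0%}")  # -> 286%
```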

That’s an important signal: we have the ability to own and influence our brand narrative directly through our own content.

While LLMs sometimes describe Seer Interactive as having high turnover, our actual retention rate over the past 12 months is 79.2% - implying roughly 20.8% turnover, which aligns with the industry average based on our prior research and external benchmarks. Most agencies see a 20–30% turnover rate, as referenced in Campaign US’s Agency Performance Review 2025.

Since Covid, our People Team has invested significantly in strengthening retention, including:

  • Enhancing performance reviews for greater accuracy and equity
  • Evolving our feedback process, delivery, and types of feedback
  • Maintaining our flexible, remote-first work model
  • Optimizing and expanding our benefits offerings
  • Using Energage results and feedback to guide meaningful changes

If you’re noticing misconceptions about your brand in AI tools, the best move isn’t to ignore them - it’s to publish authoritative, up-to-date content that addresses those points head-on. Over time, that’s what helps LLMs surface your version of the story.
