Insights

GEO Experiment: How AI highlighted the 1 bad review we got in 24 years

How this GEO experiment came to be

Nick Haigler alerted me to a review that was showing up in a ton of the branded prompts he was tracking for Seer. So I went into my friendly neighborhood LLM to see what it said when we asked about our company's weaknesses.

There it was ... Seer has high account manager turnover. I had no metrics to know if we did or didn't, yet AI was saying it. So we ran another experiment to test our own brand, like we did back here when we tested updating our footer.

I was intrigued. Marketers have been alerting me to this for 6-7 years, but I never cared, assuming humans would be the ones to see that review and think: 1 bad review, reads like someone with a bone to pick. I bet on the humans; they would know the difference.

These review sites are the truck stop bathroom wall of how to pick an agency.

But in an AI world, I now have to care, because it's a bot giving answers, not a human who can process... 1 bad review, 24 years, they are probably fine.

Nope, we have bots now telling people about your brand. We've watched people interact with results: they one-shot prompts and way too often believe what they hear.

We track tons of branded prompt outputs about Seer Interactive, so I was surprised to see that, and we dug into the citations. There it was, site after site after site, the same language. The review was posted in 2018, we incorporated in 2002, and it is 2026.

As Amanda Natividad said:

We are entering an era where third party content about you might matter more than the content you produce about yourself… and we're learning just that as we test this on our own site. 


They were all posted at the same time, by the same person, taking a "never in my 10 year career" + "the owner's behavior is abhorrent" angle.

Was the negative review accurate? Who cares.

Was it representative of the 500+ clients we've worked with over the last 24 years? Not at all, but here we are, with it showing up in at least 1 in 3 branded prompts.

Because it existed, and AI goes deeper than humans do, AI found it.


When AI generates a response about any brand, it's hardwired to give pros and cons. That review is a CON.


If the cons are thin, it goes fishing, deeper and deeper, and now that one-off no one really saw before is getting surfaced.

AI today is not smart enough to recognize that one negative review across 24 years of existence is actually a testament to something. Instead, it treats that single data point like a trend, which by default amplifies it.

The sources AI pulls from to talk about your brand aren't the ones you'd expect. Sometimes sites that aren't that popular get pulled in, especially in deep research, where I have seen 150+ sources pulled. Directories that exist purely to rank for brand searches are now showing up. Before AI, those sites were invisible, stuck on page 4 or 5 of Google; today they ended up in 38% of our branded answers.

I'm not here to litigate what happened. This is about optimizing your brand in a world of AI and how to go on the offense, so LFG!


Over three months, we tracked prompts like:

"Tell me about Seer Interactive"

"What's Seer Interactive's reputation?"


FYI after observing more and more people use AI to get answers, I have a new belief about the branded prompts you should track.

The phrase "high account manager turnover" appeared 67 times.

We traced it back. The same five review domains kept getting cited: Clutch appeared in 16% of our branded prompt outputs, AgencySpotter in 6%. On three of those sites, the only review listed was that same negative review, posted to each one.
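For anyone who wants to run this kind of audit on their own brand, here's a minimal sketch of the tally we're describing. The data and field names below are hypothetical (not our actual tracking stack): count how often a phrase shows up across saved LLM answers, and what share of answers cite each domain.

```python
# Sketch of a branded-prompt audit over saved LLM outputs.
# `answers` is hypothetical example data: one record per prompt run.
from collections import Counter

def phrase_count(answers, phrase):
    """Total occurrences of `phrase` across all saved answers."""
    return sum(a["text"].lower().count(phrase.lower()) for a in answers)

def citation_share(answers):
    """Fraction of answers that cite each domain at least once."""
    hits = Counter()
    for a in answers:
        for domain in set(a["citations"]):  # de-dupe within one answer
            hits[domain] += 1
    return {d: n / len(answers) for d, n in hits.items()}

answers = [
    {"text": "Cons: high account manager turnover ...", "citations": ["clutch.co"]},
    {"text": "Generally well reviewed.", "citations": ["clutch.co", "agencyspotter.com"]},
    {"text": "Strong retention, few complaints.", "citations": []},
]

print(phrase_count(answers, "account manager turnover"))
print(citation_share(answers))
```

Run something like this over a few months of saved outputs and the "67 mentions, 16% citation share" style numbers fall out directly.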

When an LLM sees the same claim on multiple sites, it doesn't flag it as potentially duplicated. It reads it as corroboration. This is why listicles work, and you know how I feel about listicles: I think they are the "you have a loan waiting for you" spam.
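If you want to check whether that "corroboration" is really one review copy-pasted around, a quick sketch (made-up data, hypothetical field names) is to normalize each review body and group the domains carrying an identical copy:

```python
# Flag "corroborating" sources that are really one duplicated review:
# normalize review text, then group domains by identical body.
import re
from collections import defaultdict

def normalize(text):
    """Lowercase and collapse punctuation/whitespace so near-identical copies match."""
    return re.sub(r"\W+", " ", text.lower()).strip()

def duplicate_groups(reviews):
    """Map each review body (normalized) to the set of domains carrying it,
    keeping only bodies that appear on more than one domain."""
    groups = defaultdict(set)
    for r in reviews:
        groups[normalize(r["text"])].add(r["domain"])
    return {t: d for t, d in groups.items() if len(d) > 1}

reviews = [
    {"domain": "clutch.co", "text": "Never in my 10 year career..."},
    {"domain": "agencyspotter.com", "text": "Never in my 10-year career..."},
    {"domain": "example-directory.com", "text": "Great team, real results."},
]
print(duplicate_groups(reviews))
```

One review surfacing on three sites collapses into one group here, which is the signal the LLM never checks for.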

The first step in going on the offense is defensible data

Our team member retention rate is 79.2%, which was in line with the numbers a lot of the deep research queries we ran were surfacing.

Yet we never mentioned that, or the awards we've won for best places to work from Newsweek, Inc. Magazine, the Philly Inquirer, and Ad Age.

  


Step 1: Make your data public

We published a post addressing it directly. If the AI is going to say "be careful about X" about your brand, we might as well give the AI some actual data. If you don't like the numbers, publish anyway and commit to improving them.

In order to maintain trust with our audience, we just updated our client retention numbers, which went from 97% to 92% (you can check our footer). We can't post the numbers when they are good and then avoid them when they dip.

Step 2: Don't go to AI, go to humans, and find out what you are actually doing

We interviewed our People team on what we do to retain team members, adding the actual changes we made to the article.


We had two goals: help our audience understand how to handle this for their own brands, and actively correct the record for ours.

Step 3: Ask for reviews, we hated it but we did it for the first time ever

Begrudgingly, we started asking clients to leave a review, and a few did. FYI, Seer's longest-standing client is 14 years!

Positive information about your business is stuck in Slack, finance, HR, etc. If you, as a marketer, don't go digging for it, you are on defense.

That is 14 years of them picking us, re-picking us, over and over again. We have another client at 13 years, several at 10+ years. 

We have at least 8 alumni who have hired us at some of the world's biggest companies. If that isn't a "vote of confidence," I don't know what is: the people who saw it from the inside hiring us at their current companies. And then there are the alumni referrals.

Even as I write this, I think of a current client who is on her 5th company with us: 0 "positive" reviews, but 5 contracts worth well over $10 million over the years, the latest with a major division of one of the biggest AI companies on the planet.

We had all these stories, all these wins, and we weren't great at making them public. Your new job is to track your brand and be a detective.

 

What happened when we posted about our turnover publicly?

It worked. Fast.

The day we published, Perplexity cited it immediately

After just two citations, LLMs stopped referencing "high turnover" altogether

ChatGPT and AI Overviews hadn't yet cited the post — but they also stopped surfacing the misconception

WE WON!  Not so fast.

In this ugly chart, you can see the blue line: it dropped and then came back.
 

The fix didn't stick: it didn't improve the AI answers long term

Perplexity changed how it sourced information. Citations to our post dropped sharply and by February had essentially disappeared.

The misconception didn't come back, but our 79.2% retention stat stopped getting mentioned. The old reviews stuck around, still just one bad review. We republished an updated version of the post. The citations came back. The retention rate started getting mentioned again.

But the honest read is this: a single blog post is not a durable fix. It's marketing "whack-a-mole". We put something up, update it, and it sticks for a few months; then boom, it loses steam. We could just keep "updating the post," but that isn't what we want. That is whack-a-mole.

Time to go back on the offense, part 2

If you don’t own your branded search, someone else will!

Our 79.2% retention rate didn't exist anywhere else on the web. That specificity mattered and the recency of publishing seemed to matter too.

But we can't keep re-publishing the same post and calling it a strategy. I'll let listicle builders do that low quality shit.

We're building a centralized source of truth - a dedicated page that consolidates our retention rate, engagement data, and team stats in one place.

Easy to update. Easy to cite. Designed to be the authoritative URL on this topic rather than a blog post that ages.

This is what it will look like. We've always sucked at tooting our own horn, but now, if we don't get better, we have to admit that we're allowing a single person's bad experience to be mentioned over and over and over again in an AI world. If you don't go after these with your real data, then you are allowing it to exist.

 

We're also doing what we probably should have done years ago: getting more recent reviews onto the sites that LLMs actually pull from. Not fake ones. Not volume for volume's sake. Real client feedback that reflects who we actually are, not 1 person's posts on 7 different sites.

Our internal winning chat is FULL of client quotes; we're going to start being better about publishing those. As Nick says, we need a "single source of truth" on Seer by the numbers.

I've thoroughly enjoyed following Harpreet, who showed a similar strategy and how he went after it for a client:

  

Your brand narrative is being written right now, in AI systems, by whatever sources those systems find most relevant and recent.

A good brand narrative has balance. Saying "everything is great" isn't a balanced approach, and the AI models seek out that balance.

Unfortunately, the desire for balance means they don't "think": this company has been around for how long? They dig, and dig, and dig for something negative-ish to create balance.

Dear Marketer, Be B-Rabbit -  I am white, I am a bum, I do live in a trailer with my mom.

  

As a marketer in this world of managing your brand, you are B-Rabbit; you might as well pre-empt what "they" are going to say about you by taking ownership and going on the offense.

If not, you let the spammers do this and own the narratives about your brand. Are you going out like that?

If "winning" on AI is listicles and shitty comparisons, then I got another line for you...

I'd rather "not win" in business if that is the new definition of winning. But I think I can pull a "b-rabbit" outta the hat and find a way to beat this with something higher quality.

We love helping marketers like you.

Sign up for our newsletter for forward-thinking digital marketers.