What if there was a Google algorithm shift that you didn’t even know about?
With Google Search, we knew about (most) algorithm updates because Google announced them.
But when it comes to AI search, the metrics are murky, and the behind-the-scenes changes aren’t readily shared.
The ‘Dark Algorithm Updates’ of AI Search
These unannounced, behind-the-scenes shifts are what I’m referring to as ‘dark algorithm updates’.
We’ve been analyzing Gemini data for 6+ months and recently discovered an abrupt drop in citations — a whopping 23 percentage points. Here’s what we found and why it matters to your brand.
With Google Search, major algorithm shifts are named and covered within days, whether it’s Google letting you know directly or sites like MozCast keeping us aware of changes that can impact our performance.
Gemini’s citation rate dropped from 99% in February to 76% in March
In a dataset of 82,000 responses that we’ve been monitoring since November 2025, we found that Gemini’s citation rate decreased sharply over a two-week period from February 16 to March 2.

This dataset represents 20 different brand workspaces that we’re monitoring in Scrunch across multiple industries and verticals. While this is not meant to represent an overall change in how Gemini is performing across the board, it’s worth monitoring.
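For reference, the citation-rate metric itself is simple to reproduce. Here’s a minimal sketch of how a trend like this could be computed from a response log, assuming a hypothetical export with one row per Gemini response (the column names are illustrative, not Scrunch’s actual schema):

```python
import pandas as pd

# Hypothetical export: one row per Gemini response, with a response date and a
# count of cited external sources (column names are illustrative).
responses = pd.read_csv("gemini_responses.csv", parse_dates=["response_date"])

# A response "uses citations" if it references at least one external source.
responses["has_citation"] = responses["citation_count"] > 0

# Share of responses with citations, per week.
weekly_citation_rate = (
    responses
    .set_index("response_date")
    .resample("W")["has_citation"]
    .mean()
    .mul(100)
    .round(1)
)

print(weekly_citation_rate)  # a drop like 99 -> 76 shows up as a step change here
```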

We also discovered ChatGPT changing its model behavior 46 days before ads were announced.
11 of 20 analyzed brands experienced meaningful shifts in visibility depending on whether Gemini used citations
Here, a ‘meaningful shift’ means the brand’s average visibility across its prompt set moved by more than 5pp between responses where Gemini used citations and responses where it did not.
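As a rough illustration, the check behind that threshold could look something like this (the dataframe columns are hypothetical, not our actual schema):

```python
import pandas as pd

def meaningful_shift(responses: pd.DataFrame, threshold_pp: float = 5.0) -> pd.DataFrame:
    """Flag brands whose average visibility differs by more than threshold_pp
    between Gemini responses with and without citations."""
    visibility = (
        responses
        .groupby(["brand", "has_citation"])["brand_visible"]
        .mean()                          # share of responses where the brand appears
        .mul(100)                        # convert to percentage points
        .unstack("has_citation")         # one column per citation state
        .rename(columns={True: "cited_vis", False: "uncited_vis"})
    )
    visibility["shift_pp"] = visibility["cited_vis"] - visibility["uncited_vis"]
    visibility["meaningful"] = visibility["shift_pp"].abs() > threshold_pp
    return visibility
```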
The table below shows brand visibility in Gemini with and without citations for non-branded prompts from February 23, 2026 to March 16, 2026 (note: not all 20 brands are shown in the chart below).

Citation declines ranged from -12pp to -92pp
For one ecommerce brand, Gemini historically used citations in 96% of responses. Over the course of one week this dropped to just 3.7%.
Why do we think this occurred? Those responses in Gemini were previously structured as shopping comparisons with cited retailer links. Now, they’re structured more as product recommendations.
This has the potential to affect the ease of purchase for users and any attribution for retailers. Instead of a clear link for shopping, the user would have to return to the search results to find a better path to the product.
Recommendation-style prompts saw the most significant shifts
Prompts asking for “best” and “top” recommendations (e.g., “Best X in [state]”) were where Gemini pulled back most on external sources. Educational “how to” prompts retained 100% citation usage throughout the month.
This directly ties to the sites that experienced the most citation decreases…
“Best of” listicle articles declined in citations by 40%
Editorial sites had the sharpest category-wide decrease in citations from Gemini, which ties directly to research we shared last month showing that AI models are citing listicles less than before.
Forbes, Medium, NYMag, and Good Housekeeping experienced the largest decreases in this category.
| Site | Feb 16 | Feb 23 | Mar 2 | Mar 9 | PP Change (Feb 16 to Mar 2) |
|---|---|---|---|---|---|
| Medium | 12.3% | 4.8% | 2.7% | 2.2% | -9.6pp |
| Forbes | 8.8% | 6.9% | 1.7% | 1.8% | -7.1pp |
| Good Housekeeping | 3.7% | 6.5% | 1.3% | 2.4% | -2.4pp |
| NYMag | 1.3% | 1.9% | 0.0% | 0.1% | -1.3pp |
The continued shift pushes us closer to (hopefully) the end of self-serving listicles. Building a content strategy around category landing pages and “best of” rankings without comprehensive, methodical information will not be a sustainable ‘quick win’ much longer.
Listicles get a bad rap, but not all of them are inherently bad: in our research we also identified characteristics of listicles that are actually growing in AI citations.
At the same time, Reddit and Wikipedia were able to retain consistent citation rates of 44% and 33%, respectively, while Gemini was pulling the plug on other sites.
Our takeaway: Gemini is clearly in the midst of remixing external sources, relying on its historical preference for Reddit’s UGC and Wikipedia’s authority in the meantime.
YouTube had the steepest decline overall at -15pp
This one was surprising to me. After consistently growing month by month since November 2025 and being cited in 18% of all responses in February, YouTube’s citation ownership declined to just 3%.

Gemini changed its response structure at the same time
The citation decrease didn’t happen in isolation. At the same time, Gemini’s response format shifted to a structured template: headings now appear in 99.5% of responses (up from ~3%), markdown tables in 52% (up from 0%), and horizontal rules in 95%.
Average response length also shifted, decreasing by 15% from ~559 to ~477 words per response in March. While the new format is more visually structured, Gemini is delivering less total content per response.
[Image: side-by-side comparison of the two Gemini responses]
On the left is our later response from March 14, 2026, at 441 words; on the right is our response from two weeks earlier, unstructured and with citations, at 508 words.
If you take a look at these responses side-by-side, the way the data is presented is very different.
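If you want to watch for this kind of format shift in your own monitoring, counting the structural elements is straightforward. Here’s a quick sketch; the patterns are illustrative rather than the exact methodology behind the numbers above:

```python
import re

def response_structure(text: str) -> dict:
    """Count the markdown elements Gemini started adding: headings, tables,
    and horizontal rules, plus overall length."""
    lines = text.splitlines()
    return {
        "word_count": len(text.split()),
        "has_heading": any(re.match(r"#{1,6}\s", line) for line in lines),
        "has_table": any(re.match(r"\s*\|.*\|\s*$", line) for line in lines),
        "has_horizontal_rule": any(re.match(r"\s*(?:-{3,}|\*{3,})\s*$", line) for line in lines),
    }

# Applied across a month of responses, the share of responses where each flag
# is True gives percentages like the ones above (e.g. headings in 99.5%).
```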
I’m thinking of these elements as similar to traditional Google Search SERP features. Changing the layout of responses entirely makes this shift feel like a deliberate product decision rather than a one-off adjustment.
So why does this matter?
Brands investing in AI search are primarily tracking two metrics: brand visibility and brand citation ownership. And while we believe there are 15 other KPIs to measure brand performance in AI Search, visibility and citations are a good starting point.
As checking AI metrics becomes routine, it can be easy to monitor them just as we would our SEO performance: check which keywords (prompts) are increasing in ranking (visibility) and how pages (citations) are trending.
The issue is that while SEOs primarily own GEO reporting and the channels share similarities, there are major differences to stay aware of.
Dark Algorithm Updates: Training Data vs Web Data
When Gemini doesn't use external sources, it's running almost entirely on training data.
No citations means no external sources are supplementing the response: the answer draws only on whatever the model learned before its cutoff. Gemini 3's training data cuts off in January 2025. That's over a year of brand activity it can't see.
The gap has real consequences. A brand that spent the past year building third-party editorial presence may see no improvement in uncited responses. A brand that had a rough stretch in 2024 but recovered in 2025 may still look worse than it actually is.
This can lead to having two separate brand surfaces in AI models: the brand you’re viewed as when models use updated information, and the brand based on a previous version of you.
If you're not monitoring citation rates, you'll misattribute visibility swings and spend hours chasing answers that don’t exist.
That misattribution runs in both directions. A 30-percentage-point drop in brand visibility month over month is going to trigger questions. And because of the ‘dark algorithm updates’, we don't have access to the underlying signals driving shifts in models like Gemini.
On the opposite side, if you’re running GEO experiments and see a 27pp visibility increase, that’ll seem like a win. The next step may be to call it a validated test and scale it across the site. But if you haven't isolated the variable, you may be scaling the wrong thing entirely.
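One practical sanity check before calling a test validated: split the before/after comparison by whether Gemini cited external sources. Here’s a rough sketch of that check, again with hypothetical column names:

```python
import pandas as pd

def visibility_change_by_citation(responses: pd.DataFrame) -> pd.DataFrame:
    """Compare before/after visibility separately for cited and uncited
    responses, so a change in Gemini's citation mix isn't mistaken for the
    effect of the experiment."""
    vis = (
        responses
        .groupby(["has_citation", "period"])["brand_visible"]
        .mean()
        .mul(100)
        .unstack("period")                # columns: "before", "after"
    )
    vis["change_pp"] = vis["after"] - vis["before"]
    # If the lift vanishes within each citation segment, the overall "win"
    # likely came from Gemini citing more or less often, not from the test.
    return vis
```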
We’re still in the early stages of AI Search… so DIG!
Dig into those questions you may have about your performance. If something looks off, there’s a good chance it is.
Gemini Citation Decrease FAQs
- Did Gemini decrease its usage of citations in responses?
Yes. Across a dataset of 82,000 responses, Seer found that Gemini’s citation usage decreased by 23 percentage points, impacting brand AI performance metrics and user experience within the AI platform.
- How do I know if Gemini's citation changes are affecting my brand's visibility?
The clearest signal is to monitor citation usage alongside brand visibility, similar to what is shown in the Gemini Brand Visibility: Cited vs Uncited table above. Look at these averages over the past three months: how is each trending over time? If you’re seeing a noticeable shift from February to March, that’s a sign your brand may be represented differently when Gemini leverages different mixes of training data and external sources.
- Should I still try to get my content cited by Gemini if citation rates have dropped 23pp?
Yes, but citation ownership is not the entire goal. Create content with the purpose of providing value to users. If this content is well-structured, technically sound, recent, and provides unique information, you’ll be putting yourself in a strong position for citations across AI platforms.
- Is this citation change permanent, or could Gemini reverse it?
Gemini can reverse this shift at any moment. ‘Dark algorithm updates’ are as prevalent as ever, which is why continual monitoring is critical for any AI Search strategy.
Want help measuring and improving your brand's visibility across Gemini and other AI models? Let's talk.