I graduated with a graphic design degree in 2009, just as the bottom fell out of the industry I thought I was entering.
Print was collapsing, the Great Recession had frozen hiring, and every “entry-level” posting suddenly wanted someone who could design and build for the web. I had one web design class to my name and a portfolio full of print work, and I was trying to survive in an industry reinventing itself in real time.
What got me through wasn't mastery. It was adaptability.
I think about that period a lot when I look at entry-level design roles today, because the shift happening now feels even bigger.
In 2009, the floor moved underneath us. Today, the floor feels like it’s disappearing entirely. AI can generate wireframes, layouts, designs, and production-ready components in seconds. The execution-heavy tasks that once helped designers learn the craft are being automated before many people even get the chance to practice them professionally.
The designers who adapted in the late 2000s succeeded because they learned to apply fundamentals differently, not because they abandoned them. The question you should be asking yourself is: Is your design team building the habits to lead in an AI-driven world?
The Shift from UX to AX
As AI changes how people find, evaluate, and interact with digital products, the conversation surrounding design has moved beyond optimizing for bots and humans to optimizing for AI agents acting on humans’ behalf.
We call this the move from designing for the user experience (UX) to designing for the agent experience (AX). Where UX focuses on how a human navigates and responds to a product, AX focuses on how an AI agent interprets, synthesizes, and acts on content on that human's behalf.
While AX is becoming an irrefutable piece of the larger search puzzle, the companies that default to replacing design with solutions that optimize purely for shipping speed are cutting themselves off at the knees on performance and visibility, and accelerating their own irrelevance.
That’s because the design thinking (the human-centered, iterative approach to problem-solving) they're automating away is exactly what makes products memorable to users and credible to AI agents.
Think, Make, Break, Repeat
So how does a team navigate this new world of UX/AX design?
Our strategy is centered on a framework that’s always guided how we work:
Think, Make, Break, Repeat.
It’s not a linear process or a project checklist, but a mindset for doing meaningful work in conditions that keep changing.
At an SEO-focused agency like Seer, thinking beyond the human user has always been part of our DNA. Design wants clean, intuitive experiences for our users. SEO wants crawlable structures, keyword-rich copy, and content hierarchies tuned for algorithms. Our methodology has always helped us find common ground in these friendly negotiations.
We may have expanded our user group to also include AI agents, but the approach doesn't change.
Think: Understand the New Landscape
As generative search rewrites the awareness stage of the customer journey, your content isn't just being read by human users anymore. It's being interpreted, synthesized, and acted on by AI agents who are deciding what to surface, what to recommend, and what to ignore. Users are interacting with those agents through prompts, and each agent has its own interpretation layer.
Our Digital Diary Study and AI Search Usability Testing analysis from earlier this year found that people are approaching AI as a collaborative presence, not just a “smarter Google.” They’re writing fully formed questions with personal context: their role, company size, tech stack, and goals. That level of specificity in the prompt demands an equivalent level of specificity in the response, making one-size-fits-all content a barrier to your brand fitting into the conversation the user is already having.
There’s almost no second chance if your generic content fails to turn up in a result. Our research found that 42% of AI sessions are one-and-done: a single prompt, a single response, and the user moves on. If your brand isn't part of that first response, or if it shows up and doesn't feel credible, the moment is gone.
So what? Users are expecting more personalized content, and that content is being rewarded when agents can clearly understand the message. Understanding audience intent has always been table stakes to resonate with your users. This means user testing that allows you to understand how your brand interacts with AI is critical for staying competitive.
We can automate tracking AI prompts, but interpreting what users actually mean (the gap between what they prompt and what they need) is design thinking work. Your UX team brings the irreplaceable human judgment required to act on it.
Make: Design Teams are an Irreplaceable Strategic Asset
With the stakes being so high, it’s tempting for brands to turn to a version of AI-accelerated design that treats our work as a faster output machine that generates more mockups, ships more variants, and iterates on autopilot.
But more content doesn’t automatically translate into better content, especially in the eyes of LLMs.
In our study that analyzed 541,213 LLM responses across 20 brands, we found that brands whose content gets cited by LLMs, but whose names don't get explicitly mentioned, are experiencing what are called ghost citations. This is when your content shows up in an LLM's answer to a prompt, but your brand name is nowhere to be found, making you invisible to potential customers at exactly the moment they're deciding who to trust.
The numbers are stark. When a brand is mentioned in a response, its citation rate jumps to 53.1%. When it's not mentioned, that same brand's citation rate drops to 10.6%. That 5x gap is a brand specificity and structure problem, and design thinking can close it.
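The size of that gap is easy to sanity-check. A minimal sketch, using only the two citation rates reported in the study above:

```python
# Citation rates from the LLM-response study cited above.
cited_when_mentioned = 0.531      # brand is named in the AI response
cited_when_not_mentioned = 0.106  # brand is absent from the response

# Relative lift: being named in the answer multiplies your odds of citation.
lift = cited_when_mentioned / cited_when_not_mentioned
print(f"{lift:.1f}x")  # → 5.0x
```

That ratio is where the "5x gap" figure comes from: a brand that earns an explicit mention is roughly five times more likely to also be cited.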
AI is building its list of brands worth knowing. If your content is informing the conversation between buyers and LLMs, but your name doesn't appear in the answer, you are funding your competitor's first impression.
When AI tools help us compress production timelines, designers have more capacity to be in the room where the strategy gets made: to ask questions about the “why” of a project, and to connect a layout decision to a real user goal or a content structure choice to an AI parsing behavior. That’s the version of design that actually makes a difference for clients and becomes harder to replace or deprioritize.
So what? Research, audience modeling, and understanding what users actually need don't compress just because code generation got faster. By investing in AI tools that augment your design team instead of automating it away, you can use design as a strategic and executional powerhouse for the content that will reach your target audiences and build your brand equity.
Break: Test Everything, Especially Your Assumptions
Here's what's actually happening inside the companies that replaced design teams with automated output pipelines: content volume is up, brand specificity is down, and their awareness-stage content (arguably some of the most important, since it's where buyers start deciding which brands can meet their needs) is seeing rising ghost citation rates.
When you can generate and deploy faster, you can also create problems faster: a content structure that confuses an AI agent, an interaction pattern that works for one user segment and causes friction for another, a personalization model that optimizes LLM visibility while degrading the user experience. These are the kinds of issues that surface when you move fast without a plan to learn.
Meanwhile, the brands that are showing up with the lowest ghost citation rates got there through years of consistent brand investment that made their names the default answer in their categories. They are using design thinking to solve problems for users in a way that feels familiar, yet novel, by truly understanding what their users need.
So what? Our team's job is to build feedback mechanisms that constantly test assumptions and validate what's actually working. User research, heuristic reviews, behavioral analysis, and quality checks on AI-generated output are all part of a successful UX and AX strategy because they're human roles that automation structurally can't fill. We evolved our tried-and-true CRO experimentation approach into a methodology that considers UX and AX together.
Repeat: The Loop Is the Strategy
In an AI-first world, the competitive advantage belongs to those who can learn the fastest and translate those learnings into better user experiences. That requires UX designers who are curious, communicative, and connected to outcomes beyond the individual deliverable.
AI tools alone iterate on what they are trained to optimize, while designers can change what they're optimizing for in real time, based on judgment, context, and client relationships.
Think, Make, Break, Repeat is more than a project phase model.
It’s a way of working that acknowledges you won’t get it right the first time. And that’s fine, as long as you keep learning and use your team’s human expertise to validate and iterate alongside AI workflows.
Constantly looping through thinking, making, and breaking our own ideas is the clearest path for building a collaborative and functional system.
What Happens Next?
The companies that continue to embrace frameworks like Think, Make, Break, Repeat are on track to lead their categories and be discovered through AI, trusted by users, and cited by agents. The ones that replace design teams with vanity metrics around automation are on track to be invisible where it actually counts.
If you want to close that gap for yourself, start with a better process that incorporates elements of design thinking:
- Invest in User Research: The teams generating the strongest AI visibility are the ones still funding qualitative research, not the ones who cut it as a cost center.
- Commission a Specificity Audit: Dig in with your team to discover where generic content is leaving you invisible to the personas you're trying to reach.
- Demand Visibility Data Across Platforms: If your team can't tell you where you appear in LLMs like ChatGPT, Gemini, Claude, and Perplexity, that's the first investment to make.
- Protect Your Designers' Seat At The Strategy Table: The compressed production timeline AI offers is only valuable if it buys your design team more time for the questions that matter:
  - Who are our users (both human and machine), and what are they trying to accomplish?
  - How does this experience support a specific business goal?
  - Does the flow match how users actually make decisions?
  - Where does user trust break down?
The question every executive should be asking is this: are you funding workflows that reinforce brand equity over time, or output volume that's quietly making you invisible in the rooms where buyers are now making decisions?
Denise Baginski
Lead, UX