We know AI is changing the way professionals search and work. We don’t always know exactly how AI is changing these activities, but our tracking data, hypotheses, and our own behavior tell us that shifts are happening constantly.
While AI makes experiences more personalized and unique, that can be hard to see in the quantitative data. As a UX professional, I’ve seen a lower tolerance for generic value statements and messaging that isn’t user-centric. The problem is brands can’t reach the level of personalization needed if they don’t understand users.
To truly understand user behavior, we conducted an in-depth Digital Diary Study and AI Search Usability Testing. We wanted to understand how professionals were relying on AI — and what they thought about it.
TL;DR: Some of our findings were expected, but others surprised us:
- Nearly half of AI users name specific brands in their prompts: 43% of users cited brands by name in their AI conversations. If your brand doesn’t have an AI search presence, others will build the narrative for you.
- 42% of AI sessions are one-and-done: Users prompt once, take the output, and move on. Your content gets one shot to be clear, credible, and useful.
- Looking trustworthy matters as much as being trustworthy: Even though accuracy ranked #1 in importance, most professionals fact-check AI outputs by asking themselves whether the results feel right, not by cross-referencing sources.
- No one is loyal to a single AI platform: Users mix ChatGPT, Gemini, Copilot, Perplexity, and others based on access and task. If your visibility strategy starts and ends with ChatGPT, you're already behind.
- Users are talking to AI like a person: Fully formed questions, personal context (role, company size, tech stack), even polite gestures like "please" and "thank you." Generic content isn't built for that kind of specificity.
Methodology
We designed the study to explore real-world behavior and current perceptions of AI. We captured users’ prompts, their intent, and the context around each use, from routine communication to brainstorming and strategic planning.
Participants
Twenty-six business leaders took part in our research, representing a range of roles from manager to C-suite. All used AI several times per week and worked within enterprise organizations of 200 to over 10,000 employees. Importantly, none were technical professionals; they were Sales, Marketing, HR, and Operations leaders.
Methods
- Diary Study: 14 participants logged 99 AI interactions over seven days and completed six surveys covering mental models, needs, concerns, and expectations.
- Usability Study: 12 participants completed five tasks using both ChatGPT and Gemini, generating 120 encounters captured in screen recordings and think-aloud sessions.
Altogether, we analyzed 249 AI interactions and prompts, giving us insight into where AI can fit into daily work. Here are four key takeaways:
1. Users Are Getting Specific About Brand Names, Tech Stacks, and Other Details
In our study, almost half of the users (6 out of 14 participants) mentioned specific brand names in their AI prompts. Many also referenced their tech stack, organization size, and/or goals.

Nearly all users included details about their role, industry, or company in prompts when researching software solutions. For example:
- “Our 40-person remote team uses Google Workspace. How can we work more efficiently with those tools?”
Our study results show that professionals approach AI with the same context they bring to a meeting or strategy discussion. They expect responses tailored to their world, and brands need to adapt to be believed.
What this means:
People are using AI to personalize their experiences. Content, context, and strategies need to match this.
Brands need a clear understanding of who their target audience is and what they care about. Knowing what matters to users will help you match content and messaging with what LLMs are seeking.
This also means if your brand isn’t spending time on managing and shaping your AI presence, you’re letting others craft the narrative for you.
2. Keywords Are Out, Intelligent Conversations Are In
Google taught us how to search: over the years, we adapted our keywords and search terms to get what we wanted. Now, AI is letting us talk again.
Participants consistently described AI tools as intelligent partners — not databases. One called it “texting the internet.” Another described it as “the brain of the internet.”
Their language supported that idea. Instead of typing a string of keywords, users asked fully formed questions:
- Traditional search: “Hybrid workforce studies”
- AI prompt: “What are the latest insights on hybrid workforce trends from Deloitte and McKinsey?”
Even the tone shifted. The word “please” appeared frequently and consistently, signaling politeness and person-like interaction.
What this means:
Users are approaching AI as a collaborative presence, not a search engine. When users interact with AI, they treat it as a conversation, writing prompts closer to the language of social media questions — vastly different from the transactional language of traditional search queries.
This behavioral change is small on the surface but significant in meaning. Professionals are beginning to expect tools that respond to context, tone, and intent.
Are you tracking prompts based on assumptions, or do you really know how your audience is interacting with AI?
3. Workers Are Relying on Gut Checks and Primarily Using AI Once Per Task
Even as AI use grows, skepticism remains strong.
When ranking what matters most, participants placed accuracy first, followed by ease of use and privacy. All users said they verify AI output before relying on it, but when asked how they did this, most relied on their gut:
- “I reread it to see if it feels right.”
- “I check the sources, but mostly I ask myself if it makes sense.”
In other words, users rely on intuition more than verification, which was one of our most surprising findings. They perform quick “gut checks” rather than deep fact-checking.
For brands and publishers, this behavior has clear implications. Information must not only be correct but also feel trustworthy. Formatting, clarity, tone, and evidence all contribute to that perception.
This is especially true since we also saw a strong one-and-done pattern: despite the conversational setup, most professionals engaged only once per task.
Across the diary entries, 42.4% of all AI chat sessions involved a single prompt and a single response. Users then copied, edited, or refined the output on their own.
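As a rough illustration (not our actual analysis pipeline), a one-and-done rate like this can be computed by counting logged sessions that contain exactly one prompt. The session data below is hypothetical:

```python
# Hypothetical diary-log structure: each session is the list of prompts a user sent.
sessions = [
    ["Draft a follow-up email to a vendor."],              # one-and-done
    ["Summarize this quarterly report."],                  # one-and-done
    ["Brainstorm campaign names.", "Make them shorter."],  # follow-up prompting
]

# A session is "one-and-done" when the user sent a single prompt.
one_and_done = sum(1 for s in sessions if len(s) == 1)
rate = one_and_done / len(sessions)
print(f"{rate:.1%} of sessions were one-and-done")  # 66.7% for this toy data
```

In our diary data, the same count over all logged chat sessions yielded the 42.4% figure above.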

Follow-up prompting occurred mainly in research, creative brainstorming, or image generation tasks.
For marketers and UX teams, that means the first response carries all the weight. If clarity, accuracy, or relevancy are missing, the opportunity is lost.
What this means:
Trust isn't just about being right; it's about looking right.
If your content shows up in an AI response and it feels off, users won't dig deeper. They'll move on, and may even do so with a bad taste in their mouths.
That gut-check behavior means formatting, tone, and clarity, which may previously have been seen as nice-to-haves, are now requirements for being believed. Structured content, clear benefits, and confident language all increase the odds that your information passes the instinct test.
Don’t let other sources own your narrative, and don’t let your own content get misinterpreted. Detail pages and deeper-level pages need the same strong, scannable UX as your landing pages, for both machine and human visitors.
And if you aren’t showing up at all, you aren’t part of the consideration. The one-and-done pattern means that when your brand isn’t part of the response, users move on without ever considering you.
Being present in LLM responses means having clear, user-focused content across all stages of the user journey, and making sure your content works not just for citations but for brand mentions too.
4. AI Platform Loyalty Doesn’t Exist
While ChatGPT remained the most-used platform, participants frequently experimented with others, including Gemini, Copilot, Claude, and Perplexity.
Platform choice depended on three factors:
- Access: Whether a paid or enterprise version was available
- Device: Which tool was easiest to use on mobile or desktop
- Perceived strength: ChatGPT for writing, Gemini for visuals, Perplexity for quick research
One participant summed up the sentiment simply:
“I’d switch to whichever model can do it all or keeps my data safer.”
Loyalty to any single tool was minimal. The behavior reflects an exploratory phase where users are still defining what “good AI” feels like in their workflow.
What this means:
We are still in an early stage with AI adoption. There is no default AI tool yet. Users are mixing and matching based on what works best in the moment. If your strategy starts and ends with ChatGPT, you're missing where your audience is already going.
- Track your visibility across platforms, not just one. ChatGPT, Gemini, Google AI, Copilot, Claude, and Perplexity all surface content differently. You need to know where you're showing up and where you're not.
- Understand your users. Which platforms are they using, and for what? Clear, well-organized, entity-rich content performs across models, but knowing where your audience actually is tells you where to focus first.
- Treat this as a positioning window. Most users aren't loyal yet, and they aren’t limiting themselves to one tool. To stay relevant, you need iterative studies and constant listening: track visibility and user behavior over time, and run ongoing testing to keep up.
Stop asking how to rank in ChatGPT and start asking where your audience is getting answers, and whether you’re actually showing up there.
Strategic Implications for Marketing and UX
- Follow the Conversation
- Talk to your users. Understand what problems they are truly looking to solve, how they use AI, and why.
- See real user prompts. Our AI User Testing gives us firsthand clarity into people's behavior, strengthening GEO.
- Look to tools like SparkToro to understand what AI tools are relevant to your audience right now.
- Design for Personalization
- 42% of sessions are one-and-done. You don't get a second chance to shape the answer.
- Users are loading prompts with personal context — their role, industry, company size, tech stack. Audit your key pages for content that speaks directly to those specifics, not just your product in general.
- Add role-specific and use-case-specific content to product pages, solution pages, and FAQs. Think "how a 50-person remote team uses [your product]" not just "features of [your product]."
- Make that content scannable and surfaceable. Use clear headings, structured formatting, and front-loaded answers so both AI models and humans can find what they need without digging.
- Rethink Visibility
- Track your brand's presence across ChatGPT, Gemini, Copilot, Claude, and Perplexity. Tools like Scrunch can track prompts at scale and monitor if you’re showing up, and which of your competitors are.
- A ranking or citation only matters if what follows holds up. Having clear brand positioning will help consistency and alignment as users read about you in LLMs.
- Don't just optimize for today's models. Build content that solves real problems for real people. Models will change, algorithms will shift, but content that genuinely answers what your audience is asking will keep surfacing.
Looking to increase your brand's visibility in AI search? Let's chat about how Seer can help you.