
The 4 types of AI Marketers - AI Minimalists to AI Builders


This post was written in September 2025. I tried to balance writing both for people to self-assess and for leaders to assess their people. This post was largely inspired by seeing Stage 1s through 5s in my own company over the last 2 years, and what I see in each type.

We see all the CEO AI memos about being AI first, and a memo can't be too long, so I get it. While I feel we need to be AI first or whatever, I also feel like I gotta look at myself in the mirror and be as specific as possible about what that looks like to me, and I gotta be sure I communicate it over and over again.

If a CEO is going to fire people for not being bought into their vision, they need receipts. For me it is critical that I can look back and sit across from someone with 10+ messages showing how long I have been saying these things, and how intensely I have been reiterating them.

18 months ago I made the Marketing AI Survival Kit, which needed an update. It was no longer in congruence with how much I am ratcheting up the importance, and I want to give my company and my team a peek into my brain on how I see those same stages 18 months later. Like most things in AI, documenting your ideas and thoughts is a moat, so I could easily look back at what I said before and use that as a framework.


If you are in marketing, and you want to stay in marketing, and you need a job to take your vacations, pay your mortgage, feed your family - that is what is at risk for you if you fall behind. I couldn't ever imagine looking at my kids and knowing I couldn't pay for their school anymore, or that we have to change our vacation plans, simply because daddy decided not to put in 45 hours a week instead of 40 to try to level up at this crazy time.


If you are concerned about your job, what's automatable in the future, etc., the first tool you should use is SmarterX's JobsGPT - it is VERY good at showing you your current job and what might be automatable, and by when.

JobsGPT by SmarterX


Stage 1: The bare minimum

Mindset

You are quick to point out 6 fingers, hallucinations, etc., but you don't ever record the prompts that generated them and run them every month to become an ASSET to your company. As my buddy Ryan Brown said, that's the job... if this AI can "only" do 80% of a thing right, your value is doing the 20% that's left.

"I have to do this" is way different than "I get to do this". To me, as a CEO leading the charge, this is an exciting time: the industry has been reset, old heads like me gotta hustle, and we can't lean back on our "25 years of experience" as the win.

Look at these LinkedIn study findings:



Your perspective on hallucinations

You use them as evidence of how bad these tools can be, an opportunity to prove yourself right. Never as an opportunity to learn, test, and become an asset. You don’t try to understand why hallucinations happen, and when people in your company start sharing reads on it you don’t ask questions for fear of “looking stupid”. Stage 2 employees will read posts like this, stage 3 will have run their own tests. The C-suite values hallucination testing, because it helps them be more knowledgeable for their own usage.

Wanna level up? Read this recent post.

Self Assessment

The issue is the total addressable market of companies actively seeking, or willing to accept, low AI fluency is shrinking - and your job security and prospects are sinking with it.

You can binge-watch the heck outta a new Netflix special, but you won't binge-read that ever-growing list of "AI reads" floating around your saved folder.

You are using AI, but it is very often "what you are given" – whatever AI model is used in Otter, you use; whatever model is in HubSpot, you use; your HR tool, you use it; Google Ads adds an AI feature, you use it. You don't test outputs against other models; you accept what you're given. So you are technically using AI, but you are not exploring what's possible. Trust me, most tools are NOT going to give you Claude Opus 4.1 to help you with your transcript reviews or to help write your content. You gotta break out of the AI used in tools so you can educate yourself on whether they are good enough or not.

To self-assess here, ask yourself these questions - not to say "I am in Stage 1 or 3" but to reflect:

  • How long does it take you to name 5 models? 
  • How often do you say "I used ChatGPT" before you say what you have to say? Why are you saying that - what are you trying to signal?
  • You might be Stage 1, but you also could be Stage 3 even if you can't name 5 AI models quickly.
  • If you don't stay on top of what is out there, how will you know what to test?

Transcript Test:

You might use a tool like Otter to record meetings, but you don't actively analyze the transcripts with AI or compare models. If Otter gives you a recap, you take it and run, never knowing what models they use or whether other models would be better at the task.

Voice AI:

You don't speak into AI, lean into voice capabilities, or go for long walks and ramble to it to get your thoughts and ideas out; you don't record what you think that much. You definitely cannot describe the difference between ChatGPT vs Claude vs Gemini when it comes to voice transcription.

AI as a Coach:

You don't use AI to listen to your calls, practice presentations, or role-play tough conversations. Other people show you this use case, and you do the bare minimum, saying "hey, I used ChatGPT to analyze my calls" - but it stops there.

Learning & Skill Development:

You think "the AI sucks," never "my prompt sucks." So you don't really learn where the AI truly does suck (or, dare I say, whether you are using the AI for the wrong use case). You will take training to keep your job, but you will not proactively seek out training and self-directed improvement. If you look at your time on AI, most of it was "forced" on you by the company versus something you ran towards.

If you're in Stage 1 and your boss is in Stage 1, you're going down with that ship. Look up at your manager, then your manager's manager, etc – do these same checks for them. When's the last time you heard your boss say "I built this bot" or "I built this agent"? Did you hear that once, or do you hear that every month? Or every week?

Look at the last 10 people who exited your company. Were they Stage 1? If most of the people who left were Stage 1, take a look at their contributions to AI – do you see any? Now run that same analysis for yourself.




Innovation and Application:

You’ve built GPTs, you have tried, but none of them have stuck. You have a graveyard of the 5 or so you built 18 months ago, but they haven’t been updated or improved upon. The desire to make your AI builds better isn't there, the desire is to "keep management at bay".

Go look back at your last few entries in ChatGPT: do you mostly ask one prompt, accept what you get, and keep moving? You might be Stage 1.

LazyGPT is alive and well; you've never said "delve" so much. Go scan your inbox for "delve" before and after AI, and add "evolving" and other words to that list.

You copy and paste outputs, not realizing that people who use AI read emails with “I hope this message finds you well” and see you as a checklist ChatGPT user. You use it, but you don’t actually tweak anything you get back.

Efficiency vs Blue Ocean Thinking:

You primarily see AI as a tool for basic tasks, not for creating net new opportunities or solving old problems in novel ways. You default to efficiency: when you are working with AI, your mind only goes to "how can I save time" because you are low on new ideas. Say you see a 10-step process; you see AI as a way to automate some of those steps, and that is good - but once you wrangle out all the automations and efficiencies, you don't realize that what's left is new ideas.

What Blue Ocean Sounds Like:

Dana Forman built a CustomGPT that instantly allows us to better understand how people might regionally look at our clients… for a wireless provider client, the GPT found that people in states with larger families might be a better target for a wireless plan with multiple lines.

Neal Brown built a GPT of all of my innovation chats with the marketing team internally, so when I went to hire someone, after the first interview she had built something that evaluated the person against my internal chats.

Agents & Agentic Browsers

Imagine your manager passed you in the hallway and asked your take on agentic browsers. What’s your answer? If you don’t have one, you might be Stage 1. You might say “it's hype,” and that is the end of your curiosity. When you call something hype, you never try to disprove your sentiment or hypothesis. Remember that same group that dismissed ChatGPT as a fad two years ago? You are in it.

Context Window Understanding 

Limitations are frustrating, yes, but for Stage 1 workers repeated limitations don't spark curiosity. When you upload 20 massive documents and run out of "space" in Claude or ChatGPT, you get frustrated about hallucinations or limitations without understanding why they happen. You don't know what a context window is, how it impacts your outputs, or why loading full docs when you only need information from 1/10th of each document is hurting your answers.

Vocabulary

You are limited. You know what LLM stands for, you might know what "reasoning" means in a model, you might know what "tool use" means - you are around enough to understand the basics. It's like you know the words in a vacuum, but you don't know how the pieces work together.

Collaboration and Evangelism

What do people come to you for in the company? To talk about that Eagles game, or that new Netflix special? When was the last time someone came to you asking for help with an AI problem? If they did, are they Stage 1 like you, or are they a Stage 2-5?

What are you known for when it comes to AI inside your company? There’s no user manual for Claude, Gemini, or ChatGPT, and it feels like there are 1,000 different use cases. This is an opportunity to grab a part of it and lead.

You benefit from others who relentlessly try to figure things out using AI, but when it is your turn to burn a little midnight oil, you don’t make time. That means those who do burn that oil and try to figure AI out benefit everyone, but you don’t do the work to benefit others.


When it comes to AI you absorb knowledge but don't add to the company's knowledge pool. Your attitude is likely defeatist: "They are good at AI" vs "They put in the work". I once had a client say to me: "When people call me a freak athlete, they diminish the work I put in to do what I do. The 4am workouts, the 2-a-days, the meal prep."


When a client doesn’t let you use AI, you’re okay with it. You almost want to take them out for a drink and swap notes on how bad AI is for the future, the kids, the environment, oh, the hallucinations. You love to commiserate with people who are AI skeptical, and like a Fox News viewer, you’ve created a bubble that reinforces that you are right.

 

Just get started:

Find a project where someone is building an agent in your company and ask them to be a tester for its accuracy. Maybe someone has built an AI to allow clients to automatically chat with their data to get reports faster, automating part of your work. Can you be the person who asks it 10 questions a day and logs into the old tool manually and checks accuracy? If so, go do that.


Stage 2: The Dabbler

Mindset

You could go either way; you could finally have that "aha" moment and become a Stage 3, or you could fall into "this is overwhelming." You are Stage 2 today only because your company has allowed Stage 1s to stick around. You are likely going to become a Stage 1 in 12-24 months if you don't change.

Remember, you are going to compare yourself to the people you have the most access to: coworkers. If you have a lot of Stage 1s around, you will naturally compare yourself to them; it will make you feel safe. Try not to do that.


Your company CEO is either actively trying to balance how to be good to people while also getting rid of some of those people, or your CEO is like Coinbase's CEO: call you in on a Saturday to find out why you aren't using AI, and if he doesn't like the answer, you are fired a week later. Or, even worse, your CEO says "everything's fine" and is a Stage 1 CEO - so when they get fired by the board, guess who's coming in: more than likely a Stage 3+ CEO, and you won't be safe then either.


You feel behind, and you are – but you're trying. If your company has done layoffs, you probably feel even more pressure, as your Stage 1 friends from last year might not be around anymore. You see your company moving in this direction, but you're so overwhelmed that it almost paralyzes you. The gap between where you are and where you need to be feels insurmountable, and that feeling keeps you from taking the bigger swings you need to take - but you chip away at it, and you are up at the plate, bat in hand, trying.

Do you notice that people who used to invite you to the hackathon or tag you into threads because you showed promise in the past have kind of peeled back on that? It's the early signs of the Stage 3+s giving up on you. Builders, people who are leaned in, typically give up on you at some point.




Your perspective on hallucinations: You acknowledge they happen and can be frustrating, but you haven't actively sought to understand *why* or how to mitigate them. I bet your company doesn't have a hallucination czar. Brand yourself as such: tell people to send you their hallucinations, and you'll start figuring them out and reporting back. The key here is to KEEP at it; it can't be a spark for 2 weeks and then you give up.

Self Assessment (Move from Dabbling to Doubling Down)

You know the names of AI models, but you aren't fluid or confident in explaining how they differ or what use case would have you use Claude over ChatGPT over Gemma over Llama. You could prompt to get those answers but they are not part of your AI DNA.

You're still copying and pasting prompts and have little to no automated pipelines, meaning the smarts you do have don't scale to others easily. But the smarts are THERE - that is why this is the precipice. You are up to bat; you have the smarts.

You likely use ChatGPT only. Hopefully, by now, you are paying for it even if your company doesn’t.


If you wanna know whether you are more Stage 3 than Stage 1, think back to the last time you found an AI tool that cost $20-30 a month and your company wouldn't pay for it, or you blew through your AI tools budget. Did you say "nevermind," or did you whip out your own credit card, sign up for a free month, then email that company and say "can I get 1 more month? I'm trying to convince my agency to buy 10 seats"?


Transcript Test:

You might put a transcript into ChatGPT, sometimes. Your company has given you the tools to record calls, and you use the features in the tool a little bit, but you don't consistently extract data from the tools they gave you and see which model gives you the best answer for what you need. 

Voice AI:

You don't speak into AI consistently; you've been leaned in enough to see others do it, you "get it," but you don't lean in. How many apps do you have on your phone that allow you to speak into them and use AI? A few (FYI, I have about 10). You might know what Whisper is, but not how you could use it, how to set it up with an API key, etc. You probably have not heard of tools like Vapi or others that help you build voice bots - or you've never tried to build one, or you did once and left it in the graveyard. You are weak on understanding how models impact latency. The good news is no one else in your company is either; you got time to level up.

AI as a Coach:

You haven't leaned into AI as a coach – you don't have it listen to your calls and give you feedback, you don't use it to practice a presentation before you give it and ask for feedback, you don't use it to role-play a tough conversation with a co-worker or client. Sure, you might once in a while, but it isn't part of your workflow. The word here is consistency. You dabble; now is the time to double down.

A Stage 3 salesperson, for instance, would say: this deal is important, so I'm going to take all their call transcripts, put them into one of the tools, then record a walkthrough of the key points of my slide deck and have the AI be the CFO, then the VP of Marketing, then the Procurement team, and give me their take.

A Stage 3 will do this manually or build a limited (but helpful) GPT; a Stage 4 would build the pipes.

When the company won’t let you put data into AI:

You use that rule as an opportunity to "take a break." You say things like "I would like to use AI for XYZ task, but the company doesn't allow us to do that with this data." These "we can't" roadblocks frustrate you (that is good, innovation comes from frustration so often) but you aren't frustrated enough for you to find a way forward. 

Learning & Skill Development:

If your company offers training, you’ll take it but you are one of the last people to finish the training. You are also the kind of person who waits for training versus just going for it on YouTube and teaching yourself, meaning you’ll always be a bit behind. You read articles and share them internally, often with your take on how the thing you read impacts your company. So you are getting educated.

Innovation and Application


You understand that the unsexy parts are essential to AI. You understand that the building blocks to great AI automations are having data in the first place, then moving that data from place A to B, and having strong schema, etc., all to eventually build great AI tools. You're halfway there but still doing heavy lifting manually. That is okay too; we were all here at one point.


You are building small automations; you may have built some GPTs that get use at the company. The question isn't "have you built a CustomGPT" - it is "show me the improvements you made to it, and why". Did you let it rip, get it off your plate, and now you are glad "that's done"? Instead, realize that those early GPTs and super prompts should start being turned into workflows in n8n, agents, and other deeper automations to scale their value - and yours with it.

If you are a Stage 2 on your way to Stage 3, you keep up with the AI industry reads; you start reading things that trigger you to think back to your graveyard, and you dig up those old GPTs / automations / etc. and try to use the new tools to breathe new life into your ideas.

Your perspective on Efficiency vs Blue Ocean:

You mostly see AI as "automation of tasks," rarely as "net new" ideas. So there's always a fear of what is coming, because the thinking is "what will I do if they automate 40% of my job?" You might be low on ideas, and therefore don't see that what was once impossible is now possible. You do have new ideas; you just never ask ChatGPT to help you build them. Start that - you'd be surprised how much better the tools have gotten at building prototypes.

In Stage 2, prototypes are always built in Claude Artifacts or ChatGPT Canvas, Apps Script, etc. You are not yet using Lovable or Vercel or Cursor (that is Stage 3.5 from what I've seen).

Agents & Agentic Browsers

You’ve heard of Google's Project Mariner, Perplexity’s Comet, and computer use reading articles when you try to keep up and you do read the articles and internal chatter in company slacks, But reading isn't participating. You might have watched a demo video and thought "that's cool," but you haven't spent even 1 trying to make it work past your first attempt. 

For instance, a Stage 3 would be asking tougher questions... "how do I get access to your log files so I can track these browsers?" They are testing them by filling out your lead form or finding a product, so they can speak more intelligently.

Your Context Window Understanding:

You're shaky on why a context window limits value. When you run into problems, you know what a context window is, but you don't know how to use it to your benefit or how to work around it. Knowing the size of a model's context window was great in 2023; it's good to know foundationally, but you gotta be able to tell whether what you are building will run into context window problems in the future.

Collaboration and Evangelism

You are not a consistent builder. You might say "we need a better system to manage GPTs," and you worked on it once, but now you got a graveyard. You won't blaze a new path; you follow behind others who have blazed a path.

You will comment on something someone else built and say “that’s cool,” but you never say “get me your prompts, I’m going to try to make it better or help you improve it.” 

You will support others if you are told to or if your boss makes time for you to do it. You are interested, but the work is inconsistent. When it comes to collaboration, people are a bit skeptical about partnering with you to get something done because maybe you will, but maybe you won't – leaving the motivated Stage 3+ person with a lot of work to do solo.


Stage 3: The Explorer

Mindset

You see AI limitations as puzzles to solve, not excuses to stop. 

When clients say "you can't use AI with our data," you start thinking about synthetic data: how can you create fake data that matches the schema, so you can build and battle-test your solutions?

You've got that itch; you know that if you build something with fake data, you can test all the prompts and building blocks to show the client the value and try to change their minds or be ready when they finally let you use AI.

You're no longer just overwhelmed – you're also energized by the possibilities. You understand and are comfortable with the fact that spending 5-6 hours trying to build something that doesn't work is part of the learning journey; it isn't a failure, and it will help you help others.

Your perspective on hallucinations: 

You view them as challenges to overcome through better prompting, model selection, or data handling, rather than inherent flaws in the technology. They are a mystery you want to solve. You do a bit of deep research on them, find out who is reviewing them on YouTube, and you subscribe - because you want to learn.

Self Assessment:

You can be given a task and think through which model might be best for that specific use case. You might be wrong, but you can think it through in real time. You're probably still copying and pasting prompts, but they're sophisticated because you have tweaked them 5-10 times and watched the outputs get better. You're tracking what works and you are early in your agentic journey, not because you are skeptical but because you got a LOT going on. 🙂

Transcript Test:

You're auto-applying analyses against your transcripts to understand how you're perceived and how you're doing in your job. Every sales call gets analyzed automatically every week, and themes get pushed to Slack (we're doing this at Seer). You haven't built the infrastructure to match those themes against other data sets in the company, but you know how, and that is 99% of the battle. You can go back and see people you deem Stage 3s, 4s, and 5s giving you kudos on your builds and using them.

Voice AI:

You actively use voice with AI — dictating notes, brainstorming on walks, practicing presentations out loud. You’ve tested multiple tools (Whisper, Gemini Live, Claude Voice, Vapi) and you can describe strengths/weaknesses. You’ve built at least one working voice pipeline: e.g., record → transcribe → summarize → push to Slack/Docs.

AI as a Coach: You actively use AI to analyze your performance, get feedback on presentations, and role-play difficult conversations, integrating it into your personal development workflow. You also understand the sycophant nature of AI and have learned to improve your prompting to work it out.

When Stage 2 people come to you with their manual processes, you immediately see a pathway to automate them, chaining together a series of smaller agents. 

Learning & Skill Development

You're building Custom GPTs that get used by others in the company, but here's the key – you're running out of space and trying hard to find alternatives. You're hitting the limitations of the tools, and that frustration is pushing you toward more sophisticated solutions. You realize that just when you think you’ve built something great, you hit a new limitation, and because of your mindset, you see it as a challenge to overcome.

Context Window Understanding

You know what a context window means in a way that you can talk intelligently about it. You might even know off the top of your head that Claude has 200k tokens and Gemini 2.5 Pro has 1 million. You might also know that Claude recently bumped the context window to 1M for Sonnet, but you also know that is only in the API.  You see that and think, dang I gotta work through the API, I’d rather just have it in the front end, like AI studio. (This is me FYI, I’ll improve here).

You are a heavy user in the browser versus using the API, but you are starting. You understand that every token costs money and affects response quality. You've learned to say "just analyze section 3" instead of dumping the entire document. You're starting to feel the pain of the limitation but working within it.

You use tools like Google AI studio to understand token usage, which again makes you smarter.
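If you want a feel for the math, here is a back-of-envelope sketch. The ~4 characters per token figure is a rough heuristic for English text, not a real tokenizer (use AI Studio or a model's own tokenizer for exact counts), and the window sizes are the advertised limits mentioned above:

```python
# Back-of-envelope context-window math: will my input fit, with room to reply?
CONTEXT_WINDOWS = {          # advertised input limits, in tokens (approximate)
    "claude-sonnet": 200_000,
    "gemini-2.5-pro": 1_000_000,
}

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    return max(1, len(text) // 4)

def fits(text: str, model: str, reserve_for_output: int = 4_000) -> bool:
    # Leave headroom for the model's reply, not just your input.
    return estimate_tokens(text) + reserve_for_output <= CONTEXT_WINDOWS[model]
```

Thinking this way before you build ("20 docs at ~50k tokens each will not fit, so I need to chunk or retrieve") is exactly the Stage 3 habit described above.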

Innovation and Application

You have just started building automations that use AI APIs. They aren't the most advanced tools in the world, but you realize that using the API will let you analyze a transcript and push the themes to Slack automatically - you can't do that with the web version of ChatGPT or Claude. You are likely doing this in a basic Zapier, Make.com, or n8n workflow. So you use APIs, just not consistently, and you still struggle a bit with them.
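To make the API-vs-web-UI difference concrete, here is a sketch of what one of those workflow steps actually sends. The request shape is the common OpenAI-style chat format; the model name and prompt are illustrative placeholders, not a recommendation:

```python
# Sketch of the request a workflow step would send to an LLM API, plus the
# formatting a Zapier/Make/n8n step might apply before posting to Slack.
def build_theme_request(transcript: str, model: str = "gpt-4o-mini") -> dict:
    # OpenAI-style chat payload; model name is a placeholder.
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Extract the 3 biggest themes from this sales call "
                        "transcript as short bullet points."},
            {"role": "user", "content": transcript},
        ],
    }

def themes_to_slack_text(themes: list[str]) -> str:
    # Format extracted themes the way an automation step would post them.
    return "This week's call themes:\n" + "\n".join(f"- {t}" for t in themes)
```

In the web UI a human pastes and reads; with the API, this exact request can run on every transcript, every week, with no one in the loop.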

You've built the car but not the highway. Your automations work, but they don't talk to each other yet. You haven't built the infrastructure that lets you stack solutions on top of each other.

Efficiency vs Blue Ocean

You don’t default ONLY to saving money and time with AI. You also sometimes see client challenges through a new lens in an AI world. You revisit old problems from 1-2 years ago and see them as value adding opportunities that couldn’t be done before. 

You are the kind of person who ran into problems with producing images at scale for a campaign and when you get with a stage 4 and they break down the following workflow, you can follow everything they are doing, and with a bit of help could replicate 80% of it yourself.

So you are working on scaling a campaign... you fire up Claude, via MCP, connected to the DataForSEO MCP to scrape the images from Google image search and the top 10 title tags. Then, with those images, you use Gemini/Claude to compare your ad copy and landing page images to what Google shows. You might use Perplexity Comet to get competitor ads into a spreadsheet that you'll have connected to Claude too. Then, after the analysis, you may use Nano Banana (Gemini 2.5 Flash) and other tools to make what was formerly impossible, possible. That is Blue Ocean thinking. You might be limited because you can't patch it all together in a single workflow, but you can follow the idea and build some standalone parts of it.

Agents & Agentic Browsers

You attempt use cases even if it's outside your day-to-day work. You've spent real hours trying to get browsers to fill out forms or scrape data, and when it fails, you document what didn't work and share those failures publicly. 

You understand that spending 5-6 hours on something that doesn't work is part of the learning journey. In 12 months, you'll have found one basic use case you consistently use, even if it's just automating a single repetitive task. You say "it's clunky, but I can see where this is going."

When it comes to agents… you're still building in isolation; each automation is its own thing, not part of a larger workflow. But if you get to Stage 4, you'll start building the agents that string together some of your great singular builds into a workflow.

Collaboration and Evangelism

This is where collaboration becomes real. You show up to optional hackathons because that's where the building happens. You seek out the builders, spend time with them, and keep them in your back pocket for help when you get stuck.

You are the kind of person who started building your company’s first Custom GPTs, but now that everyone is doing that, you are poking around in n8n, Veo 3, Docker, Slackbots, something new. You have picked a tool or a lane (even if it is just for this month), and you are known as a go-to for it across the company.

Skeptics see you deliver consistently, and they now want in. You're a leader, the person who turns "we need a better system to manage GPTs" or “oooh that idea was cool” into a functioning prototype.

 


Stage 4: The Builder

Mindset

You have built some infrastructure and you teach other people what you built and help them build their versions. You're not just solving problems, you're building platforms that let others solve problems.

You see agentic workflows not as buzzwords but as the natural evolution of what you've been building. You've moved from "can we automate this?" to "how do we orchestrate all these automations into something bigger?"

You're no longer overwhelmed by all the things that the AI world is throwing at you because you've now recognized that managing that is part of the job. Figuring out how to stay on top of AI is not separate from your job - it has now become part of it.  

Your perspective on hallucinations

You understand some of the underlying causes (training data, context window limits) and actively design systems and prompts to minimize them, though implementing verification steps isn't yet second nature.

For instance, when AIs hallucinate that Seer has a DEI program (and many do), I understand where that comes from. When they look at all we do in the community, the things we talk about publicly, etc., it is easy to predict probabilistically that a company founded by a Black CEO, who is this invested in mentoring youth living around the poverty line, and that uses its space for events that are often mostly filled with Black and brown faces… would have a DEI program.

Hallucinations don’t make you angry, they are an opportunity for you to validate your thinking about how they happen.

Self Assessment:

You can give pros and cons across 5 models in real time off the rip. You can give multiple examples of why one model might be better to use over another to solve a problem the same way Michael Phelps would swim a lap.

You are much more likely to look at a repeated task and master the art of creating instructions and folders. So all that time spent refining a prompt is now in the folder-level instruction. Meaning, you can go to that folder and say "run this for [client name]," and it knows exactly what to do.

Example of my work:

Notice how I have instructions to minimize hallucinations for my Google Analytics data sitting in BigQuery, which allows me to ask questions in natural language; every hallucination is an opportunity to dial in my instructions. This flow works by putting your GA data in BigQuery, then using Zapier MCP to connect to BigQuery, then using Claude MCP to connect to Zapier MCP.

 A stage 3 will write a long prompt, save it and use it over and over again. A stage 4 or 5 builds instructions.
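To show what "instructions" means in practice, here is an illustrative folder-level instruction set for a GA-in-BigQuery setup like the one described above. The table name and rules are made-up placeholders, not Seer's actual instructions:

```text
You answer questions about Google Analytics data stored in BigQuery.
- Always query the connected table analytics.ga_daily (placeholder name)
  through the tool; never answer from memory.
- If no date range is given, default to the last 30 days and say so.
- Show the exact SQL you ran alongside every answer.
- If a number cannot be computed from the query results, say
  "not in the data" instead of estimating.
```

Unlike a saved prompt you paste every time, instructions like these run on every question in the folder, and every hallucination you catch becomes a new rule.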

Transcript Test: 

By the time you are a Stage 4 AI practitioner, you've already used AI to help level up your coaching for yourself and how you work with your coworkers. You are the kind of person that comes to a leader and says "we need to warehouse all of our calls and put them into a vector database so I can take the analysis that I'm doing just for myself and scale them across others." 

Again, you might not know how to build one, and you might not want to make the time given your other responsibilities, but you are starting to be able to predict what the right tools might be, and you know through experience what the wrong tools are.

Voice AI:
You’ve built systems where calls, meetings, or field notes flow automatically from voice → transcript → analysis → dashboard. You orchestrate multiple models for performance: maybe Whisper for transcription, Gemini for summarization, Claude for sentiment analysis. You can talk about latency, cost, and accuracy tradeoffs fluently.
You’ve tested Vapi or similar frameworks to create real voice bots (even if they’re clunky) and understand what’s needed for production use.

AI as a Coach + Blue Ocean Thinking:

You were building systems that used AI for continuous self-improvement back in 2023. This is old hat for you.

Here's an example of why I think I'm stage 4... (def not 5)

The old way has always been: the client asks the team a question, then waits for an answer. But when your client is in a board meeting, wouldn't it be nice if they could ask questions on their phone in natural language and get quick answers? So a stage 5 person got me set up on Zapier MCP connected to a table in BigQuery. I wrote the instructions...

And now I can ask questions like this:

That drop is due to a hallucination. Stage 4s don't blindly trust AI; they set up systems to double-check with humans.

 

 

Learning & Skill Development:

You are today's agent builder. But you're not just any old agent builder. You're on the precipice of stringing together multiple individual agents to truly run a process. Whereas a Stage 3 is much more likely to have a really long super prompt run everything on one model, you break it down into parts. Stage 3s probably aren't thinking in terms of scale nearly as much as you are.

You're hitting the limitations of even advanced tools. Custom GPTs ran out of space, so you're looking at building your own RAG. You're not intimidated by this; you see it as the natural next step. You may fail miserably at trying, but you have ideas about the tech needed to get you through roadblocks.
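Rolling your own RAG really is approachable. Here's a toy sketch, with word-overlap scoring standing in for real embeddings (in practice you'd swap `score` for cosine similarity over vectors from an embedding API); the sample docs are made up.

```python
from collections import Counter

def score(query: str, doc: str) -> float:
    """Crude relevance: shared-word count. Real RAG replaces this with
    cosine similarity between embedding vectors."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k docs; you'd stuff these into the prompt as context."""
    return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

docs = [
    "our PPC retainer covers paid search strategy",
    "the SEO audit deliverable includes crawl fixes",
    "holiday party is in December",
]
top = retrieve("what does the SEO audit include", docs, k=1)
```

That's the whole loop: score, rank, take the top k, and hand them to the model as context. Everything a production RAG adds (chunking, a vector database, reranking) is an upgrade to one of these three lines.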

Your Vocabulary:

You have begun to understand things like interpretability. When you build agents, you understand that the job to be done is to improve quality by making outputs more deterministic and less probabilistic. Interpretability, determinism, orchestration: these are the words you find yourself using in conversations about AI.

Context Window Understanding:

You not only know the context window sizes of various models (e.g., Claude 200k, Gemini 1M) but you also deeply understand how to strategically segment data, use summarization techniques, and chain prompts to work around limitations and optimize for cost and quality. 

For example, you might take a transcript and use one cheap model to preprocess it down into metadata. Then, when you go to analyze 1,000 transcripts, which you know would blow out any context window, you can point your AI at only the 30 transcripts tagged "Client Win" (one of 50 tags you created for each transcript), avoiding the need to build systems that analyze 1,000 transcripts instead of 30.
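The tag-then-filter idea can be sketched like this. `tag_transcript` is a stub standing in for the cheap preprocessing model, and the tag names and sample transcripts are invented for illustration.

```python
# Preprocess each transcript once with a cheap model, then run expensive
# analysis only on the slice you care about. tag_transcript is a stub
# standing in for a cheap-model call; the tags are hypothetical.

def tag_transcript(text: str) -> list[str]:
    tags = []
    if "renewed" in text or "upsell" in text:
        tags.append("Client Win")
    if "churn" in text:
        tags.append("Risk")
    return tags

transcripts = [
    "client renewed for another year",
    "client mentioned churn risk on pricing",
    "weekly status call, nothing notable",
]

# Tag once, cheaply...
tagged = [(t, tag_transcript(t)) for t in transcripts]
# ...then point the expensive model only at the "Client Win" slice.
wins = [t for t, tags in tagged if "Client Win" in tags]
```

The payoff is that the expensive model never sees the 970 transcripts that don't matter for the question you're asking, so you stay inside the context window and the budget at the same time.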

Innovation and Application:

You've built so much infrastructure that you can easily build on top. Your transcripts are preprocessed, themed, queryable, with Slackbot automations in place. Once all that infrastructure exists, you can now push messaging to different divisions automatically.

You could look through slide decks and build an automation to check whether you're missing slides that speak eloquently to client pain points. You're not just building tools; you're building tools that build tools.

Your perspective on Efficiency vs Blue Ocean:

You seamlessly blend both. You automate for efficiency, but your primary focus is on identifying and creating entirely new solutions that were previously impossible, leveraging AI to redefine what your company can offer. When someone asks you “how have you used AI to make new offerings or new revenue streams” you have ideas.

You understand cost versus value. You might be able to run something really great, but if it costs $1,000 in API costs every month, how would you bring down the cost? You're starting to understand this equation deeply. If it costs $1,000 a month and you understand the value driven is $10,000, you know how to make the business case, but you also know how to bring down the costs in real time, in your head. You might start thinking about local models, testing older (cheaper) models, or breaking a task into sub-agents where parts can run on smaller models.
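That in-your-head math looks roughly like this. The per-million-token prices below are placeholders, not any vendor's real pricing; the point is how routing the bulk of the tokens to a smaller model changes the monthly bill.

```python
# Back-of-envelope cost model for splitting a task across model sizes.
# Prices are made-up placeholders, not real vendor pricing.
PRICE_PER_M_TOKENS = {"big_model": 15.00, "small_model": 0.50}

def monthly_cost(tokens_per_run: int, runs_per_month: int, model: str) -> float:
    return tokens_per_run * runs_per_month * PRICE_PER_M_TOKENS[model] / 1_000_000

# Everything on the big model: 50k tokens/run, 1,000 runs/month.
all_big = monthly_cost(50_000, 1_000, "big_model")

# Route 80% of the tokens (the preprocessing) to the small model,
# keeping only the final 10k-token reasoning step on the big one.
split = monthly_cost(10_000, 1_000, "big_model") + monthly_cost(40_000, 1_000, "small_model")
```

With these placeholder numbers the all-big version runs $750/month while the split version runs $170/month: same workload, a fraction of the bill, which is exactly the sub-agent argument.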

Agents & Agentic Browsers

You've used agentic browsers for checking account settings, moving data between tools, even some shopping research for things like travel chargers. You don't just "hear" it sucks; you test it yourself and know what it is good for and not good for. Now you have an educated background with real use cases that don't rely on someone else's opinion.

You rerun tests regularly to understand if it's getting better, and when a new tool comes out, it spurs you to test again.

Collaboration and Evangelism

You make yourself known. You got to this stage by working with others, and you realize that AI is so vast that you need a network. You actively seek out people in the company at Stages 2-3 and make sure they know you're available to help them. Because you, as a Stage 4 AI practitioner, don't feel any fear when it comes to automating your job, you want to empower others to do the same.

You're not just teaching individuals, you're building systems that teach. Your documented workflows become training materials for anyone else in the company. Your automations become templates others modify for their needs.

Wil's Take

Stage 4 is where you're genuinely in the last 10-15% that will survive if Altman is right. You've shown a ton of value and ability to pivot and think creatively. You're safe and have a job as long as you need one.

 

You are hiring fewer people, so make sure you interview right!

If you've made it this far, you probably aren't hiring as much as you used to, but that means the stakes are higher for the people you do hire. Do Stage 1s hire Stage 4s? Sometimes, but the real question is: do they even know the difference? Make sure in interviews you are asking more AI questions, and make sure you are adding in more Stage 3s and 4s as interviewers (they are harder to trick). Below is one example question to ask in an interview, and how the different stages might answer it. Come up with your own: copy and paste this doc as a PDF into ChatGPT or Claude and generate more. Also, don't hire people for what they know; hire them for the things they can learn.

 

Stage 5 will come in an update, I promise, just wanted to ship.  I'm probably 15 hours in on writing, editing, thinking, etc.

 
