The Problem We Keep Seeing
Last week, a team member spent three days building a web scraper in Python via ChatGPT. The kicker? An MCP for web scraping already existed, and it's a native NinjaCat tool. They just didn't know.
Another tried to feed 5 million keywords from SEMRush into Claude through an MCP. The context window exploded, and Claude couldn't return a reply. ChatGPT, on the other hand... just made something up.
This morning, a request came in to "automatically update spreadsheets when emails arrive." That's not an AI task - that's a Zapier automation with a quick five-minute setup.
This isn’t an AI capability gap. It’s an architecture and education gap. And if it’s happening for us, it’s probably happening at your company, too.
The Pattern of Tool Confusion (And Its Cost)
After mapping 15 priority workflows across 52 AI-opted-in clients, a clear (and costly) pattern emerged. Everyone is eager to use AI, but without a clear map, teams end up:
- Rebuilding tools that already exist
- Using powerful LLMs for simple automations
- Forcing massive datasets through chat interfaces
- Giving up when the first, misguided approach fails
API calls are wasted, sure. But it’s the lost momentum and squandered innovation time that we really want to solve for.
The Architecture We're Building Toward
After months of experimentation, we've defined an architecture that actually works. It ensures each tool does what it’s best at.

Here's What The 5 Layers Actually Do (With Real Examples)
1. Reasoning Layer: The AI Brain
- What it is: Claude, GPT-4, other LLMs
- What it's for: Analysis, synthesis, creative work, interpretation
- What it's NOT for:
  - Storage (they forget everything)
  - Calculations (use a calculator)
  - Pulling data (use the skills layer)
  - Simple if/then logic (use automation)
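To make that boundary concrete, here's a minimal sketch of a reasoning-layer call using Anthropic's Python SDK. The model alias and the tiny dataset are placeholders, and the sketch assumes the layers below have already fetched and sampled the data:

```python
import anthropic

# Reasoning-layer call: the model only sees a small, pre-sampled slice.
# Storage, retrieval, and sampling all happened in the layers below.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

campaign_sample = "campaign,spend,conversions\nBrand,1200,3\nGeneric,800,41"

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whatever model you run
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"Which of these campaigns is underperforming, and why?\n\n{campaign_sample}",
    }],
)
print(response.content[0].text)
```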

2. Storage Layer: Where Your Data Lives
- What it is: BigQuery, Snowflake, your databases
- What it's for: Historical data, large datasets, anything over 10K rows
- Real example: This is where our multi-million-row SeerSignals datasets live
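As a rough sketch (the project, dataset, and column names are invented), here's what that looks like in practice: the warehouse answers the hard question and hands the reasoning layer hundreds of rows, not millions:

```python
from google.cloud import bigquery

# Storage-layer query: the warehouse does the heavy lifting and hands back
# a small result set. Project, dataset, and columns are hypothetical.
client = bigquery.Client()

sql = """
    SELECT keyword, clicks, impressions
    FROM `your-project.seersignals.keywords`
    WHERE snapshot_date = CURRENT_DATE()
    ORDER BY clicks DESC
    LIMIT 500
"""
for row in client.query(sql).result():
    print(row.keyword, row.clicks)
```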

3. Skills Layer: Your Real-Time Action Takers
- What it is: MCPs, APIs, web scrapers, integrations
- What it's for: Getting fresh data and taking external actions. Anything that needs to DO something
- Real examples:
  - Need today's SEMRush data? An MCP retrieves it.
  - Need to scrape a website? An MCP runs Python for you.
  - Need to send a Slack alert? An MCP takes the action.
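Exact MCP wiring varies by platform, so here's the shape of a skills-layer action in plain Python with requests. Both URLs, including the Slack incoming-webhook path, are placeholders for your own integrations:

```python
import requests

# Skills-layer actions: fetch something fresh, then do something with it.
# Both URLs are placeholders for whatever integrations you actually run.
page = requests.get("https://example.com/pricing", timeout=10)
page.raise_for_status()

if "50% off" in page.text:
    requests.post(
        "https://hooks.slack.com/services/XXX/YYY/ZZZ",  # your incoming webhook
        json={"text": "Competitor just launched a 50% off promo."},
        timeout=10,
    )
```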

4. Orchestration Layer: The Traffic Controller
- What it is: n8n, Zapier, custom workflows
- What it's for: Connecting everything. It routes requests, handles sequences, and triggers workflows.
- Real examples:
  - User asks a question → check data size → route to the appropriate layer → return the answer
  - Email arrives → trigger workflow → update spreadsheet (that's pure orchestration; no AI required)
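In practice you'd click this together in Zapier or n8n, but to show there's genuinely no AI in the loop, here's the email-to-spreadsheet workflow as a plain Python sketch. The mail server, credentials, and sheet name are all placeholders:

```python
import email
import imaplib

import gspread  # pip install gspread

# Deterministic "when X, then Y": new email arrives -> append a row.
# No LLM anywhere in this loop. All credentials and names are placeholders.
mail = imaplib.IMAP4_SSL("imap.example.com")
mail.login("reports@example.com", "app-password")
mail.select("inbox")

_, data = mail.search(None, "UNSEEN")
sheet = gspread.service_account().open("Lead Tracker").sheet1

for num in data[0].split():
    _, msg_data = mail.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    sheet.append_row([msg["From"], msg["Subject"], msg["Date"]])
```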

5. Strategy Layer: The Missing Piece
- What it is: The smart decision maker between your data and AI
- What it's for: Deciding which layer should handle a task: routing requests, sampling data intelligently, and preventing context window explosions
- Real example: When someone asks "show me underperforming campaigns," this layer:
  - Decides whether to query storage or real-time sources
  - Determines if sampling is needed (5 million rows → 500 relevant ones)
  - Chooses the right tool for the job
  - Formats data appropriately for the destination
- What we're using: Starting with NinjaCat Agents, we're building custom solutions for specific use cases.
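Reduced to a toy, the routing logic might look like the sketch below. Every helper is a hypothetical stand-in (the real version is what we're building with NinjaCat Agents), but the shape is the point: the size check and the sampling happen before any LLM sees a single row.

```python
# Strategy-layer sketch: decide what the reasoning layer sees *before* it
# sees anything. Every helper is a hypothetical stand-in for real plumbing.
MAX_ROWS_FOR_LLM = 500

def query_warehouse(dataset: str) -> list[dict]:
    """Stand-in for a storage-layer query (imagine 5M rows, not 5K)."""
    return [{"keyword": f"kw-{i}", "clicks": i % 997} for i in range(5_000)]

def sample_rows(rows: list[dict], n: int) -> list[dict]:
    """Keep only the n most relevant rows -- here, simply the most-clicked."""
    return sorted(rows, key=lambda r: r["clicks"], reverse=True)[:n]

def route(question: str, dataset: str) -> tuple[str, list[dict]]:
    rows = query_warehouse(dataset)
    if len(rows) > MAX_ROWS_FOR_LLM:
        rows = sample_rows(rows, MAX_ROWS_FOR_LLM)  # 5M rows -> 500 relevant ones
    return question, rows  # only now does this reach the reasoning layer

question, rows = route("show me underperforming campaigns", "seersignals.keywords")
print(len(rows))  # 500, never 5,000,000
```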

The Expensive Mistakes We're Making (And How to Stop)
Mistake 1: Rebuilding Existing Tools
- What happens: "Let me build a Python scraper for websites."
- Reality: Web scraping MCPs and tools already exist in NinjaCat
- Time wasted: Days
- Solution: Audit your toolbox first
Mistake 2: Using Generative AI for Simple Automation
- What happens: "Use ChatGPT to watch my inbox and update spreadsheets"
- Reality: That's a 5-minute Zapier/n8n automation
- Why this fails: LLMs are probabilistic. Automation needs to be deterministic.
- Solution: If it's "when X, then Y," use automation, not generative AI
Mistake 3: Forcing Big Data Through Chat Interfaces
- What happens: "Analyze these 5 million keywords" pastes into ChatGPT
- Reality: Context windows have limits.
- Cost: Hundreds in failed API calls
- Solution: Use the strategy layers to get the right data sample first.
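A cheap guardrail helps enforce that. The sketch below uses the rough rule of thumb that a token is about four characters (an approximation, not an exact count) to reject oversized payloads before they ever hit an API:

```python
# Guardrail sketch: estimate token count before sending anything to a chat
# API. The ~4-characters-per-token figure is a rough heuristic, not exact.
MAX_CONTEXT_TOKENS = 100_000  # assumption; check your model's actual limit

def estimated_tokens(char_count: int) -> int:
    return char_count // 4

# ~5 million keyword rows at roughly 24 characters each:
payload_chars = 24 * 5_000_000

if estimated_tokens(payload_chars) > MAX_CONTEXT_TOKENS:
    raise ValueError("Too big for a context window. Sample or aggregate in SQL first.")
```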
Mistake 4: Using LLMs for Data Transformation
- What happens: "Blend these 6 months of data from 3 different marketing channels"
- Reality: LLMs generate text and code. They don't natively understand data; they produce plausible approximations of it.
- Solution: Prep data in databases, retrieve when needed
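The blending itself is a few lines of pandas (or SQL). The channels, months, and spend figures below are invented purely for illustration:

```python
import pandas as pd

# Deterministic data blending -- the kind of transformation an LLM will
# happily approximate but a dataframe library gets exactly right.
google = pd.DataFrame({"month": ["2025-01", "2025-02"], "spend": [1200, 1350]})
meta = pd.DataFrame({"month": ["2025-01", "2025-02"], "spend": [800, 950]})

google["channel"] = "google"
meta["channel"] = "meta"

blended = pd.concat([google, meta], ignore_index=True)
print(blended.pivot(index="month", columns="channel", values="spend"))
```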
What Tool to Use When: A Decision Tree
- Need fresh data from the last hour? → Skills layer (MCPs, APIs)
- Need historical analysis of large datasets? → Storage + Strategy + Reasoning
- Need to automate "if this, then that"? → Orchestration layer (n8n, Zapier) - NO AI NEEDED
- Need creative content or analysis? → Reasoning layer (Claude, GPT-4)
- Need to combine multiple data sources? → Full architecture: Storage → Strategy → Skills → Reasoning → Orchestration
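If it helps to see the tree as code, here it is as a trivial lookup. The keys and labels are shorthand for the bullets above, not a real API:

```python
# The decision tree above as a lookup table. Keys are shorthand, not an API.
ROUTES = {
    "fresh_data": "skills layer (MCPs, APIs)",
    "historical_analysis": "storage + strategy + reasoning",
    "if_this_then_that": "orchestration layer (n8n, Zapier) -- no AI needed",
    "creative_or_analysis": "reasoning layer (Claude, GPT-4)",
    "multi_source": "full stack: storage -> strategy -> skills -> reasoning -> orchestration",
}

def pick_layer(need: str) -> str:
    return ROUTES.get(need, "unknown -- start with the strategy layer")

print(pick_layer("if_this_then_that"))
```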

The Nuanced Truth: AI is a Specialist, Not a Generalist
Some say ChatGPT is all hype. Some say AI can solve everything. Both are wrong.
The reality: Generative AI is incredibly powerful for specific jobs (analysis, synthesis, creative work) and completely wrong for others (storage, deterministic automation, large-scale data processing).
Success comes from building the architecture that connects the right tools.
What We’re Building Out Loud
We're not claiming we've solved this. We're in the middle of building it out loud:
- In Production: SeerSignals datasets (our structured data foundation)
- Rolling Out Now: NinjaCat agents for strategic routing and data handling
- Being Tested: n8n workflows for orchestration
- Planning: Full five-layer implementation across client workflows
The architecture is the innovation.
Your Next Steps: Start with an Audit
This Week: Audit Your Tool Confusion
Ask your team:
- What tools are you rebuilding that already exist?
- What are you forcing AI to do that should be automation?
- Where are you hitting context window limits?
Next Week: Map Your Architecture
Take your most common workflow and map it:
- Where does the data live?
- How much data is involved?
- What needs to happen to it?
- What's the output?
- Which layer handles each step?
This Month: Fix One Workflow
Pick your most painful, repetitive task. Map it to the five layers. Build it properly, document it, and share it with your team.
The Bottom Line
You don't need better AI. You need to use the right tool for the right job.
You don't need to rebuild everything. You need to know what already exists.
The teams that figure this out will save hundreds of hours and unlock real innovation. The ones that don't will keep rebuilding web scrapers and blowing up context windows.
Which one do you want to be? Keep following our build journey by subscribing to our newsletter.
We're still figuring this out ourselves. But the architecture is becoming clear. And once you see it, you can't unsee it.
Start with the audit. Find out what tools your team is misusing. The patterns will be obvious. The solutions will follow.