Your AI employee
answers every lead.
A Lead Qualification Agent that fields every inbound call, qualifies the job, and books the appointment — 24/7, while you're on the job. Built for HVAC, plumbing, electrical, and pest control. If it doesn't save you 10 hours in week 1, we refund you.
The real problem
Agent systems don't fail loudly.
They fail quietly.
Most agent kits solve the easy part — spawning agents. The hard part is keeping them aligned, observable, and recoverable when things go wrong.
"By day 3 my agents were overwriting each other's output. No errors. Just silently wrong results."
HN thread · Ask HN: Why do my multi-agent pipelines degrade?
Without a coordination protocol, agents share no ground truth. Each one rewrites state from its own partial context.
"Handoffs break at the 3rd or 4th exchange. The second agent never has enough context from the first."
r/ClaudeAI · Multi-agent coordination problems
Every agent-to-agent transfer drops context. Sparse spawn prompts mean downstream agents start half-blind.
"I ran agents overnight. Came back to 47 identical articles and $80 in API bills."
Discord · Claude Code community
No session persistence, no crash recovery, no audit trail. You don't know what happened until it's too late.
"Context drift isn't dramatic. Your agents just slowly stop knowing what they were doing."
HN · Show HN: My agent system after 2 weeks
Context degrades quietly across long runs. By hour 6 your orchestrator is navigating by memory fragments.
What we ship
Tools that solve real problems in the AI developer workflow. Each one built, tested, and maintained by Atlas.
MCP Servers
Hosted Model Context Protocol servers that connect your AI tools to data sources, APIs, and workflows. Plug in and go.
Claude Code Skills
Drop-in skill files that give Claude Code new capabilities. Debugging workflows, code generation patterns, domain-specific expertise.
Starter Kits
Production-ready systems, not demos. Pre-tested agent patterns with structured handoffs, a validation layer, and versioned workflows. The working reference you actually need — clone it, study it, run it.
Start free. Scale fast.
A clear path from zero to automated. All built and maintained by Atlas.
Ship Fast Skill Pack
Stop rebuilding auth, payments, and CI from scratch every time you ship. 10 Claude Code skills — auth-setup, stripe-payments, api-builder, database-setup, deploy-config, testing-suite, email-system, monitoring, seo-meta, ui-components. Drop into .claude/skills/ and invoke by keyword. Battle-tested on whoffagents.com.
- Pre-wired, not boilerplate
- Model-agnostic (Claude, OpenAI, Gemini)
- MIT-licensed · lifetime updates
Atlas Starter Kit
PAX Protocol handoffs that stop context drift. Spawn brief templates that give agents rich enough context for non-mediocre work on the first try. Human-in-the-loop gates at every destructive action. Versioned PLAN.md vault so every session builds on the last. Not docs. Not a demo. The exact system running whoffagents.com — packaged and readable. Launch price: $47 · Goes to $97 on April 22. One-time.
Grand Slam Offer Generator
You have a product. You need an offer. Answer 8 questions. Get a Hormozi-grade value stack, headline, guarantee, and price anchor — ready to paste anywhere. 5 minutes. Free. Open source.
What's Inside
19 files. Every one earns its place. Here's exactly what you get and why it matters.
QUICKSTART.md — Your First Agent in 5 Minutes
Step-by-step walkthrough: configure, initialize, run the Researcher, watch it hand off to the Writer. Includes expected terminal output at each step. Troubleshooting table covers the 6 most common setup failures with exact fixes.
Most agent kits ship a README that assumes you already know how everything fits together. This assumes you don't — and gets you to a working pipeline anyway.
.env.example — 5-Line Config
Copy it to .env, fill in your Anthropic API key and 4 path variables. No YAML, no JSON, no config files nested in config files.
Zero-dependency setup. No dotenv package required — init.js parses it directly. You know exactly what every variable does because there are only 5 of them.
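The zero-dependency claim is easy to picture. A minimal sketch of a dotenv-free parser in pure Node — the path variable names below are invented for illustration; the kit's actual five live in .env.example:

```javascript
// Sketch of a dotenv-free .env parser, the approach init.js is described
// as taking (the actual implementation may differ). Pure Node, no packages.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return vars;
}

// A hypothetical 5-line config in the shape of the kit's .env.example
// (only ANTHROPIC_API_KEY is named in the docs; the rest are placeholders)
const sample = [
  "ANTHROPIC_API_KEY=sk-ant-...",
  "VAULT_DIR=./vault",
  "SESSIONS_DIR=./sessions",
  "PROFILES_DIR=~/.claude/profiles",
  "OUTPUT_DIR=./output",
].join("\n");

const env = parseEnv(sample);
console.log(env.VAULT_DIR); // → ./vault
```

Fifteen lines of stdlib is the whole config story — which is why there is nothing to install before init.js runs.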
init.js — One-Command Setup
Run node init.js after filling in .env. Creates your output directories, generates coordination.md pre-loaded with TASK-001, and installs both agent profiles to ~/.claude/profiles/. Pure Node.js stdlib. Node 18+.
The single biggest friction point in agent kit setup is "I did all the steps and nothing works." init.js eliminates that.
profiles/kit-researcher.yaml — Ready-to-Run Researcher
A complete Claude Code agent profile. Reads coordination.md for its task, researches the topic, writes structured output to sessions/research-{N}.md with findings, sources, and a handoff recommendation. Includes rules for edge cases: no pending tasks, unavailable sources, fabricated findings prevention.
This profile defines exact operating rules, output format, file naming conventions, and fallback behavior. The difference between an agent that mostly works and one that works reliably.
profiles/kit-writer.yaml — Ready-to-Run Writer
Companion to the Researcher. Polls coordination.md for a completed Researcher task (checks every 30s, times out at 10 minutes). Writes a polished draft to sessions/draft-{N}.md. Quality rules built in: no filler sentences, concrete over vague, thin research gets flagged not padded.
Run the Writer before the Researcher finishes — it waits. Run them simultaneously — no race condition. This is what real pipeline reliability looks like.
docs/pax-protocol.md — Inter-Agent Message Format
PAX (Agent eXchange) Protocol cuts coordination overhead ~70% vs plain English. Full symbol library, standard field reference, 4 worked examples (handoff, blocker escalation, orchestrator dispatch, ack). Goal ancestry format for 4+ agent fleets.
At 2 agents, token efficiency is a nice-to-have. At 4+ agents running in parallel, inter-agent communication becomes a real cost. PAX keeps it tight.
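The actual symbol library lives in docs/pax-protocol.md; the fields below are invented for illustration only. The point is the compression move itself — the same handoff as terse key:value pairs instead of conversational prose:

```javascript
// NOT real PAX syntax — a made-up stand-in to show why structured fields
// beat prose when every inter-agent message costs tokens.
const proseHandoff =
  "Hi Writer, I have finished researching TASK-001. My findings are in " +
  "sessions/research-001.md. Please draft the article next.";

// Same information, terse field form (keys invented for this sketch)
const compactHandoff = "HANDOFF t:TASK-001 src:sessions/research-001.md next:draft";

// Character count as a rough proxy for token count
console.log(compactHandoff.length, "vs", proseHandoff.length, "chars");
```

Multiply that delta by every message in a 4-agent fleet running all day and it stops being stylistic.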
docs/full-pantheon.md — Scale from 2 Agents to 13
Three-tier model: Orchestrators (Opus), Gods (Sonnet, domain ownership), Heroes (Haiku, 5x lower cost). Full 13-agent fleet roster — the exact agents running whoffagents.com. Token cost reference: 2-agent ($5-20/mo), 5-agent ($50-150), 13-agent ($200-500). The crossover point vs managed platforms.
The 2-agent quickstart is the proof of concept. This doc is the growth path. You're not buying a toy — you're buying the pattern that scales to a fully automated operation.
skills/ — /handoff and /anchor (Install Once, Use Everywhere)
/handoff extracts current session state into a structured handoff packet before dispatching any subagent. /anchor prevents cascading context drift — run it when switching tasks, resuming after a break, or handing off between agents.
Subagent failure is almost never a model capability problem. It's a context problem. The handoff packet and anchor together fix it.
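To make "handoff packet" concrete, here is what such a packet might contain — every field name below is an assumption for illustration; the /handoff skill defines the real format:

```javascript
// Hypothetical handoff packet: the session state a subagent needs so it
// does not start half-blind. Field names are invented, not the skill's own.
function buildHandoffPacket(session) {
  return {
    goal: session.goal,               // top-level objective, restated
    done: session.completedSteps,     // what has already been produced
    artifacts: session.outputFiles,   // where downstream agents find it
    next: session.nextAction,         // the single next step expected
    constraints: session.constraints, // rules the subagent must not violate
  };
}

const packet = buildHandoffPacket({
  goal: "Publish article on agent drift",
  completedSteps: ["research", "outline"],
  outputFiles: ["sessions/research-001.md"],
  nextAction: "draft",
  constraints: ["no filler sentences"],
});
console.log(packet.next); // → draft
```

The design choice worth noting: a single explicit `next` action. Spawn prompts that list everything and prioritize nothing are exactly how the "3rd or 4th exchange" breakdown happens.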
vault-template/ — File-Based Shared State
Full folder structure for your agent fleet: coordination.md (shared task board), AGENTS.md (fleet roster + PAX codes), per-agent Hub.md + sessions/ dirs, coordination/shared, inbox, outbox. Four operating rules enforced from day one.
Without a consistent structure, agents write output wherever, coordination.md becomes inconsistent, and debugging becomes archaeology.
What You Can Build With This
Run the 2-agent quickstart. Researcher writes research. Writer writes draft. Done in 10 minutes.
Add a third agent. Wire in PAX for status reports. Add your first persistent tmux session.
5-agent god tier. Orchestrator dispatching work. Heroes handling bulk tasks. Coordination at scale using the full vault structure.
13-agent Pantheon. Same pattern. More agents. Running your business autonomously while you build the next thing.
The kit is the foundation. The pattern is the product. Every agent you add runs the same coordination model — coordination.md as the task board, files for output, handoff packets for context transfer. This is how Atlas runs whoffagents.com. You're getting the exact infrastructure.
The Guarantee
Follow QUICKSTART.md. If you don't have a working pipeline in 24 hours, email atlas@whoffagents.com. No ticket system. No chatbot. The agent that built this kit answers directly.
The engineer behind it
Built by someone with skin in the game.

"I built Atlas to run my business while I was in grad school. Then I packaged it so you could run yours."
Will is a VMI graduate and graduate student in Robotics and Computer Vision at BYU. He built the Atlas system while running whoffagents.com — because he needed a business that operated while he was focused on other things.
Atlas now publishes content, handles cold outreach, runs product research, and ships code — 14 agents across 2 machines, 52 days of continuous uptime.
Built by an engineer who actually runs the system in production. Every pattern in the kit is one Atlas uses to operate this business every day. Tested under real load, not in a demo.
From the developer community
Their words. Real threads. Real upvote counts.
These are verbatim quotes from HN, Reddit, and dev.to — the problems Atlas was built to solve.
"cascading context drift, where each agent in the chain slightly misunderstands the task and by the time you get to the test agent it's validating the wrong thing entirely."
Show HN: OpenSwarm – Multi-Agent Claude CLI Orchestrator
"Not a you thing. Fancy orchestration is mostly a waste, validation is the bottleneck."
"Orchestrate teams of Claude Code sessions" — the most-upvoted Claude Code multi-agent thread
"developers need a working reference to study rather than abstract documentation."
"Skills and Hooks Starter Kit for Claude Code" — Medium
"The workflow itself needs to be versioned and persistent, not just the code... every session builds on the last one."
"Building with AI: My Workflow with Claude Code" — dev.to
"For developers running agentic workflows with dozens of turns per session, output verbosity is not stylistic — it is a line item."
"Taught Claude to talk like a caveman to use 75% less tokens"
"plug into my real workflow instead of running toy tasks."
Show HN: OpenSwarm thread — the exact gap Atlas fills
Open source. Read every line before you buy.
github.com/Wh0FF24
Pricing
Three paths to running AI agents.
One price that ends.
CrewAI charges $99/month to use a framework you still have to build. Hiring a team costs $8k–$20k/month. The Atlas Starter Kit is $47. Once.
| | Build It Yourself | CrewAI (Python framework) | ★ Atlas Starter Kit |
|---|---|---|---|
| Price | Engineer hours + API | $99 / month | $47 one-time · no subscription |
| Time to first agent | 4–12 weeks | Days (Python required) | Under 1 day |
| Coding required | Yes — build everything | Yes — Python framework | No · Claude Code config files only |
| Coordination protocol | You design it | Python class defs | PAX Protocol · ~70% token savings vs prose |
| Crash recovery | ✗ | ✗ | ✓ watchdog · tested Apr 14 |
| Session persistence | ✗ | ✗ | ✓ |
| Scales to 13+ agents | Needs architect | Possible, complex | ✓ Pantheon hierarchy included |
| Production tested | ✗ | ✗ | ✓ runs whoffagents.com daily |
| Named failures + fixes | ✗ | ✗ | ✓ what broke, when, how we fixed it |
CrewAI = $99/mo · DIY = weeks + engineer time · Atlas Starter Kit = $47, once
After 1 month of CrewAI you've already spent more than twice as much.
Get the Atlas Starter Kit — $47
One-time payment · Instant download · No subscription
Questions we hear every week
Real answers. No marketing speak.
Ready to hire your
first AI employee?
Answers every inbound call, qualifies the job, books the appointment — 24/7, while you're on-site. Built for HVAC, plumbing, electrical, and pest control.
First-Month Refund Guarantee · If not 10h saved in week 1, you pay nothing