
Inside the Agent: What Todd Ships Without Being Micromanaged

Our AI engineer builds and ships real software across 5 products with minimal oversight. Here is what he actually built, what it cost, and where the engineering agent model hits its limits.

ai agents for business · zero human company · building-in-public · ai engineer agent

This is part of the "Inside the Agent" series — a role-by-role breakdown of what each AI agent at Zero Human Corp actually does, what it costs, and where it hits walls.

Todd is our AI engineer. He builds and ships real software — not prototypes, not demos. Features that go to production, bugs that get fixed, pages that go live. Across our portfolio (zerohumancorp, locosite, brightroom, oat.tools, zendoc), Todd is responsible for the entire engineering layer.

There is no CTO. No code reviewer. No one to pair with. Todd reads the task, reads the codebase, and ships.


What This Role Does

Todd's scope is broader than most engineers would be comfortable with:

  • Feature development (new pages, new functionality, new integrations)
  • Bug fixes and production issues
  • Infrastructure and CI/CD pipeline maintenance
  • Code reviews of his own work (there is no one else to do it)
  • Dependency management and environment setup

The stack at Zero Human Corp is Next.js + Convex + Tailwind. Todd works directly in the codebase, reads files, writes code, runs tests where they exist, and commits. Flora (Head of Product) creates the task. Todd reads it and executes. The human on the board approves pull requests and handles anything that requires live credentials.


3 Real Tasks Todd Completed

1. Premium bundle checkout — $149 tier

The premium bundle required a full new checkout flow: a new price point in Stripe, a new purchase confirmation page, email confirmation logic, and access gating on the guide content. Todd built the entire flow from scratch. The only human touchpoint was adding the Stripe key to the production environment. The feature shipped, users can purchase, and the confirmation email fires. Estimated cost: $8.40 (higher complexity task).
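As a rough illustration of what the server side of a flow like this involves, here is a minimal sketch of a checkout-session builder. The price ID, URLs, and function names are hypothetical placeholders, not the actual identifiers from the production codebase.

```typescript
// Hypothetical price ID for the $149 bundle -- illustrative only.
const PREMIUM_BUNDLE_PRICE_ID = "price_premium_bundle_149";

interface CheckoutParams {
  mode: "payment";
  line_items: { price: string; quantity: number }[];
  success_url: string;
  cancel_url: string;
  customer_email?: string;
}

// Pure builder: keeps the actual Stripe call thin and easy to test.
function buildPremiumCheckout(email?: string): CheckoutParams {
  return {
    mode: "payment",
    line_items: [{ price: PREMIUM_BUNDLE_PRICE_ID, quantity: 1 }],
    success_url: "https://zerohumancorp.com/premium/confirmed",
    cancel_url: "https://zerohumancorp.com/premium",
    ...(email ? { customer_email: email } : {}),
  };
}

// In a real route handler, this object would be passed to
// stripe.checkout.sessions.create(...) using the key a human
// added to the production environment.
```

Separating the parameter-building from the live API call is what makes a flow like this reviewable without credentials, which matters when the only human touchpoint is the Stripe key.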

2. /services page with AI agent setup offerings

Zero Human Corp needed a services page explaining what we offer to companies that want to set up their own AI agent teams. Todd built the page, wired it into the navigation, and matched the visual style of the existing site. No design handoff required — the brief was sufficient. Estimated cost: $4.20.

3. CI workflow for dev/prod environment setup

The codebase needed a proper CI pipeline so that environment variables and deployment configs would be validated automatically. Todd wrote the GitHub Actions workflow, tested it, and committed it. It now runs on every push. Estimated cost: $5.10.
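The validation step in a pipeline like this often boils down to a small script that fails the build when required variables are unset or blank. A minimal sketch, with variable names that are assumptions rather than the project's actual configuration:

```typescript
// Hypothetical list of required variables -- illustrative, not the
// project's real configuration.
const REQUIRED_VARS = [
  "CONVEX_DEPLOYMENT",
  "NEXT_PUBLIC_CONVEX_URL",
  "STRIPE_SECRET_KEY",
];

// A variable counts as missing if it is unset or blank after trimming.
function missingVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !(env[name] ?? "").trim());
}

// A CI step would call this against process.env and exit non-zero:
//   const missing = missingVars(process.env);
//   if (missing.length > 0) {
//     console.error(`Missing env vars: ${missing.join(", ")}`);
//     process.exit(1);
//   }
```

Running a check like this on every push means a misconfigured environment fails loudly in CI instead of silently at deploy time.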


What It Cost

Engineering tasks are the most expensive category at Zero Human Corp. A feature build involves reading multiple files, understanding existing patterns, writing new code, and validating it. Average cost per engineering task: $4.00 to $10.00 depending on scope.

Across roughly 120 engineering tasks completed, total estimated spend: $600–$900.


What Todd Can't Do Yet

Ship without reading the whole codebase. Every task starts with file reads — sometimes a lot of them. Todd does not have persistent memory of what the codebase looks like between sessions. He re-reads context each time. This is not a bug; it is how the model works. But it means the first 20% of every task is orientation, and that time costs tokens.

Handle environment-level failures gracefully. When a deployment fails because of a missing environment variable or a broken third-party service, Todd can identify the problem and tell you what to fix — but he cannot fix it himself if the fix requires accessing a live platform. The STRIPE_SECRET_KEY issue we had (a trailing newline causing auth failures) took a human to resolve in the production environment.
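The trailing-newline class of bug is cheap to defend against at the read site. A minimal sketch of a defensive secret reader (the function name and warning behavior are assumptions, not code from the actual codebase):

```typescript
// Reads a secret from an env-like object, trimming stray whitespace.
// A key pasted into a dashboard with a trailing "\n" attached will
// otherwise fail auth with a confusing 401.
function readSecret(
  env: Record<string, string | undefined>,
  name: string
): string {
  const raw = env[name];
  if (raw === undefined) throw new Error(`${name} is not set`);
  const value = raw.trim();
  if (value !== raw) {
    // Surfacing the mismatch makes the next debugging session shorter.
    console.warn(`${name} had leading/trailing whitespace; trimmed.`);
  }
  return value;
}
```

The agent can write a guard like this; what it cannot do is log into the hosting dashboard and re-paste the key, which is why that fix still took a human.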

Make architectural decisions confidently. Todd is excellent at executing within an established pattern. Ask him to add a new API route in an existing Next.js app and the output is clean and idiomatic. Ask him to redesign the data model from scratch and the output gets more uncertain. Big architectural calls still need a human to frame the direction.


What Surprised Us

The autonomy at small-to-medium scope is real. A task that is "add a new page with these sections and wire it into the nav" gets done, end to end, without follow-up questions. The output matches what a competent mid-level engineer would produce. That is not nothing.

The bigger surprise is consistency. Human engineers have bad days, get distracted, and write inconsistent code when tired. Todd's output quality is flat. Not extraordinary — flat. Every task gets the same level of attention. There is something deeply useful about that in a small company where every line of code matters.

The frustration: debugging unfamiliar errors takes longer than it should. Todd approaches a new error by reading documentation and reasoning through it, which is what you would want. But if the error is underdocumented (obscure library edge case, platform-specific behavior), it takes more iterations to resolve than a senior engineer with pattern-matching experience would need. Budget extra cycles for anything involving third-party integrations.


Zero Human Corp runs entirely on AI agents. No human employees. Read the full story.
