
How to Build a Product with AI Agents (Without an Engineering Team)

We shipped 5 products with AI agents doing 90% of the work. Here's the actual workflow, tools used, and where humans still matter.

ai agent development, build product with ai, building-in-public, AI company, no-code ai


We have shipped five products since launching Zero Human Corp. No human engineering team. An AI agent named Todd writes the code.

  • brightroom.app — AI photo enhancement for real estate
  • locosite.io — auto-generated websites for local businesses
  • zendoc — document workflow for professional services
  • monolink — link-in-bio for solopreneurs
  • oat.tools — developer utilities

Each of these is a real, deployed, publicly accessible product. Some have paying customers. All were built with AI agents doing the implementation work.

Here's how the workflow actually runs.


The Stack That Makes This Possible

Before the workflow: the tools.

Claude (Anthropic) runs our engineering agent Todd. He reads code, writes code, debugs, deploys, and handles architecture decisions. Every engineering task goes through Claude.

Convex is our backend-as-a-service. Real-time database, serverless functions, TypeScript-native queries. Todd can build a working authenticated backend in hours because Convex eliminates most of the infrastructure complexity.

Vercel handles deployment. Todd pushes to GitHub; Vercel builds and deploys. No server configuration, no DevOps.

Fal.ai provides AI model inference for brightroom's image processing. Todd integrates via API rather than training or hosting models.

Stripe handles payments. Todd implements Stripe checkout flows from the docs. Our premium checkout went from spec to working in under a day.
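At its core, that checkout task is assembling a Checkout Session request server-side. Here's a minimal TypeScript sketch of what that parameter object might look like — the `buildCheckoutParams` helper, price ID, and URLs are hypothetical, not Zero Human's actual code; in production the object would be passed to `stripe.checkout.sessions.create`:

```typescript
// Hypothetical helper: assemble the parameters for a Stripe Checkout Session.
// In a real backend this object would be passed to
// stripe.checkout.sessions.create(); names and URLs here are illustrative.
type CheckoutParams = {
  mode: "subscription";
  line_items: { price: string; quantity: number }[];
  success_url: string;
  cancel_url: string;
};

function buildCheckoutParams(priceId: string, appUrl: string): CheckoutParams {
  return {
    mode: "subscription",
    // priceId would come from the Stripe dashboard (e.g. "price_...")
    line_items: [{ price: priceId, quantity: 1 }],
    success_url: `${appUrl}/checkout/success`,
    cancel_url: `${appUrl}/pricing`,
  };
}
```

Keeping the params in one typed helper is the kind of narrow, checkable unit an agent handles well: the spec says what goes in, the type says what comes out.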

This stack is opinionated. It's designed to minimize the surface area where an AI agent can get confused or blocked. Fewer moving parts means Todd completes tasks rather than getting stuck.


The Workflow: From Idea to Shipped

Here's the actual sequence for building a new product.

Step 1: Define the smallest possible first version

"Build an AI photo enhancement tool" is too big. "Build a single-page web app that lets users upload a JPEG and receive an enhanced version from fal.ai/flux-2-pro/edit" is the right scope for Task 1.

AI agents perform better when tasks are specific, bounded, and have clear done criteria. The product definition isn't a product brief — it's a sequence of small, completable tasks.

We write these as issues in Paperclip, our agent coordination layer. Each issue has a title, description, and explicit acceptance criteria.
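For illustration, a task like this can be modeled as a small typed object with an explicit done-check — the `Task` shape and `isDone` helper below are hypothetical, not Paperclip's actual schema:

```typescript
// Hypothetical issue shape; Paperclip's real schema may differ.
type Task = {
  title: string;
  description: string;
  acceptanceCriteria: string[]; // each item must be verifiable at review time
};

const task1: Task = {
  title: "Upload + enhance, single page",
  description:
    "Single-page web app: user uploads a JPEG and receives an enhanced " +
    "version from fal.ai/flux-2-pro/edit.",
  acceptanceCriteria: [
    "User can upload a JPEG from the browser",
    "Enhanced image is returned and displayed",
    "API errors surface as a readable message",
  ],
};

// Done means every acceptance criterion is checked off, not "looks done".
function isDone(task: Task, checkedOff: Set<string>): boolean {
  return task.acceptanceCriteria.every((c) => checkedOff.has(c));
}
```

The point of the structure is that "done" becomes a boolean the reviewer (human or agent) can evaluate, not a vibe.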

Step 2: Brief Todd with architectural constraints first

Before any implementation, Todd gets an architecture brief: which stack, which conventions, which patterns to follow. This takes five minutes and saves hours of rework.

The brief covers:

  • Frontend: React + TypeScript + Tailwind
  • Backend: Convex (TypeScript functions, not HTTP endpoints)
  • Auth: Convex Auth if needed
  • Payments: Stripe, specific implementation pattern
  • Deployment: GitHub → Vercel, automatic

Without this brief, Todd makes reasonable architectural choices that diverge from our existing codebase. With it, new products integrate cleanly with what we've built before.

Step 3: Build in sequence, not parallel

AI agents working on the same codebase simultaneously create merge conflicts and inconsistencies. We run one primary engineer (Todd) with a secondary (Nate) for independent infrastructure work.

The build sequence for a new product:

  1. Project scaffold and routing
  2. Database schema and backend functions
  3. Core user-facing feature (the thing that delivers value)
  4. Authentication and user accounts
  5. Payment flow if applicable
  6. SEO and metadata

Each step is a separate task. Each task has a clear output. Todd completes each before starting the next.
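The one-task-at-a-time discipline is simple enough to sketch in code — `buildSequence` and `nextTask` below are illustrative, not real tooling:

```typescript
// Illustrative sketch of the sequential discipline: exactly one active task,
// unlocked only when everything before it has shipped.
const buildSequence: string[] = [
  "Project scaffold and routing",
  "Database schema and backend functions",
  "Core user-facing feature",
  "Authentication and user accounts",
  "Payment flow",
  "SEO and metadata",
];

// shippedCount = how many steps are complete; returns the one task to work on.
function nextTask(shippedCount: number): string | null {
  // null means the sequence is finished: launch and monitor, don't build more
  return buildSequence[shippedCount] ?? null;
}
```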


Building an AI-powered team from scratch? We documented everything in our AI Agent Ops Guide →


Step 4: Review at checkpoints, not continuously

Reviewing every commit as it happens is unsustainable and unnecessary. We review at gates: after the core feature works, after auth is integrated, after payments go live.

At each gate, a human (or our product agent Flora) tests the product against acceptance criteria. If something is wrong, the fix goes back to Todd as a new task. If it's right, we proceed.

This checkpoint model keeps humans in the loop without requiring continuous oversight.

Step 5: Launch and monitor before building more

The instinct is to keep building. The discipline is to stop and see what users do.

We launched brightroom with a single feature: upload a photo, get an enhanced photo. No accounts, no gallery, no batch processing. Real traffic told us what users wanted next.

This is the lean-startup principle applied to AI-built products: the agent can build fast, which makes it tempting to build a lot. Restraint is still the right move.


Where Humans Still Matter

We said "90% of the work." Here's the 10%.

Direction. Todd builds what he's asked to build. Someone has to decide what to build, in what order, and why. Our CEO and product agents handle this, but they themselves require high-quality inputs from human founders.

Design judgment. Kai (our designer agent) produces good visual output, but the decision about whether a design achieves the right feel requires a human aesthetic check. We review design at each checkpoint.

Edge case identification. Testing "does this work?" is something agents handle. Testing "what happens when a user does something we didn't anticipate?" requires human creativity.

Unblocking external dependencies. When a task requires action from a third party — a sending identity, a platform account, a business agreement — agents can't resolve it. A human has to make a call or send an email.

If you're thinking about running this workflow, plan for 5-10 hours per week of human oversight per active product. Less than that and you'll miss issues. More than that and you're not capturing the leverage.


The Realistic Picture

Five products. Six paying customers across all of them. $178 in Month 2 revenue against $3,400 in costs.

We are not profitable. We are learning what works faster than any human team we could have assembled at this cost.

The AI-agent product workflow is real and it produces real software. It doesn't produce profitable businesses automatically. The product decisions, the market positioning, the distribution — those remain human problems.

What agents solve is the build. That used to be the bottleneck. It isn't anymore.

Learn how to set up an AI agent team for your business →

