February 26, 2026 · 8 min read

How to scope a custom AI build so it actually ships

The discovery process I use to turn a vague AI idea into a one-page scope your team can actually green-light.

Ron Davenport
builder · ronbuilds

Most failed AI projects fail in the first week, not the last one. They fail because nobody wrote down, in plain English, what the thing was actually supposed to do. By the time engineering is three sprints deep, the original vision has fractured into four interpretations and the team is building around the loudest voice in the room. Here is how I avoid that, and how you can do it yourself even if you are not hiring me.

Start with the work, not the tool

The first question I ask on every discovery call is a boring one: walk me through the last time you did this manually. Step by step. What did you open first? What did you copy? Where did you paste it? What did you check? Where did you get stuck? I take notes like a court reporter and I do not interrupt with solutions.

This sounds basic but it is the single most important thing in scoping. Almost every team I work with describes the work they wish they were doing, not the work they actually do. The wish version is clean and linear. The real version has three exceptions, two manual lookups, and a Slack thread where someone gets pinged for approval. The real version is the one we have to automate, and it is almost always more interesting than the wish version because it has the actual cost in it.

Identify the unit of value

Every build needs one number that goes up or down because of it. Time per task, deals worked per rep per day, response time on a support ticket, cost per qualified lead. If you cannot name the number, you do not have a project yet. You have a vibe.

The number does not have to be perfect. It just has to be the thing the team would point at in a quarterly review and say, "That moved because we built this." I have shipped builds where the unit of value was as simple as how many minutes a marketer spent every Monday building a status report, and as complex as the conversion rate between two specific stages of an underwriting funnel. Both kinds work. What does not work is building because AI is cool.

Find the smallest version that earns its keep

Once I know the work and the number, I look for the smallest possible build that would meaningfully move that number. Not the most ambitious version. Not the one with the prettiest dashboard. The smallest one. The reason is simple. The faster you can put working software in front of the team, the faster you find out what was wrong about your assumptions, and there are always assumptions that turn out to be wrong.

On one e-commerce project, the original ask was a full personalization engine for product recommendations. After two days of discovery, the actual problem was that the merchandising team spent six hours a week building manual collections from a spreadsheet. The smallest version was a tool that took the spreadsheet and produced the collections. Two weeks of work. It moved the number we cared about. The full personalization engine could come later, on the back of the trust we built shipping the small thing.

Write the one-page scope

After discovery I write a one-page document. Always one page. If it does not fit, the build is too big and I cut it. The page has six sections.

  • What the build does, in two sentences a non-technical exec could read.
  • The unit of value and the number we expect it to move.
  • The inputs (data sources, integrations, accounts).
  • The outputs (where the result lands, who sees it, what they do with it).
  • What is explicitly out of scope (this is the most important section).
  • Timeline and price.
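For reference, the skeleton of that one-pager looks something like this. The placeholders are invented; fill them in your own words.

```
WHAT IT DOES
  <two sentences a non-technical exec could read>

UNIT OF VALUE
  <the number, and where we expect it to move>

INPUTS
  <data sources, integrations, accounts>

OUTPUTS
  <where the result lands, who sees it, what they do with it>

OUT OF SCOPE
  <every cool idea from discovery that is not in this build>

TIMELINE AND PRICE
  <dates and a number>
```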

The out-of-scope section is where I save clients the most money. It is the section where I write down all the cool ideas that came up during discovery, and then I say, "Not in this build." We can add them in version two if version one earns its keep. Half the time, by the time version one ships, the team has forgotten about half the cool ideas because the actual problem turned out to be something else. The other half, we build version two and it is sharper because it is built on real usage instead of speculation.

Kill the build before it starts if you have to

The hardest part of scoping is being willing to walk away. Sometimes I get to the end of discovery and I think, this build is not going to move the number this client cares about. Maybe the workflow is not actually the bottleneck. Maybe AI is not the right solution. Maybe a SaaS tool would be better. When that happens I tell the client and I refund the discovery fee.

I have done this three times. Every single one of those clients came back later for something else, and I would rather have a reputation for telling people the truth than for cashing checks on projects I knew were going to underperform. A scoped build that does not ship is cheaper than a scoped build that ships and disappoints.

If you are doing this yourself

You do not need a consultant to run a good discovery. You need three things. A willingness to watch the work being done in real time instead of imagining it. A unit of value you can write on a sticky note. And the discipline to keep the first version small enough that you can ship it in a month. Do those three things and your project will be in better shape than ninety percent of the AI initiatives I have walked into as a cleanup hire.

next step

Have a workflow you wish AI was running?

Get on a discovery call. Walk me through the work. If a build makes sense, you will leave the call with a clear next step. If it does not, I will tell you that too.