01

Opening

The new edge is making the idea legible.

Writing code is no longer the main barrier to entry. The hard part now is turning a fuzzy product instinct into a workflow the tools can actually build.

Rust Cohle once told two detectives to start asking the right questions. That line sticks because a lot of LLM use feels the same: broad, lazy asks, then irritation when the answer wanders off somewhere useless.

For a long time, the "idea guy" was mostly a joke. Not because ideas were worthless. Because the gap between an idea and a working product was enormous. If you could not code, could not hire engineers, and could not translate your thinking into a build process, then your great idea was mostly a speech.

That is changing. Writing code is not the main barrier anymore. The bigger barrier now is whether you can turn an idea into a system: what triggers the workflow, what data goes in, what happens to it, what comes out the other side, and what "done" actually means.

Naval put it cleanly: "Learn to sell. Learn to build. If you can do both, you will be unstoppable." I think that line matters even more now, but "learn to build" has changed. It means enough system thinking to scope the work, break it into parts, steer the tools, and review what comes back.

The course starts here for a reason. Most bad tool use begins with a bad question. The new opportunity is that good product thinkers can now turn better questions into software.

The Broken Loop

"Chat. Make me a million dollar ARR startup. Don't make mistakes. No em dashes."

This is how a lot of tool-assisted projects actually begin: one heroic prompt, no real plan, way too much scope, and no clean way to tell what is working. The project drifts, your understanding drifts with it, and by Thursday you want to start over.

Idea -> Open editor -> Heroic prompt -> Drift

The Better Loop

Ideas matter again, if they become systems.

The new edge is not typing code faster. It is turning a product instinct into a plan, tasks, and a review loop the tool can actually handle.

Idea -> Plan -> Tasks -> Tool -> Review -> Ship

Use the lab below, change the app idea, and watch the plan, task list, and check layer update together.

Think of this as the 101. Get the loop working first. The 201 is where we will talk about richer agents, deeper review flows, and more ambitious automation.

Brainstorming Sesh

The best input for an AI coding session is usually not a polished spec. It is a ramble. It is fine if this starts on pen and paper, in a WhisperFlow voice note, or in a messy chat. A doodle with arrows is often the honest first draft.

When you type, you self-edit. When you talk, you think out loud. You say, "well actually what I really want is..." and that correction is usually the real requirement surfacing. Voice is good because it lets intent show up before your inner editor kills it.

The model's job is not to have the idea. It is to take your messy thinking and reflect it back as structure: a workflow brief, the edge cases you implied but did not say, and eventually PRD.md. The important check is whether you read it back and think, "yes, that is what I meant," not just "yes, that sounds professional."

02

Ask Better Questions

Clearer questions help a lot. But the important part is why they help: they make the work small enough for the tool to answer honestly.

Install Claude Code or Codex, make sure you have a project the tool can work in, and optionally use Linear if you want the task board to live somewhere familiar. Do not let setup become the whole project.

If the terminal makes you nervous, treat it as a text window into the project, not as a test of whether you are technical enough. You do not need to become a shell expert. You mostly need to know where the project lives, how to run a few commands, and how to see what changed.

That matters because coding agents do not work on vibes. They work on files, folders, diffs, tests, and scripts. The file system is what makes the project real. It is where the tool reads context from and where it writes changes back. Without that, you mostly just have a chat.
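The handful of terminal moves that matter can be sketched in a throwaway sandbox. Everything here is illustrative (the folder and file names are made up for the demo), and it assumes `git` is installed:

```shell
# Make a throwaway folder so nothing real is at risk.
dir=$(mktemp -d)
cd "$dir"

# A project the tool can work in is, at minimum, a folder under version control.
git init -q .
echo "# notes" > notes.md
git add notes.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "starting point"

# Simulate the tool editing a file, then look at what changed.
echo "a new line" >> notes.md
git status --short   # which files changed since the last commit
git diff             # exactly what changed, line by line
```

That is most of it: know where the project lives, and use `git status` and `git diff` to see what the tool actually did before you trust it.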

The point of a good prompt is not to make the model smarter. It is to remove ambiguity the model would otherwise have to fill in on its own. When the model fills in gaps, it is guessing. Sometimes it guesses well. Sometimes it drifts. You do not control which.

So "small enough to answer honestly" means leaving so little room for interpretation that the tool can give you a straightforward response instead of an impressive-sounding one that quietly made assumptions you never asked for. Big prompts invite hallucination and drift. Smaller prompts invite accuracy because they leave the tool less room to be wrong.

The Rust Problem

Vague questions get wandering answers

If you ask for the whole app, the tool has to invent scope, architecture, priorities, and stopping conditions all at once.

Better Than One Shot

Ask in sequence

First ask what the app needs. Then ask for the smallest first task. Then ask for implementation. Small questions keep the tool from drifting.

Add Structure

Task, context, output

A structured task brief beats a giant paragraph. Tell the tool what the job is, what context matters, what done means, and what not to touch.
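As a sketch, a task brief in that shape might look like this (the task, files, and wording are illustrative, not a template the tools require):

```markdown
## Task
Add an empty-state message to the account list when no accounts are loaded.

## Context
- The account list lives in the accounts page component.
- Follow the copy style used elsewhere in the app.

## Done means
- An empty list shows a short message instead of a blank screen.
- Existing behavior with a populated list is unchanged.

## Do not touch
- Authentication, routing, or anything outside the accounts page.
```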

Force an Interpretation

Make the tool restate the job

If the tool explains the task back to you incorrectly, fix that before any files change. Misalignment is much cheaper to correct up front.

The Real Point

This is not prompt theater

Better questions matter because they are the front door to a build process. The real lesson is still scope, sequence, checking what changed, and shipping.

03

Start With a Real Idea

Start with a plain-English app idea. The page turns that idea into a workflow brief, then into PRD.md, a first task stack, and a cleaner handoff for Claude Code, Codex, or whatever coding tool you use.

This is the whole trick. If you can explain the trigger, the data, what happens to it, the result, and why it matters, you have enough context to ask the model for a real PRD.md.
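As a sketch, here is what covering those five things can look like, using the GTM prep idea that shows up later on this page (every detail is illustrative):

```markdown
**Trigger:** A seller pastes an account name before outreach starts.
**Data in:** The company website, recent news, and the seller's past notes.
**What happens:** The app pulls those sources together into one account summary.
**Result:** A one-page summary with proof points and open questions, ready to skim.
**Why it matters:** Prep drops from an hour of tab-hopping to a few minutes.
```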

04

Make a One-Page Plan

Startups call this a PRD. Normal people can just call it a one-page plan. I usually have the LLM turn the workflow brief into PRD.md so the app exists as a real markdown artifact before the code starts multiplying. Markdown is nice because it is readable, portable, and structured enough for both you and the model to work with.

This is one chain of artifacts, not four disconnected widgets. Your workflow brief creates PRD.md. PRD.md creates the task board. The selected task card creates the handoff prompt and the check list. Every step after this should change when the previous step changes.

Workflow Brief: Rough thinking becomes a legible workflow.
PRD.md: Generated from your workflow brief.
Issue Board: Generated from PRD.md.
Handoff + Review: Generated from the selected task card.

Product Thesis

Auggie for GTM sellers

GTM sellers need one place to gather account context, proof points, and open questions before outreach starts.

Who It Is For

Primary User

GTM seller prepping for outbound or a live meeting

Why It Matters

Account context stops getting trapped across tabs, PDFs, and old notes that never travel together.

Workflow Trace

Definition of done: the workflow exists as a markdown artifact, PRD.md.

05

Break It Into Tasks

Generated from PRD.md. Have the LLM cut the plan into Linear-style task cards. The board gives you and the tool the same map of the work. Click a task card below to drive the next sections.

This is why the board matters. It is familiar, visible, and trackable. Instead of one long chat holding the whole app in memory, the work lives on cards.
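A card generated this way might look like the following. The identifiers and scope are illustrative, written in the Linear style the page mentions:

```markdown
**AUG-3: Account summary page renders from saved research**

- Scope: one page that displays the saved summary for a single account.
- Depends on: AUG-2 (research results are saved as structured data).
- Done means: opening an account shows its summary; an account with no
  research shows an empty state instead of an error.
```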

06

Hand One Task to the Tool

Generated from the selected task card. This is the difference between a giant prompt and a useful handoff. A tool like Claude Code or Codex gets one task, the relevant context, and a clear picture of what should be true when the work is done.

Task card handoff for Claude Code / Codex

Why This Works

Small scope makes the tool useful.

When the task is narrow, the tool spends less time guessing at architecture and more time implementing what you actually need.

The job of the human is not to dump the whole app into one prompt. The job is to define one task small enough that success is visible.

07

Check What Changed and Test It

Generated from the selected task card and the workflow context. You do not need to sound like an engineer here. You just need to make sure the thing works, feels right, and still matches the plan.

This part should be a little scrappier than people expect. Click around. Try the weird path. Type the annoying input. Use it like someone who does not care about your roadmap. You are not trying to prove the tool is perfect. You are trying to catch the dumb breakages while they are still cheap.

What To Check

Simple Check List
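The list on the page is generated per task; a generic version might look like this (the items are illustrative):

```markdown
- [ ] The happy path works end to end.
- [ ] The weird path fails politely: empty input, huge input, double clicks.
- [ ] Nothing outside the task's scope changed in the diff.
- [ ] The result still matches PRD.md and the task card.
- [ ] You could explain the change to someone in two sentences.
```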

What Not To Do

Do Not Make This Weird

• Do not pile three more ideas on top before this one works.
• Do not ship it just because the output looks impressive.
• Do not widen the task during the check because a new thought showed up.
• Do not treat a giant diff as proof that useful work happened.
• Do not panic when something breaks. That is the point of checking it here.

08

Ship Small

The point of this process is not ceremony. It is to keep the loop tight enough that the next task starts from a cleaner project and a more honest understanding of the product.

Before The Tool

Decide the job

Write the task, the files you expect might change, and what done means.

After The Tool

Inspect the change

Run the flow, scan the diff, and make sure the task is actually closed.

After Shipping

Learn, then repeat

Take what you learned from this task and fold it into the next task, not into a massive rewrite.

09

Monitor and Iterate

Shipping is not the end of the loop. Time is a flat circle. Here that just means you ship, watch, tighten, and go again.

Once the thing is live, watch what breaks, what confuses people, and what the next smallest fix should be.

Watch For

Confusion, errors, drop-off

Did people use it? Did they get stuck? Did anything quietly fail? These signals matter more than your feelings about the code.

What To Do

Tighten the next task card

Turn what you learned into the next small fix. Do not answer uncertainty with a giant rewrite.

The Point

Keep the loop alive

Good products do not arrive all at once. They get less confusing, less brittle, and more useful one tight pass at a time.

The Takeaway

The barrier is not code anymore. It is clarity.

The old excuse was: I had the idea, I just could not build it. That excuse is getting weaker. If you can define the workflow, write the one-page plan, break the work into tasks, and review what changed, you can run a real build loop.

Naval had it right: "Learn to sell. Learn to build. If you can do both, you will be unstoppable." I think that still holds. But "learn to build" now includes system thinking: turning a fuzzy idea into something the tools can actually execute.

The idea guy is back. He just needs a build loop instead of a pitch.