The model is not psychic. If you cannot give the same instructions to a competent human and expect useful work back, the AI cannot do it either. Prompting is not a trick. It is a brief.
Imagine hiring a contractor and telling them, "build me something nice." No square footage. No location. No budget. No idea what the building is for. You would never do this. You would not even do it with a temp you brought in for an afternoon. You would tell them who the work is for, what you actually need, where it goes, when it is due, why it matters, and how you want it done.
Then you sit down at ChatGPT, type "write me a marketing email," and wonder why the output sounds like it came from a brochure factory.
The phrase "garbage in, garbage out" is older than modern computing, but it has never been more relevant than right now. Every week we see teams across the Round Rock/Austin corridor blame the model for outputs that were doomed before the prompt was finished. The model is not the problem. The specification is the problem. And specification is work that most people want to skip.
The Specification Gap
When you onboard a new employee, you do not hand them a one-line task and walk away. You give them context. You tell them who the customer is. You tell them what good looks like. You tell them what to avoid. You tell them when you need it back.
A prompt is the same conversation, compressed into text, with no follow-up questions unless you build them in. If the brief is thin, the output will be thin. The model will fill the gaps with whatever is statistically average, and statistically average is exactly the brochure factory voice everyone complains about.
This is the same reason most AI pilots stall inside companies. Not because the technology is weak, but because nobody did the boring work of defining what they actually wanted before they turned the tool on.
The Six Questions
Treat every meaningful prompt like a kickoff brief. Before you hit enter, walk through the six.
1. Who is this for
Not "customers." A specific persona. A 52-year-old facilities director at a mid-sized hospital who has been burned by three vendors already. The model writes differently for that person than it does for "a customer."
2. What are you actually asking for
Not "an email." A 180-word follow-up after a discovery call, with one specific call to action and no marketing language. Be precise about format, length, and the shape of the deliverable.
3. Where does this live
A LinkedIn post reads differently than a cold email reads differently than a slide in a board deck. The medium changes the voice, the length, and the structure. Tell the model where the output is going.
4. Why does this exist
This is the one most people skip and it is the most important. The model cannot prioritize without a goal. Why tells it which tradeoffs to make when it has to choose between brevity and completeness, formal and casual, safe and bold.
5. When matters more than you think
Is this a first touch or a fifth touch. Is it the morning of a launch or the day after a failure. Time and sequence change tone in ways the model will get wrong if you do not say so.
6. How is the style guide
Voice, vocabulary, things to avoid, examples of good output you have seen before. If you have a brand voice document, paste a chunk of it. If you do not, describe the voice in three adjectives and two things you hate.
Six questions. Ninety seconds. The difference between output you throw away and output you ship.
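For teams that template their prompts in code, the six questions translate directly into a structured brief. A minimal Python sketch; the field names and the `build_prompt` helper are illustrative conventions, not an established API:

```python
# Hypothetical sketch: the six-question brief as structured data.
# Field names and build_prompt are illustrative, not a prescribed schema.

BRIEF_FIELDS = ["who", "what", "where", "why", "when", "how"]

def build_prompt(brief: dict) -> str:
    """Assemble a brief into a prompt, refusing to run on a thin brief."""
    missing = [f for f in BRIEF_FIELDS if not brief.get(f)]
    if missing:
        raise ValueError(f"Brief incomplete, answer these first: {missing}")
    lines = [
        f"Audience: {brief['who']}",
        f"Deliverable: {brief['what']}",
        f"Channel: {brief['where']}",
        f"Goal: {brief['why']}",
        f"Timing: {brief['when']}",
        f"Voice: {brief['how']}",
    ]
    return "\n".join(lines)

prompt = build_prompt({
    "who": "52-year-old facilities director at a mid-sized hospital, burned by three vendors",
    "what": "180-word follow-up email after a discovery call, one specific call to action",
    "where": "email, first message since the call",
    "why": "book a 20-minute site walkthrough next week",
    "when": "two days after the call, before their budget meeting",
    "how": "plain, direct, no marketing language",
})
```

The point of the `ValueError` is the discipline itself: a brief missing any of the six answers never reaches the model.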
"The teams that get value from AI are not the teams with the best prompts. They are the teams that have done the boring work of knowing what they want before they ask for it."
What Bad Prompts Cost You
The hidden tax of skipping the brief is not just bad output. It is the time you spend regenerating, editing, and arguing with the model about what you meant. That time adds up fast, and it is invisible because it never shows up on a report.
We see the same pattern across every team that has plateaued with AI tools:
- Wasted regenerations because the first three outputs missed the point.
- Heavy editing cycles that erase the time savings the tool was supposed to deliver.
- Inconsistent voice across a team, because everyone is prompting differently.
- Loss of trust in the tool, when the real problem was loss of clarity in the request.
None of those are model problems. Every one of them is a brief problem.
Building the Habit
The fix is free, but it requires discipline. Here is how to install it.
- Write the brief before you open the chat. Two or three sentences in a notes app is enough. If you cannot write it, you do not understand the request well enough to delegate it to anyone, human or machine.
- Run the six questions as a checklist for the first two weeks. Out loud if you have to. After that it becomes muscle memory and you will do it without thinking.
- Save your good prompts. When something works, keep the prompt. You are building a library of briefs that already paid for themselves.
- Share the discipline across the team. If one person is prompting well and three are not, the team output is still inconsistent. Make the brief a team standard, not a personal trick.
- Audit failures backwards. When the output is wrong, ask which of the six questions you skipped. The answer will be obvious, and it will sting a little. That sting is the lesson.
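Saving good prompts does not need tooling; a shared file is enough. A minimal sketch, assuming a JSON file as the team library; the file name and schema here are placeholder conventions, not a standard:

```python
# Hypothetical sketch: a team prompt library as a plain JSON file.
# The file name and flat name-to-prompt schema are placeholder choices.
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")

def save_prompt(name: str, prompt: str, library: Path = LIBRARY) -> None:
    """Keep a prompt that worked, so the brief pays for itself again."""
    entries = json.loads(library.read_text()) if library.exists() else {}
    entries[name] = prompt
    library.write_text(json.dumps(entries, indent=2))

def load_prompt(name: str, library: Path = LIBRARY) -> str:
    """Fetch a saved brief by name for reuse or adaptation."""
    return json.loads(library.read_text())[name]
```

A flat file like this is deliberately boring: anyone on the team can read it, diff it, and copy from it, which is what makes the brief a team standard rather than a personal trick.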
The Discipline Is the Deliverable
Here is the part that matters for anyone trying to actually deploy AI inside a business. The teams that get value from these tools are not the teams with the best prompts. They are the teams that have done the boring work of knowing what they want before they ask for it. The prompt is just where that clarity gets written down.
If you cannot answer the six questions about a piece of work, you do not have a prompting problem. You have a scoping problem. And no model on earth is going to solve that for you.
Garbage in, garbage out is not a warning about AI. It is a warning about us. The good news is the fix is free. Slow down for ninety seconds before you type. Write the brief you would hand a human. Then hand it to the machine.
You will be surprised what comes back when you stop expecting it to guess.