Most advice about writing prompts focuses on specificity: be more detailed, add more context, describe the output format you want. That advice is correct as far as it goes. But there's a particular problem it doesn't solve — getting AI output that actually sounds like your brand rather than a professional approximation of it.

The difference matters. Professional approximations are competent. They're also interchangeable. A brand voice prompt that actually works produces copy that sounds like a specific choice was made, not like the model found the most acceptable middle ground.

Why most brand voice prompts fail

The standard approach is to describe the brand voice in adjectives: "conversational but authoritative," "warm without being casual," "direct and confident." These descriptions feel accurate when you write them. The copy they produce tends to feel generic.

The problem is that adjectives describe a range. "Conversational" could describe a dozen different writing styles. The model picks a point somewhere in that range — probably the most common interpretation of the word — and writes toward it. The output matches the description without capturing what makes your voice specific.

Examples don't have this problem. A single paragraph of copy you've actually published shows the model something concrete: the sentence length you prefer, where you allow fragments, how you handle technical terms, whether you write to an individual or a group, what level of formality you default to, which rhetorical moves you reach for. None of these choices are captured in "conversational but authoritative." All of them are visible in the example.

The structure of a prompt that works

A brand voice prompt that consistently produces useful output has three parts, in this order: context, examples, task.

Context orients the model. What's the product, who's the audience, what's the moment in the customer journey. One or two sentences — enough to anchor the request, not a full brief. The model doesn't need everything; it needs the right frame.

Examples do the voice work. Two or three pieces of copy that represent the voice you want, ideally in the same format and at the same length as what you're asking the model to produce. If you're writing a subject line, your examples should be subject lines. If you're writing a landing page opening, show the model three different landing page openings in your voice. The closer the format match, the more directly the model can apply what it learns from the examples.

Task is the specific request. Now that the model has context and examples, what exactly do you want? Be direct. You don't need to re-describe the voice — the examples already did that. The task is just the task.
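If you build prompts in a script rather than a chat window, the three-part structure maps directly onto a template. Here's a minimal sketch in Python; the product, audience, and example copy are invented placeholders, and the structure is the point, not the specifics:

    # A minimal prompt builder: context, then examples, then task.
    # All copy below is placeholder; swap in your own published work.

    def build_prompt(context: str, examples: list[str], task: str) -> str:
        example_block = "\n\n".join(
            f"Example {i + 1}:\n{text}" for i, text in enumerate(examples)
        )
        return f"{context}\n\nHere is copy in our voice:\n\n{example_block}\n\n{task}"

    prompt = build_prompt(
        context=(
            "We sell a project management tool for small agencies. "
            "This email goes to trial users on day 5 of a 14-day trial."
        ),
        examples=[
            "Subject: Your first project is lonely in there",
            "Subject: Five days in. Here's what most people miss.",
        ],
        task="Write three subject lines for a mid-trial check-in email.",
    )
    print(prompt)

Notice the order: the examples sit between the context and the task, so the voice is the last thing the model reads before the request.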

Choosing the right examples

The quality of the examples determines the quality of the output. A few things to consider when selecting them:

Pick copy you're actually proud of — pieces where you read the draft and thought "yes, that's it." Not everything you've published is at that level. The swipe file equivalent of your own work is worth maintaining for exactly this reason.

Prefer specificity over range. It's tempting to pick examples that "show the range" of your voice — something more casual, something more formal, something more creative. In practice this tends to confuse the output. A tighter selection of examples that share the same core voice characteristics produces more consistent results than a deliberately diverse set.

Match the format. A great email you wrote is a poor example for a social post prompt. The model will abstract principles from the example, but format signals are part of what it's learning — rhythm, sentence length, structural choices that differ between formats. Give it examples that look like what you want to receive.

What to do when the output is close but not right

If the first output is in the right territory but not exactly right, the fastest path to improvement is usually a specific edit, not a revised prompt. Rewrite the sentence that's off in your actual voice, then show the model the before and after: "Here's what you gave me. Here's how I'd write it. Write three more using the revised version as your reference." A concrete demonstration is more useful than a description of what you want changed.
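The same before-and-after move is easy to template if you're scripting. A sketch under the same assumptions as above; both lines of copy are invented:

    # Show the model its own draft next to your rewrite, then ask for more.
    def build_revision_prompt(model_draft: str, your_rewrite: str) -> str:
        return (
            f"Here's what you gave me:\n{model_draft}\n\n"
            f"Here's how I'd write it:\n{your_rewrite}\n\n"
            "Write three more using the revised version as your reference."
        )

    print(build_revision_prompt(
        model_draft="Unlock seamless collaboration for your growing team.",
        your_rewrite="Your team's projects, finally in one place.",
    ))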

The instinct to add more to the prompt — more adjectives, more instructions, more examples — usually produces diminishing returns past a certain point. If two examples and a clear task aren't working, the issue is usually in which examples you chose, not how many you included.

Building a prompt library

The prompt you write for a subject line is different from the prompt you write for a LinkedIn post, which is different from the prompt you write for a landing page. Each format has different voice characteristics, different structural requirements, different baseline expectations.

Rather than reconstructing a brand voice prompt from scratch each time, it's worth building a small library: one prompt structure per format you produce regularly, each pre-loaded with your best examples for that format. The investment is maybe an hour to set up; the return is copy that sounds consistently like you across every format you use.
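In script form, the library can be as simple as a dictionary keyed by format, each entry pre-loaded with the context and examples for that format, so only the task changes per request. A sketch with placeholder entries:

    # One entry per format you produce regularly, pre-loaded with examples.
    PROMPT_LIBRARY = {
        "subject_line": {
            "context": "We sell a project management tool for small agencies.",
            "examples": [
                "Subject: Your first project is lonely in there",
                "Subject: Five days in. Here's what most people miss.",
            ],
        },
        "linkedin_post": {
            "context": "Posts are written by our founder, first person, no hashtags.",
            "examples": [
                "We lost a client last month. Here's the email that told us why.",
            ],
        },
    }

    def prompt_for(format_name: str, task: str) -> str:
        entry = PROMPT_LIBRARY[format_name]
        examples = "\n\n".join(entry["examples"])
        return f"{entry['context']}\n\nHere is copy in our voice:\n\n{examples}\n\n{task}"

    print(prompt_for("subject_line", "Write three subject lines for a win-back email."))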

If your AI tool supports persistent context — a way to make your examples available automatically before any prompt — the prompt structure collapses to just context and task. The examples live in the background, always present. That's the practical case for connecting a swipe file to your AI workflow: you stop rebuilding the voice foundation every time and start from a position where the model already understands what you sound like.