Getting AI to write in your brand voice is a problem with several plausible solutions, and which one works depends on how seriously you need the output to sound like you rather than like a well-briefed stranger.

Here are five methods, roughly in order from least to most effective.

Method 1: Describe your voice in adjectives

The default approach. You tell the model your brand is "conversational but authoritative," "warm without being casual," "direct and confident." You've probably tried this.

It produces copy that matches the description the way a stock photo matches a mood board: technically correct, generically inoffensive, indistinguishable from anyone else who used similar adjectives. The model picks the most common interpretation of each word and writes toward it. The output is fine. It doesn't sound like you.

Use this only as a rough orientation, not as a voice specification. Pair it with something more concrete.

Method 2: Add a style guide or brand guidelines document

An improvement over pure adjectives because it's more specific — particular word choices to prefer or avoid, rules about sentence length, guidance on tone in different contexts. A real brand guidelines document contains decisions that a description alone can't capture.

The limitation is that guidelines describe a voice rather than demonstrating one. "We write in second person, present tense, with short sentences and active verbs" tells the model something. An actual paragraph written in second person, present tense, with short sentences and active verbs shows it something different. Both are useful, but they're not equivalent.

Style guides work well as constraints — guardrails around what the model shouldn't do. They're less effective as the primary mechanism for achieving a distinctive voice.

Method 3: Provide examples in the prompt

Meaningfully better. Instead of describing the voice, you show it. Two or three pieces of copy you've actually written, at roughly the format and length of what you're asking the model to produce.

The model learns from examples in a way it can't from descriptions. Sentence rhythm, the specific formality level you default to, how you handle transitions, where you allow fragments, which rhetorical structures you reach for — all of this is visible in actual copy and invisible in a brand voice description.

The practical issue: you have to reconstruct the examples every time. For a one-off task that's fine. For daily AI-assisted writing, rebuilding your voice foundation in every prompt is friction. It also means you're often pulling examples from memory rather than from your best work, so the quality of your inputs varies from session to session.
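Concretely, a few-shot prompt built this way is just your examples followed by the task. A minimal sketch (the example copy and task here are placeholders, not a prescribed template):

```python
# Few-shot prompt assembly: show the voice first, then ask for the task.
# The example copy and task text below are placeholders for your own.
examples = [
    "Your dashboard, minus the noise. See what changed, not everything that didn't.",
    "We read the changelog so you don't have to. Every Friday, five minutes.",
]

task = "Write a headline for our new weekly-digest feature."

prompt = "Here are examples of copy in our brand voice:\n\n"
prompt += "\n\n".join(f"Example {i}:\n{ex}" for i, ex in enumerate(examples, 1))
prompt += f"\n\nMatching the voice of these examples, complete this task:\n{task}"
```

The ordering matters more than the wording: examples before instructions, so the model reads the voice before it reads the ask.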

This method is solid and worth doing. The next two methods are better.

Method 4: Maintain a curated example library and paste from it

A more systematic version of method 3. Instead of improvising which examples to include, you maintain a collection of your best copy — organized by format (headlines, landing pages, emails, social posts) — and pull from it deliberately when building prompts.

The difference between improvising examples and selecting from a curated library is larger than it sounds. The best copy you've written, organized well and accessible quickly, is a different input than whatever you happen to remember. A swipe file of your own work is one of the more underrated tools for AI-assisted writing.

This requires maintenance, but the investment compounds. Every time you produce something you're proud of, it goes into the library. Every time you use the library, the output improves. The collection gets more valuable the longer you keep it.
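The library itself can be as simple as a folder per format. A minimal sketch of pulling from one, assuming your copy lives in per-format folders of markdown files (the folder layout and filenames here are illustrative, not required):

```python
import tempfile
from pathlib import Path

def load_examples(library: Path, fmt: str, limit: int = 3) -> list[str]:
    """Return up to `limit` saved examples for one format (e.g. 'headlines')."""
    folder = library / fmt
    if not folder.is_dir():
        return []
    # Sorted for stable ordering; cap at `limit` so prompts stay short.
    return [f.read_text() for f in sorted(folder.glob("*.md"))[:limit]]

# Demo against a throwaway library; in practice this is a folder you maintain.
root = Path(tempfile.mkdtemp()) / "swipe_file"
(root / "headlines").mkdir(parents=True)
(root / "headlines" / "launch.md").write_text("Ship the thing. Fix it live.")
examples = load_examples(root, "headlines")
```

The point of the structure is retrieval speed: when you sit down to write a landing page, the landing-page examples are one folder away, not scattered across old documents.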

Method 5: Connect your example library to your AI as persistent context

The most effective method — and the one that requires the least per-session effort once it's set up.

If your AI tool supports it (Claude Code does, through MCP), you can connect your swipe file directly. Before the model writes anything, it reads your collection: your best headlines, your brand voice examples, the copy you've saved over time. The examples don't live in the prompt. They're available in the background, loaded automatically.

The practical result is that you stop briefing the AI on your voice for every task. The foundation is already there. You ask for the thing you want; the model writes it with your specific examples as context, not the statistical average of everything it trained on.
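In Claude Code, for example, one way to wire this up is a project-level `.mcp.json` that points the reference filesystem MCP server at your swipe-file folder, giving the model read access to the collection. A minimal sketch (the server name and folder path are illustrative; other tools will have their own configuration):

```json
{
  "mcpServers": {
    "swipe-file": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/swipe-file"]
    }
  }
}
```

Once this is in place, the model can browse and read the library on demand rather than you pasting from it.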

James Webb Young argued in 1939 that every idea is a new combination of existing elements — and that the quality of your output depends on the quality of the raw material available when combinations form. He was describing human creativity. The same logic applies directly to AI output. Give the model better raw material, and the output improves in a way that better prompts alone can't achieve.

The swipe file connected to your AI workflow is the practical implementation of that principle. You build the collection over time, you connect it once, and every AI writing task you do afterward starts from a position where the model already knows what good looks like for you specifically.

Which method to use

If you write with AI occasionally: method 3. Pick your best examples, include them in the prompt, and you'll see a meaningful improvement over adjective-based prompting.

If you use AI for writing regularly: method 4 or 5. Maintain a library of your best copy, organized by format. If you can connect it as persistent context, do that. The upfront investment is an hour or two; the return is every AI writing session you do afterward.