
How to Write Skills That Actually Work

Stuart Brameld


Founder

Writing your first agent skill is easy. Writing one that actually works is harder.

We’ve seen plenty of skills that look fine on paper but fall flat in practice. The AI ignores half the instructions, applies the wrong rules to the wrong tasks, or burns through context with verbose preambles before getting to the work.

The team behind the Agent Skills standard has published guidance for skill creators. Here’s what matters most for marketing teams writing their own.

If you’re new to skills, start with our guides on agent skills for marketers and when to use skills, MCPs, and CLIs.

Start from real work, not theory

The most common mistake: asking an LLM to generate a skill without domain-specific context. The result reads like a textbook chapter. Generic procedures with phrases like “follow best practices” that mean nothing in practice.

As the Agent Skills documentation puts it: “Effective skills are grounded in real expertise.”

Two reliable ways to get there:

Extract from a hands-on task. Run the workflow in conversation with an AI agent. Provide context, make corrections, iterate to a good outcome. Then ask the agent to extract the reusable pattern into a skill. Pay attention to the corrections you made along the way. Those are usually the most valuable instructions in the final skill.

Synthesize from existing artifacts. Feed the AI your brand guidelines, content briefs, campaign retros, or tone-of-voice documents. A copywriting skill built from your team’s own approved drafts (and rejected ones) will outperform one built from generic copywriting advice.

Either way, the value lives in your team’s specifics, not the AI’s general knowledge.

Get the description right

A skill with great instructions and a vague description is invisible. The AI only sees the description field when deciding whether to load a skill. Get it wrong and the skill never fires.

Matt Pocock makes the point bluntly in his write-a-skill skill:

“The description is the only thing your agent sees when deciding which skill to load.”

The practical rule: the description is the front door. Spend more time on it than you think you need to.
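For illustration, compare a vague description with a specific one in a skill's frontmatter. The skill names and wording here are hypothetical, not taken from the Agent Skills docs:

```markdown
<!-- Too vague: the agent can't tell when this applies -->
---
name: blog-helper
description: Helps with blog content.
---

<!-- Specific: says what the skill does and when it should fire -->
---
name: blog-article-writer
description: >
  Drafts and edits blog articles from a content brief. Use when asked
  to write, rewrite, or review a blog post, outline, or meta description.
---
```

If the description doesn't name the tasks that should trigger it, the skill won't be selected for them, no matter how good the instructions inside are.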

Spend context wisely

Once a skill activates, its full content loads into the AI’s context window alongside your conversation, system prompts, and any other active skills. Every token competes for attention.

Three practical implications:

Add what the agent doesn’t already know. Skip the basics. The agent knows what an SEO audit is, what UTM parameters do, and how email open rates work. Focus on what’s specific to your team: your tone of voice, your reporting structure, your reviewers, your historical mistakes.

Design coherent units. A skill that covers “everything about content” is too broad. A skill that covers “writing the meta description” is too narrow. A skill that covers “writing a blog article from brief to publication” is about right.

Use progressive disclosure for big skills. The Agent Skills specification recommends keeping the main SKILL.md under 500 lines. When you need more, move detailed reference material into separate files and tell the agent when to load each one. “Read references/seo-checklist.md before publishing” works. “See references/ for details” doesn’t.
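A sketch of how a progressively disclosed skill might be laid out, with illustrative file names:

```text
blog-article/
├── SKILL.md                     # core workflow, kept under 500 lines
└── references/
    ├── seo-checklist.md         # loaded only at the publishing step
    └── brand-voice-rules.md     # loaded only while drafting copy
```

Inside SKILL.md, point to each file at the step where it's needed ("Before publishing, read references/seo-checklist.md and fix any failures") so the agent only pays the context cost when the material is relevant.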

Calibrate control to the task

Not every part of a skill needs the same level of prescription. Match the specificity of your instructions to how fragile the task is.

Give the agent freedom when multiple approaches are valid. A blog editing skill can describe what to look for (clarity, jargon, transitions) without prescribing exact steps. Explaining why a check matters beats issuing rigid directives: the agent makes better context-dependent decisions when it understands the purpose.

Be prescriptive when consistency matters or a specific sequence must be followed. Publishing checklists, brand-mandated formats, or compliance steps belong in this category. Spell them out exactly.

The Agent Skills guide is direct on this:

“Match the specificity of your instructions to the fragility of the task.”

Most skills mix the two. Calibrate each section independently.

Provide defaults, not menus

When multiple tools or approaches could work, pick one and mention alternatives briefly. A skill that says “use Brand Voice A or B or C, depending on context” forces the agent to decide every time. A skill that says “default to Brand Voice A. Use B for technical audiences. Use C only when explicitly instructed” gives a clear path with escape hatches.

Teach procedures, not specific answers

A skill should teach the agent how to approach a class of problems, not what to produce for one specific instance.

```markdown
<!-- Specific answer (only useful for this exact task) -->
For Q2, focus on the SMB segment and emphasise the new pricing tier.

<!-- Reusable procedure -->
1. Read the current quarter's strategy from references/strategy.md
2. Identify the priority segment from the strategy file
3. Pull the latest positioning from references/positioning.md
4. Draft three message variants tailored to that segment
```

The first only works once. The second works every quarter.

The patterns that actually work

A few patterns appear in skills that work well. Use the ones that fit your workflow.

Gotchas sections

The single highest-value content in many skills is not generic advice but concrete corrections: mistakes the agent will make without being told otherwise. As the Agent Skills guide puts it:

“The highest-value content in many skills is a list of gotchas — environment-specific facts that defy reasonable assumptions.”

For marketing teams, gotchas are the facts about your own stack and process that no general playbook covers.
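A sketch of a gotchas section, with invented specifics purely for illustration:

```markdown
## Gotchas
- The "newsletter" list in our ESP is actually the product-updates list;
  the real newsletter list is called "weekly-digest".
- UTM campaign names must be lowercase with hyphens; mixed case breaks
  our attribution dashboard.
- Never schedule sends for Monday mornings; the data pipeline locks the
  ESP API until 10am.
```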

When the agent makes a mistake you have to correct, add it to your gotchas list. It’s the fastest way to improve a skill.

Templates for output format

AI agents pattern-match well against concrete structures. If you want a specific report format, give the agent a template, not a prose description of one.

```markdown
## Weekly performance report

# [Channel] performance, week of [date]

## Headline number
[Single metric that summarises the week]

## What changed
- Bullet 1
- Bullet 2

## What we're testing next
1. Test 1
2. Test 2
```

Checklists for multi-step workflows

An explicit checklist helps the agent track progress and avoid skipping steps. Useful for processes with dependencies or validation gates: pre-launch campaign reviews, content publishing pipelines, anything with a “do not ship without” step.
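As a sketch, a checklist section in a skill might look like this (items invented for illustration):

```markdown
## Pre-publish checklist
- [ ] Title under 60 characters
- [ ] Meta description written and under 155 characters
- [ ] All internal links resolve
- [ ] Draft reviewed against brand voice rules
- [ ] Do not ship without: sign-off on any pricing claims
```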

Validation loops

Tell the agent to check its own work before moving on. The pattern is: do the work, run a validator (a script, a checklist, or a self-check), fix any issues, repeat until validation passes. A reference document can serve as the validator. “Before publishing, check the draft against references/brand-voice-rules.md. Revise anything that doesn’t match. Repeat until clean.”
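Written out as skill instructions, that loop might read (wording illustrative):

```markdown
## Validation loop
1. Draft the deliverable.
2. Check it against references/brand-voice-rules.md, rule by rule.
3. List every rule the draft violates.
4. Revise the draft to fix each violation.
5. Repeat steps 2-4 until no violations remain, then hand off.
```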

Refine with real execution

The first draft of a skill almost never lands right. Run it against real tasks, then feed the results back into the next iteration. Read what the agent actually did, not just the final output. If it wasted time on unproductive steps, that’s a signal the instructions are too vague, don’t apply to the task, or present too many options without a clear default.

The Agent Skills guide is clear on this: “Even a single pass of execute-then-revise noticeably improves quality, and complex domains often benefit from several.”

The best skills aren’t written. They’re rewritten.

The bottom line

A working skill looks more like a runbook than a clever prompt. It captures the specifics of how your team operates, focuses on what the AI doesn’t already know, and matches its prescriptiveness to the task. Get those three things right and you’ll build skills your team will actually use, and that AI agents will actually follow.

