Skills as reusable playbooks

Agent Skills are the procedural layer of your AI environment. While AGENTS.md provides context and norms, Skills provide executable playbooks for tasks you perform repeatedly.

The beauty of this approach lies in prompt compression. Instead of re-explaining a complex workflow every session, you invoke a named skill that carries all the necessary instructions, examples, and scripts.

What is a skill?

A skill is a self-contained folder that follows the Agent Skills specification. At its heart is a SKILL.md file that acts as the primary prompt for the agent.

A well-structured skill typically includes:

  • SKILL.md: The instructions, trigger conditions, and expected output format.
  • references/: Domain knowledge or documentation the agent needs.
  • scripts/: Executable code (Python, Bash) for deterministic tasks.
  • assets/: Templates, icons, or "good" examples for the agent to emulate.
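Concretely, the layout above might look like this for a hypothetical release-notes skill. The skill name, file names, and step details are illustrative; check the Agent Skills specification for the exact frontmatter schema:

```markdown
---
name: release-notes
description: Drafts release notes from merged PRs. Use when the user asks to prepare or update release notes.
---

# Release notes

1. Run `scripts/collect_prs.py` to gather merged PRs since the last tag.
2. Group the changes into Features, Fixes, and Internal.
3. Match the tone and structure of `assets/example-release-notes.md`.
4. For wording questions, consult `references/style-guide.md`.
```

The frontmatter `description` doubles as the trigger condition: it tells the agent when to pull this skill into context, so it pays to state both what the skill does and when to use it.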

Keep openskills on your radar

openskills is worth knowing about. It offers a universal installer for skills that tracks sources and syncs available skills into your AGENTS.md. For now, just be aware it exists.

That said, I use my own skill-installer instead. It respects your current state rather than overwriting it. The key difference is that openskills assumes skills live in ~/.agents or .agents, while most agentic coding harnesses use different directories entirely.

Install the "Big Three"

With that context in mind, here are three global skills from my skills repository I'd recommend installing:

  • skill-installer: Installs and updates skills while respecting your current state.
  • skill-creator: Lowers the energy barrier for building new playbooks.
  • agents-md-improver: Keeps your repository memory fresh by autonomously updating the code map.

Tacit knowledge and the "ESL benefit"

One of the most powerful uses for skills is capturing tacit domain expertise.

A teammate of mine recently used skill-creator to encode her implicit knowledge from over three years of debugging chromatography traces. For her, this was a massive productivity multiplier. English is not her native language, and the structured, Markdown-first nature of skills allowed her to "download her brain" into an executable procedure without being bottlenecked by the nuances of conversational prompting.

This pattern is a huge win for international teams. It levels the playing field by turning individual intuition into a shared, high-signal technical artifact.

The iteration loop

A skill is not a static document; it is a living artifact. I've found this loop works best:

  1. Review: Scrutinize the agent's output from a skill.
  2. Revise: Make surgical edits to the output until it meets your standards.
  3. Feed back: Give the revised version back to the agent.
  4. Update: Tell the agent, "Update this skill with this new example of what looks good."

Over time, the skill evolves to match your taste and technical rigor.

See this in action

The CI/CD chapter shows how automated systems need reliable monitoring. Debugging a failed GitHub Actions run is a perfect candidate for a skill. Instead of manual log diving, you can invoke a skill that pulls the failed job logs, identifies the error, and proposes a fix.
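As a sketch, such a skill's SKILL.md might look like the following. The skill name and step details are illustrative, though `gh run list` and `gh run view --log-failed` are real GitHub CLI commands:

```markdown
---
name: ci-debugger
description: Diagnoses failed GitHub Actions runs. Use when a CI run fails or the user asks why the build is red.
---

# CI debugger

1. Find the most recent failed run: `gh run list --status failure --limit 1`
2. Pull only the logs of the failing jobs: `gh run view <run-id> --log-failed`
3. Identify the first error in the log and locate the file and line it points to.
4. Propose a minimal fix, and run the relevant tests locally before suggesting a commit.
```

Because the heavy lifting is two deterministic CLI calls, steps 1 and 2 are also good candidates for a `scripts/` helper, leaving the agent to do only the diagnosis and the fix.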

For more on how this fits into your overall practice, see the next chapter on compounding agent improvement.