Eric J Ma's Website

Agent skills are also human skills

written by Eric J. Ma on 2026-03-14 | tags: automation documentation workflow context dependencies github obsidian productivity skills structure agents


Workflow-specific agent skills don't just automate tasks — they encode how you work, down to your tools, your file structure, and your philosophy. I explore this through two examples: a daily sign-off skill that inherits my Obsidian setup and bullet journal structure, and a scientific EDA skill that goes further, encoding a whole epistemology of how analysis should proceed. I argue there are three layers of implicit assumptions in any workflow skill — tool dependencies, organizational preferences, and epistemic preferences — and that the last one is the hardest to see and the most important to document.

Agent skills are great, but I've come to believe that skills alone aren't enough.

I've been thinking about this while developing and using agent skills at home and at work. There's a distinction I've started to draw between two types of skills. Tool-specific skills document how to work with a particular tool or package. Those are fine, but pointing an agent at llms.txt often works just as well. The more interesting category is workflow-specific skills: skills that encode how you actually work and string together multiple tools to get a job done (Christensen).

Workflow-specific skills are what I want to talk about here.

A concrete example

My daily sign-off skill, which I use at work, is a case in point. I use it to wrap up my day. When I sign off, I need two things: my meeting notes (which I paste into Obsidian throughout the day) and my GitHub activity (commits, PRs, comments, reviews). The skill handles the GitHub part by querying the GitHub CLI and formatting everything into my daily bullets template.
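To make the GitHub half concrete, here's a minimal sketch of what that query-and-format step might look like. This is my illustration, not the skill's actual code; it assumes the `gh` CLI is installed and authenticated, and the function names are hypothetical.

```python
import json
import subprocess


def fetch_my_prs(repo: str) -> list[dict]:
    # Ask the GitHub CLI for my PRs in a repo. Requires `gh` on PATH
    # and an authenticated session (`gh auth login`).
    result = subprocess.run(
        ["gh", "pr", "list", "--repo", repo, "--author", "@me",
         "--state", "all", "--json", "title,url,updatedAt"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


def as_daily_bullets(prs: list[dict], day: str) -> list[str]:
    # Keep only PRs touched on `day` (an ISO date like "2026-03-14")
    # and format them as markdown bullets for the daily template.
    return [
        f"- [{pr['title']}]({pr['url']})"
        for pr in prs
        if pr["updatedAt"].startswith(day)
    ]
```

Note that even this tiny sketch smuggles in assumptions: that `gh` exists, that PRs are the unit of work worth logging, and that the output should be markdown bullets.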

But here's where it gets opinionated. My skill assumes:

  • You have the GitHub CLI installed
  • You do PRs as part of your work (not all technical managers do)
  • You write into a monthly file as your bullet journal, rather than having a single note per day

That last point is the most idiosyncratic. I don't have a single note per day. Instead, each month's file contains my collection of daily bullets. The motivation here is a line from the Zen of Python -- "flat is better than nested". On March 26, I have entries for that day inside the March file, rather than a reference from the March file to a separate March 26 note. This might not reflect your own preferences; you might prefer one note per day, or use a different structure entirely. But this is what my skill expects, and it's baked into how the skill works.

If you want to use my daily sign-off skill, you're not just adopting the skill. You're adopting my way of working. You're inheriting my file structure, my tool preferences, my mental model for organizing information. The skill comes with implicit assumptions about how you work, what tools you use, and what your environment looks like.

A second example, cutting deeper

The daily sign-off is mostly about tool and structure preferences. But some skills go further — they encode a philosophy.

My scientific EDA skill is a good example. On the surface it looks like a set of technical rules: use uv with PEP723 inline scripts, save plots as WebP (not PNG), organize each analysis session into a timestamped folder, keep an append-only journal.md. But look at what those rules actually encode:

  • One step at a time, ask "why" before executing — this isn't a technical constraint. It reflects a skepticism of agents that run ahead of the analyst. I believe good exploratory analysis is a dialogue, not a sprint.
  • Capture the research question before touching the data — this reflects a conviction that context shapes what you should even be looking for. Data without a question is just noise.
  • Append-only journal — this reflects a belief that good science is narrated, not just executed. The journal isn't a log file; it's a record of reasoning.
  • WebP over PNG — a small but deliberate aesthetic and practical stance on file hygiene.
  • uv + PEP723 — a specific bet on the Python toolchain that not everyone has made.

None of these are neutral defaults. Each one is a choice that reflects how I think scientific work should be done. If you use my EDA skill but don't share that underlying philosophy, you'll find yourself fighting it. The one-step-at-a-time rule will feel like friction. The journaling requirement will feel like overhead. The skill isn't broken — it's just mine.

This is a different kind of assumption from the daily sign-off. There, you're inheriting my tools and file layout. Here, you're inheriting my epistemology. That's harder to see, harder to document, and harder to transfer.
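The structural rules, at least, are easy to make concrete. Here's a sketch (my own naming, not the skill's actual code) of the timestamped-session-folder and append-only-journal conventions, written in the uv + PEP 723 inline-script style the skill prescribes:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = []
# ///
# Run with `uv run session.py` -- the header above is PEP 723 inline metadata,
# which uv reads to set up an isolated environment for this single script.
from datetime import datetime
from pathlib import Path


def new_session(root: Path) -> Path:
    # One timestamped folder per analysis session.
    session = root / datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
    session.mkdir(parents=True, exist_ok=True)
    return session


def journal(session: Path, entry: str) -> None:
    # Append-only: past reasoning is never rewritten, only added to.
    with (session / "journal.md").open("a", encoding="utf-8") as f:
        stamp = datetime.now().isoformat(timespec="seconds")
        f.write(f"\n## {stamp}\n\n{entry}\n")
```

The epistemic rules, by contrast, resist this treatment entirely. There is no function you can write for "ask why before executing."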

What this means

I call this procedural context. A workflow-specific agent skill is more than documentation for the coding agent. It also implicitly encodes a person's systems and structures for working. Without that procedural context documented, the skill is only half-useful to another person.

The two examples above hint at different layers of procedural context. There are at least three:

  1. Tool dependencies — what software needs to be installed (GitHub CLI, uv, etc.)
  2. Organizational preferences — how you structure files, folders, and notes
  3. Epistemic preferences — how you believe the work should actually proceed

The third layer is the most invisible. It's also the most important, and the hardest to transfer. You can install a CLI tool in five minutes. Adopting someone else's philosophy of scientific analysis is a different ask entirely.

Someone on Twitter put it well (I wish I could remember who, so I won't take credit): with agent skills, we finally found a way to get coders to write documentation. We'll document how we work if it means we can delegate that work to some{one/thing} else!

At the end of the day, agent skills are just automation and documentation. We're automating away the minutiae, and I love that. But if your skill describes a workflow, you need to document the assumptions too. What are the dependencies? What tools need to be installed? What mental structures does the person need? What does the user need to know to verify the output is correct?
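The first of those questions, at least, can be answered mechanically. A sketch of a preflight check a skill could ship with -- the tool names here are the ones my skills happen to assume; substitute your own:

```python
import shutil

# Hypothetical: the CLI tools this skill assumes are installed.
REQUIRED_TOOLS = ("gh", "uv")


def missing_tools(required: tuple[str, ...] = REQUIRED_TOOLS) -> list[str]:
    # shutil.which returns None when a tool is not on PATH.
    return [tool for tool in required if shutil.which(tool) is None]
```

Running a check like this up front turns an implicit tool dependency (the first layer above) into an explicit, verifiable one. The organizational and epistemic layers still need prose.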

Without that context, you can't evaluate whether the coding agent used the skill correctly -- and verification matters! You need to know what to look for when an LLM does work on your behalf.

The takeaway

Agent skills implicitly involve human skills. If that's true, then agent skills are also for humans. They're not merely instructions for an agent. They're documentation of how someone accomplishes a job, with all the prerequisites and context needed to reproduce it.

So when you write a workflow skill, think about the other people who might use it. Ask the skill-creator skill to include the dependencies, explain the environment, and describe what success looks like. The skill alone isn't enough. We have to teach the next person how to use it too.


Cite this blog post:
@article{
    ericmjl-2026-agent-skills-are-also-human-skills,
    author = {Eric J. Ma},
    title = {Agent skills are also human skills},
    year = {2026},
    month = {03},
    day = {14},
    howpublished = {\url{https://ericmjl.github.io}},
    journal = {Eric J. Ma's Blog},
    url = {https://ericmjl.github.io/blog/2026/3/14/agent-skills-are-also-human-skills},
}
  

I send out a newsletter with tips and tools for data scientists. Come check it out at Substack.

If you would like to sponsor the coffee that goes into making my posts, please consider GitHub Sponsors!

Finally, I do free 30-minute GenAI strategy calls for teams that are looking to leverage GenAI for maximum impact. Consider booking a call on Calendly if you're interested!