Agentic Context is King
Managing context with skills and sub-agents
It’s no longer enough to have an agent instruction file to help you code. Now you need to manage conversational context to achieve the best outputs. Sub-agents and skills will make that happen.
To start, when you’re interacting with an LLM like Claude or ChatGPT, you’re talking to an agent, and the back-and-forth between you and the agent is the context. You should care about this context because it’s what the agent knows; if you were to start a new session, all of that context would be lost. This context is what you’re managing with sub-agents and skills.
I like to think of sub-agents as “sidecars”: they run off to the side and have their own context, independent of the primary conversation. That means the bulk of what they do is lost to the primary context, and that loss is exactly what you’re managing. You might ask yourself: Do I care about that lost context? Would I prefer to have that knowledge in my primary thread? Is this context something that would improve the final outputs? The answer to these boils down to whether preserving the context will be useful to you in the near future.
Here are some examples of sub-agents I use to manage context scope: agents that
capture architectural decision artifacts
perform adversarial functions on my code
produce project documentation
open Pull Requests
If you need to write documentation, have an agent review your code (especially from an adversarial point of view), or fire-and-forget the opening of a PR, you can safely discard their intermediate thinking, since you often only care about the final output.
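As a concrete sketch, Claude Code lets you define a sub-agent as a markdown file with YAML frontmatter under .claude/agents/; the name, description, tool list, and wording below are all hypothetical:

```markdown
---
name: adversarial-reviewer
description: Reviews code changes from an adversarial point of view,
  hunting for bugs, edge cases, and bad assumptions. Use after
  significant code changes.
tools: Read, Grep, Glob
---

You are an adversarial code reviewer. Assume the code is broken until
proven otherwise. Explore as much as you need, but report back only a
concise list of findings; your intermediate thinking stays in your own
context, not the primary thread.
```

Only that final report returns to the main conversation; everything else lives and dies in the sidecar context.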
So those are sub-agents. Let’s go further and talk about skills.
Skills are what I like to reach for before getting fancy with sub-agents. Skills are small, focused bits of knowledge that an agent can leverage when necessary. Examples of skills are
common language conventions
preferences for writing tests
composing documentation
working with data models
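Each of these can live as a small SKILL.md file with YAML frontmatter, following the skill format Claude Code uses; here is a minimal sketch for the documentation skill (name and wording are hypothetical):

```markdown
---
name: composing-documentation
description: Conventions for composing project documentation. Use when
  writing or updating docs.
---

Lead every document with a one-paragraph summary of what it covers.
Prefer short sections with concrete examples over long prose.
```

The description tells the agent when to pull the skill in; the body is the focused knowledge it loads when it does.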
Unlike sub-agents, skills load directly into an agent’s context; that means you get specialized knowledge without losing valuable context. For example, when I write tests for a has_many relationship, I like to be very specific about the check. Say a parent has many children. I don’t want to test that one child exists, and I don’t need to instantiate ten; I want precisely two. Why? Because I’ve seen leaky tests before where I create two objects but a third sneaks in somehow (overly complex test harnessing). If I don’t check for precisely two and simply look for inclusion, I’ll get a false positive. If I check for one, that doesn’t prove there can be many. If I check for precisely two, I catch the case where the test harness itself is bad (i.e., there are more than two). And so I encoded this in a skill’s directive like,
Model relationships (has_many, etc.) must be tested with two associated records. Assert the collection contains exactly both using contain_exactly(item1, item2). Do not use include(one_item); that passes even when the association is broken.
What I get from this is increased certainty that agents performing work for me take advantage of my own experience. In this case, be careful with inclusionary testing.
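To see why the exact check matters, here is a plain-Ruby sketch of the semantics. The data is hypothetical, standing in for a leaky test harness, and since contain_exactly is an RSpec matcher, I mimic it here with a sorted comparison:

```ruby
# Suppose a leaky test harness creates a stray third child
# alongside the two we set up explicitly.
expected = ["child_a", "child_b"]
actual   = ["child_a", "child_b", "stray_child"]

# Inclusion-style check (like asserting include for each item):
# passes despite the stray record, a false positive.
inclusion_passes = expected.all? { |c| actual.include?(c) }

# Exact check (the semantics of contain_exactly(item1, item2)):
# fails, exposing the broken harness.
exact_passes = actual.sort == expected.sort

puts inclusion_passes  # => true
puts exact_passes      # => false
```

The inclusion check is satisfied by any superset, which is exactly how the false positive slips through; the exact check is only satisfied by the two records and nothing else.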
What’s more, the nice thing about skills is that they can be referenced by sub-agents. That is, if I eventually want a coding agent that specializes in Ruby, I can tell it to reference all the skills I have for writing Ruby. And that’s where the two ideas link up. You can view skills as building blocks to future agents; mix a little of this knowledge with that knowledge and wrap it up into a single agent that can perform the task on its own.

