I have yet to read this article in full, but I love trees; I'm an amateur AST-transformation nerd. Kinda related, but I've been trying to figure out how to generalize the lessons learned from this experiment in autogenerating massive bilingual dictionary and phrasebook datasets: https://youtu.be/nofJLw51xSk
into a general-purpose markup language + runtime for multi-step LLM invocations. Efforts so far have gotten nowhere, though. I have some notes on my GitHub profile readme if anyone's curious: https://github.com/colbyn
Here's a working example: https://github.com/colbyn/AgenticWorkflow
(I really dislike the ‘agentic’ term since in my mind it’s just compilers and a runtime all the way down.)
But that's more serial, procedural work; what I want is full-blown recursion in some generalized way (and without the Liquid templating hacks I keep resorting to): deeply nested LLM invocations akin to how my dataset generation pipeline works.
PS
Also, I really dislike prompt text in source code. I prefer to factor it out into standalone prompt files, using the XML format in my case.
You should also try to make the context query a first-class primitive.
The context-query parameter can be a natural-language instruction for how to compact the current context before passing it to a subagent.
When invoking, you can use values like "empty" (nothing, start fresh), "summary" (summarize the conversation), "relevant information from a web designer's PoV" (a specific one: extract what's relevant to that role), "bullet points about X", etc.
This way the LLM can decide what's relevant and express it tersely, and the compaction itself won't clutter the current context: it's handled by a compaction subagent in isolation and discarded on completion.
What makes it first class is that it has to be a built-in tool with access to the context (the client itself); i.e., it can't be implemented as an isolated MCP server, because you want to avoid rendering the whole context as an input parameter during the tool call. You just want a short query.
I.e., you could add something like depends_on, which is also based on context queries, but here it's a map: the keys are subagent conversation IDs that block this handed-over task, and each value is a context query describing what to extract from that conversation and inject.
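A minimal sketch of what such a built-in spawn tool could look like. All names here (`make_spawn`, `llm`, `run_agent`) are hypothetical stand-ins for whatever client and runner you use, not any real API:

```python
from typing import Callable

Message = dict[str, str]

def make_spawn(llm: Callable[[str], str],
               run_agent: Callable[[list[Message]], str]):
    """Build a spawn tool whose context_query controls compaction (sketch)."""
    def spawn_subagent(task: str, context_query: str,
                       parent_messages: list[Message]) -> str:
        if context_query == "empty":
            seed: list[Message] = []  # clean slate, nothing inherited
        else:
            # A throwaway compaction subagent reads the parent context in
            # isolation; only its short output survives into the child.
            compacted = llm(
                f"From the conversation below, extract: {context_query}\n\n"
                + "\n".join(m["content"] for m in parent_messages))
            seed = [{"role": "user", "content": compacted}]
        return run_agent(seed + [{"role": "user", "content": task}])
    return spawn_subagent
```

Note that "empty" and a full-copy query recover the spawn/fork extremes, and everything in between falls out of the same single primitive.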
This kind of research is underrated. I have a strong feeling that these kinds of harness improvements will lead to solving whole classes of problems reliably, and matter just as much as model training.
We built something like this by hand without much difficulty for a product concept. We'd initially used LangGraph, but we ditched it and built our own, partly out of revenge for LangGraph wasting our time with what could've simply been an ordinary Python function.
Never again committing to any "framework", especially when something like Claude Code can write one for you from scratch exactly for what you want.
We have code on demand. Shallow libraries and frameworks are dead.
Claude basically does this now (including deciding when to use subagents, tools, and agent teams). I built a similar thing a month ago and saw the writing on the wall.
Historically, Claude Code used sequential planning with linear dependencies via tools like TodoWrite and TodoRead. There are open-source MCP equivalents of TodoWrite.
I've found both the open-source TodoWrite and building your own TodoWrite with a backing store surprisingly effective for planning, and for avoiding the developer-defined roles and developer-defined plans/workflows that the author calls for in the blog for AI-SRE use cases. It also stops the agent from looping indefinitely.
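A backing-store TodoWrite can be tiny. Here's a sketch using sqlite3; the class and method names are made up for illustration, not Claude Code's actual tool schema:

```python
import sqlite3

class TodoStore:
    """Minimal TodoWrite/TodoRead with a persistent backing store (sketch)."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS todos ("
            "id INTEGER PRIMARY KEY, text TEXT, "
            "status TEXT DEFAULT 'pending')")

    def write(self, text: str) -> int:
        """Record a new pending todo; returns its id."""
        cur = self.db.execute("INSERT INTO todos (text) VALUES (?)", (text,))
        self.db.commit()
        return cur.lastrowid

    def complete(self, todo_id: int) -> None:
        self.db.execute(
            "UPDATE todos SET status='done' WHERE id=?", (todo_id,))
        self.db.commit()

    def read_pending(self) -> list[tuple[int, str]]:
        """What's left to do; an empty result ends the agent loop."""
        return self.db.execute(
            "SELECT id, text FROM todos WHERE status='pending'").fetchall()
```

Exposing write/complete/read_pending as tools externalizes the plan state, and terminating the agent loop when read_pending() comes back empty is what prevents indefinite looping.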
Cord is a clever model and protocol for tree-like dependencies using the Spawn and Fork model for clean context and prior context respectively.
Not exactly a surprise Claude did this out of the box with minimal prompting considering they’ve presumably been RLing the hell out of it for agent teams: https://code.claude.com/docs/en/agent-teams
I wonder if the “spawn” API is ever preferable over “fork”. Do we really want to remove context if we can help it? There will certainly be situations where we have to, but then what you want is good compaction for the subagent. “Clean-slate” compaction seems like it would always be suboptimal.
Is there any reason to explicitly have this binary decision? Instead, why not a single primitive where the parent dynamically defines the child's context, naturally resulting in spawn, fork, or anything in between?
This approach seems interesting, but in my experience, a single "agent" with proper context management is better than a complicated agent graph. Dealing with hand-off (+ hand back) and multiple levels of conversations just leaves too much room for critical information to get siloed.
If you have a narrow task that doesn't need full context, then agent delegation (putting an agent or inference behind a simple tool call) can be effective. A good example is to front your RAG with a search() tool with a simple "find the answer" agent that deals with the context and can run multiple searches if needed.
I think the PydanticAI framework has the right approach of encouraging agent delegation & sequential workflows first, and trying to steer you away from graphs [0].
[0]: https://ai.pydantic.dev/graph/
The tasks tool [1] is designed to validate a DAG as input; its non-blocked tasks become cheap parallel subagent spawns using Erlang/OTP.
[1]: https://github.com/matteing/opal
It works quite well. The only problem I’ve faced is getting it to break down tasks using the tool consistently. I guess it might be a matter of experimenting further with the system prompt.
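The comment above runs its subagents on Erlang/OTP, but the validate-and-find-ready-tasks step on its own is small enough to sketch in Python with the standard library's graphlib (the input shape, task id to dependency ids, is my assumption about the tool):

```python
from graphlib import TopologicalSorter

def ready_tasks(tasks: dict[str, list[str]]) -> set[str]:
    """Validate that `tasks` (task id -> ids it depends on) forms a DAG,
    and return the tasks with no unmet dependencies, i.e. the ones that
    can be spawned as parallel subagents right away.
    Raises graphlib.CycleError if the task list is cyclic."""
    ts = TopologicalSorter(tasks)
    ts.prepare()  # the cycle check happens here
    return set(ts.get_ready())
```

As tasks finish, the same TopologicalSorter can keep handing out newly unblocked batches via done()/get_ready().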
Opencode getting fork was such a huge win. It's great to be able to build something out, then keep iterating by launching new forks that still have plenty of context space available, but which saw the original thing get built!
Neat concept though; it would be cool to see some tests of its performance on real tasks.
I've been playing with a closely related idea of treating the context as a graph, inspired by the KGoT paper: https://arxiv.org/abs/2504.02670
I call this "live context" because it's the living brain of my agents.
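A toy version just to make context-as-a-graph concrete: a bare triple store the agent can write facts into and query by entity, instead of replaying a transcript. (KGoT itself does far more than this.)

```python
class LiveContext:
    """Minimal context-as-graph sketch: facts are stored as
    (subject, relation, object) triples rather than chat turns."""

    def __init__(self):
        self.triples: set[tuple[str, str, str]] = set()

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.triples.add((subject, relation, obj))

    def about(self, subject: str) -> list[tuple[str, str]]:
        """Everything known about one entity, ready to inject into a prompt."""
        return sorted((r, o) for s, r, o in self.triples if s == subject)
```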
In the short run, I've found the OpenAI Agents SDK to be the best.