# Agentic Loopiness
alex-krentsel's visual for how agent systems stack: nested matryoshka dolls where each outer shell is a new loop around the layer below. A compact way to explain why phase-3 agents aren't qualitatively different kinds of programs — they're just the next outer doll.
## The nesting
From inside out:
- Transformer inference — one forward pass, one next token. (The 2017 "Attention Is All You Need" paper.)
- LLM generation — transformer in a loop over tokens; produces a full response.
- Assistant — LLM in a loop over turns; multi-step conversation, internal reasoning passes.
- Scoped agent — assistant in a loop over tool calls; read/write files, run commands.
- Autonomous agent — scoped agent in a loop over tool discovery and self-modification; owns its environment, adds skills, edits config.
Each layer is "just" the previous one wrapped in a new while loop. The outer layer you accept determines what the system can be.
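The nesting can be made literal: each layer is a loop whose body is the layer below. A minimal sketch, with every function and name (the token stubs, the `discovered_tool_*` naming) purely illustrative — not a real agent framework:

```python
def transformer_step(tokens):
    # innermost layer: one forward pass -> one next token (stubbed)
    return f"t{len(tokens)}"

def llm_generate(prompt, max_tokens=3):
    # transformer in a loop over tokens -> a full response
    tokens = [prompt]
    for _ in range(max_tokens):
        tokens.append(transformer_step(tokens))
    return " ".join(tokens)

def assistant(turns):
    # LLM in a loop over conversation turns
    return [llm_generate(t) for t in turns]

def scoped_agent(goal, tools, max_steps=3):
    # assistant in a loop over tool calls, drawn from a fixed tool set
    transcript = []
    for step in range(max_steps):
        reply = assistant([f"{goal} step {step}"])[0]
        tool = tools[step % len(tools)]  # stand-in for tool selection
        transcript.append((reply, tool))
    return transcript

def autonomous_agent(goal, tools, rounds=2):
    # outermost loop: the agent may extend its own tool set between runs
    for r in range(rounds):
        scoped_agent(goal, tools)
        tools = tools + [f"discovered_tool_{r}"]  # self-modification
    return tools
```

The point of the sketch is structural: swapping which function sits at the top of the call stack is all it takes to move up a layer, which is why the outer loop you accept determines what the system can be.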
## Why it's useful
- Reifies the "harness" — a harness is whatever code runs the outermost loop. See harness-engineering.
- Separates capability from authorship. Each loop shifts more control-flow authorship from the human to the model.
- Predicts the next layer. Krentsel's open question: what's outside phase-3? He guesses malleable architecture — a loop where the architecture itself is edited by the agent, not just skills/config.
## Strange-loops connection
Krentsel explicitly invokes Hofstadter's Gödel, Escher, Bach: at the autonomous-agent level, the loop wraps all the way around — the agent is now the interface by which the agent is reconfigured. The edges of "author" and "subject" blur. He reads this as a flywheel-takeoff signal.
## Pairs with
- phase-3-autonomous-agents — the phase-ladder version of the same argument
- sessions-as-processes — inside a single loop layer, the OS-abstraction analogy
- design-over-implementation — implication: the value is in which loops you close, not how you implement each one
- learning-agent-loop — the reflexive / self-improving variant