Ryan Lopopolo — Symphony + Extreme Harness Engineering (Latent Space)

Source: transcript · Latent Space · April 2026. Companion interview to Ryan's AIE London keynote (ryan-lopopolo-harness-engineering-2026); this one goes deeper on symphony, the openai-frontier team, and the dual-agent PR loop.

Threads this video contributes to

The 1M-LOC / 0% proof case. Five months, an internal beta Electron app, 1M lines of code, 0% human-written, and — newly surfaced here — 0% human-reviewed before merge. This moves the harness-engineering claim from "humans steer, agents execute" to something stronger: for well-scoped engineering work, humans don't need to steer every PR either, provided the harness encodes taste. See dual-agent-pr-loop for how they got there without the reviewer bullying the author into non-convergence.
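The non-convergence risk in the dual-agent loop can be sketched as a bounded review cycle. This is a minimal illustration, not Symphony's actual implementation: the round cap and the approval predicate (`None` means approve) are assumptions introduced here.

```python
from typing import Callable, Optional, Tuple

def dual_agent_pr_loop(
    author: Callable[[str], str],              # feedback -> revised patch
    reviewer: Callable[[str], Optional[str]],  # patch -> feedback, or None to approve
    initial_patch: str,
    max_rounds: int = 5,
) -> Tuple[str, bool]:
    """Run author/reviewer rounds until approval or the round cap.

    The cap is the anti-bullying guard: without it, an over-strict reviewer
    can keep the author iterating forever instead of converging.
    """
    patch = initial_patch
    for _ in range(max_rounds):
        feedback = reviewer(patch)
        if feedback is None:
            return patch, True   # approved: merge with no human in the loop
        patch = author(feedback)
    return patch, False          # did not converge: escalate instead of merging
```

The interesting design choice is that a failed loop escalates rather than merges, which is what lets "0% human-reviewed" stay safe for well-scoped work.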

Ghost libraries as a distribution model. symphony ships as a spec + reference Elixir implementation. Consumers point a coding agent at the spec and re-derive the library locally — ghost-library. Elixir wasn't a taste choice; Codex picked it because GenServers match Symphony's per-task-daemon orchestration. A direct consequence of code-is-free at the distribution layer.
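The consumption side of a ghost library can be sketched end to end. `agent_generate` below is a stand-in for whatever coding agent the consumer runs against the spec; the function name and the `{path: contents}` return shape are assumptions for illustration, not a real Symphony or Codex API.

```python
from pathlib import Path
from typing import Callable, Dict, List

def rederive_ghost_library(
    spec_path: Path,
    out_dir: Path,
    agent_generate: Callable[[str], Dict[str, str]],
) -> List[Path]:
    """Point a coding agent at the spec and materialize the library locally.

    agent_generate maps the spec text to {relative_path: file_contents};
    it stands in for an actual coding-agent invocation.
    """
    spec = spec_path.read_text()
    files = agent_generate(spec)
    written = []
    for rel, contents in files.items():
        dest = out_dir / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(contents)
        written.append(dest)
    return written
```

The point of the sketch: the spec is the artifact that ships; the code on disk is a local, disposable derivation, which is what "code-is-free at the distribution layer" buys.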

Six reflection layers + a zero-layer. Symphony operationalizes harness engineering into policy / configuration / coordination / execution / integration / observability — plus a meta layer where the agent modifies its own workflow MD. This is the concrete shape of the "agent that improves its own harness" idea that connects skill-distillation, emergent-cursor-rules, and learning-agent-loop.
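The layered decomposition can be written down as a minimal sketch. The six layer names come from the interview; the `Harness` container and the `amend_workflow` zero-layer hook are hypothetical scaffolding added here to make the shape concrete.

```python
from dataclasses import dataclass, field
from pathlib import Path
from typing import List

# Six reflection layers named in the interview, innermost to outermost.
LAYERS = [
    "policy",
    "configuration",
    "coordination",
    "execution",
    "integration",
    "observability",
]

@dataclass
class Harness:
    """Hypothetical container: six fixed layers plus a mutable zero-layer."""
    workflow_md: Path  # the workflow file the agent is allowed to rewrite
    layers: List[str] = field(default_factory=lambda: list(LAYERS))

    def amend_workflow(self, lesson: str) -> None:
        """Zero-layer move: the agent appends a lesson to its own workflow MD,
        so the next run starts from an improved harness."""
        with self.workflow_md.open("a") as f:
            f.write(f"\n- {lesson}")
```

The zero-layer is the only mutable part: the agent never edits the six layers directly, only the workflow document that configures them.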

Agent-legible software. Files ≤350 lines, bespoke lint rules encoding codebase invariants, observability designed for agents not humans — see agent-legible-software. "Don't accept slop" is the velocity/guardrails tradeoff; short-term pain, durable floor.
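The file-length invariant is the easiest of these to make concrete as a bespoke lint rule. A minimal sketch, assuming a 350-line ceiling per the talk; the function names and the glob default are illustrative, not Symphony's actual tooling.

```python
from pathlib import Path
from typing import List

MAX_LINES = 350  # agent-legible ceiling mentioned in the interview

def check_file_length(path: Path, max_lines: int = MAX_LINES) -> List[str]:
    """Return lint violations: files longer than max_lines are flagged."""
    n = len(path.read_text().splitlines())
    if n > max_lines:
        return [f"{path}: {n} lines exceeds the {max_lines}-line limit"]
    return []

def lint_tree(root: Path, glob: str = "*.py") -> List[str]:
    """Walk a source tree and collect every file-length violation."""
    violations: List[str] = []
    for f in sorted(root.rglob(glob)):
        violations.extend(check_file_length(f))
    return violations
```

Rules like this are cheap to write and cheap for an agent to satisfy, which is the "short-term pain, durable floor" trade: the lint run is the floor that keeps slop out of the merge path.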

Bret Taylor's dependencies claim. "Software dependencies are going away — they can just be vendored." Ryan's response: 100%, but you still pay Datadog and Temporal. Dependency rot is going away; dependency services stay. Adjacent to install-base-moat (the services with user data or network effects remain defensible).

What models still can't do. Zero-to-one product work (where the shape of the problem isn't known) and the gnarliest refactors where interface contours haven't stabilized. Pairs with ai-generated-code-is-untrusted and the verifiability-frontier.

Enterprise deployment + OpenAI Frontier. openai-frontier targets Snowflake/Stripe/Citadel-class customers where governance is table-stakes — "deploying agents safely at scale with good governance." Bellevue is the hiring push; OpenAI expanding beyond SF.

New pages from this ingest

Pages updated