Peter Steinberger — State of the Claw
Source index: 17-min keynote + ~25-min AMA with Swyx at AI Engineer 2026. Raw at peter-steinberger-state-of-the-claw-2026.
Structure
- Keynote (17 min): growth metrics, security flood, openclaw-foundation, corporate contributors
- AMA with Swyx: OpenAI relationship, local models, workflow, dark-factory debate, prompt injection, future features, taste/system-design/saying-no
Concepts introduced
- agent-taste — the surviving engineering skill
- agent-security-slop — AI-generated CVE flood and maintainer coping
- soul-md — persona file as first-class artifact alongside AGENTS.md
Entities
Notable quotes
- "Stripper-pole growth." (Project velocity curve)
- "The higher they scream how critical it is, the more likely it's slop." (Security advisories)
- "Taste is the lowest bar — you instantly know if something stinks like AI."
- "The 10x engineer is no longer about words per minute — it's token-to-token usage." (Cross-echo to eric-zakariasson)
- "Madness with a touch of science fiction." (Self-description)
- "What's the worst that can happen? Pics are already online if you use Grindr." (Personal-risk framing)
Notable moments
- GSHJP CVSS-10 advisory — critical score on a permission-model edge case that ~nobody hits in practice; illustrates CVSS gaming.
- Nemo-Claw breakout — Peter's Codex Security agent broke Nvidia's hardened sandbox in 30 minutes using un-nerfed internal models.
- Agents of Chaos paper critique — academic paper ignored documented security recommendations for narrative impact.
- Dreaming feature — memory-consolidation plug-in; a leak confirmed Anthropic is working on the same idea.
- Dark-factory pushback — "The way to the mountain is usually never a straight line"; disagrees with the pure waterfall-style agent-execution model.
- Ubiquitous agents vision — iPads in every room, agent follows you, glasses + earbuds as endpoints, Star-Trek-style "computer".
Cross-ingest links
- Strong rhyme with eric-zakariasson on parallel agents (5–10 sessions) + token-maxing joke.
- Extends mitchell-hashimoto's foundation model (Ghostty → OpenClaw).
- Concretizes andrej-karpathy's llm-knowledge-bases direction via dreaming/memory/wiki plug-ins — and Karpathy is a user.
- install-base-moat analog: OpenClaw's moat is the install base across personal hardware that no hosted SaaS can replicate.
- Adds a consumer/personal tier to the agent-tool map where claude-code, cursor, zed-editor sit on the professional/coding side.
Open questions
- Can the Foundation survive the BDFL effect? Peter's energy is the project's engine.
- Does "taste as moat" hold as models improve? Or does taste also get learned?
- What's the equilibrium between corporate contributors and hobbyist direction when every integration is maintained by one vendor's employee?
- At what point does the home-agent use case (Karpathy, Marin Dre) demand real standards (security, interop) vs staying "maximum fun"?