
---
title: "Riley & Galan — Identity for AI Agents (AI Engineer 2026)"
type: query
source_url: https://youtu.be/VSdV-AdSlis
raw: riley-galan-identity-for-ai-agents-auth0-2026
speakers: patrick-riley, carlos-galan
venue: AI Engineer conference
date_published: 2026-01-14
ingested: 2026-05-01
tags: [domain/ai, identity, auth, agents, mcp]
---


Patrick Riley & Carlos Galan — Identity for AI Agents

Source index. ~82-min hands-on workshop. Raw at riley-galan-identity-for-ai-agents-auth0-2026.

Thesis

Identity & authorization for AI agents slots cleanly into existing OAuth/OIDC primitives if you model agent = client, MCP server = client, upstream API = resource server. Auth0's Auth-for-AI release operationalizes this via token-vault (delegated upstream access), async-auth-ciba (risky-op approval), and dynamic-client-registration (MCP-server-as-client).
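Token Vault's delegated upstream access has the shape of OAuth 2.0 Token Exchange (RFC 8693): the agent's backend trades the token it already holds for a narrowly scoped upstream token. A minimal sketch of building that request — the `connection` parameter and the example values are illustrative assumptions, not Auth0's exact API:

```typescript
// Sketch: an RFC 8693 token-exchange request body, so an agent backend
// can trade the user's token for a scoped upstream (e.g. Google Calendar)
// access token. The "connection" parameter and values are illustrative.
function buildTokenExchangeBody(opts: {
  subjectToken: string; // the user's token the agent already holds
  connection: string;   // upstream provider linked in the vault
  scopes: string[];     // must be a subset of the connection's scopes
}): URLSearchParams {
  return new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token: opts.subjectToken,
    subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
    requested_token_type: "urn:ietf:params:oauth:token-type:access_token",
    connection: opts.connection,
    scope: opts.scopes.join(" "),
  });
}

const body = buildTokenExchangeBody({
  subjectToken: "eyJ...",
  connection: "google-oauth2",
  scopes: ["calendar.readonly"],
});
```

The key property is that the exchange happens server-side, outside the LLM: the agent never sees the upstream refresh token, only the short-lived, scope-capped access token that comes back.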

Structure

  1. OWASP LLM Top 10 — why agents bring new threats
  2. Agent modalities — interactive → task runners → autonomous → agent-to-agent
  3. auth-for-ai-four-pillars framework
  4. Auth0 + Okta positioning (user-side vs employee-side)
  5. Token Vault + CIBA + MCP DCR deep-dive with live Next.js demo
  6. Building: chatbot → + identity → + upstream API access → + MCP server → + async approval → + deployed MCP + Claude Code client
  7. Rich Authorization Request (RAR) + Guardian MFA push
  8. Q&A on agent identity, scope escalation, enterprise vs consumer

Concepts introduced

Entities

Memorable moves

  • "We don't want a hallucinating agent buying a stock in the middle of the night without my permission."
  • "Please don't do that" — on just passing the user's access token to the agent.
  • "The employee is not only acting on their own behalf, they're representing the company" — the enterprise angle that bifurcates Auth0 vs Okta.
  • "Scopes not part of the connection can never end up in the access token" — prompt injection can't escalate via token exchange because authorization is in code, not in the LLM.

Cross-ingest synthesis

These four AI-Engineer-2026 ingests now form a trust-boundary quadrilateral for agentic systems:

Underlying unity: the LLM sits inside a bounded trust envelope. Everything outside — judging, sandboxing, authorizing — must be done in non-LLM code. The LLM can't prompt-inject its way past code that runs before/around it.
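The "bounded trust envelope" idea reduces to this: the tool dispatcher, not the model, decides whether a call runs. A minimal sketch, with a hypothetical risky-tool list and tool names (all illustrative):

```typescript
// The authorization check runs in deterministic code BEFORE any tool
// executes. Even if a prompt injection convinces the model to emit a
// risky tool call, the dispatcher routes it to human approval.
type ToolCall = { name: string; args: Record<string, unknown> };

const RISKY_TOOLS = new Set(["buy_stock", "transfer_funds"]);

function authorize(call: ToolCall, userApproved: boolean): "run" | "needs_approval" {
  if (!RISKY_TOOLS.has(call.name)) return "run";
  return userApproved ? "run" : "needs_approval";
}

const weather = authorize({ name: "get_weather", args: {} }, false); // "run"
const trade = authorize({ name: "buy_stock", args: {} }, false);     // "needs_approval"
```

In the workshop's framing, `needs_approval` is where the CIBA push fires; the LLM never sees, and cannot rewrite, this branch.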

Open questions

  • How does CIBA latency play out in production? Push → approve → exchange → execute could be 10s+; does that match user patience for "I asked the agent to do X and forgot about it"?
  • What's the blast-radius math when Token Vault itself is compromised? It's a honeypot of refresh tokens for every user × every connected provider.
  • Client ID Metadata (the next OAuth spec) — will it solve the open-DCR trust problem, or create a new centralization point?
  • Agent-to-agent authorization chains: actor claims are one deep; what happens at depth 5 (agent → agent → agent → agent → API)? When does the delegation chain break down or become unauditable?
  • Where do you put the line for "risky operation"? Developer-defined per-tool today; but a system-level policy (FGA? Okta?) is probably where it needs to live at scale.
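The depth-5 delegation question can be made concrete with RFC 8693's nested `act` (actor) claim: each hop wraps the previous actor, so a chain of agents yields a recursively nested object. A sketch of walking that chain for auditing — claim shapes beyond `act.sub` are assumptions:

```typescript
// RFC 8693 nests the actor claim: act.act.act... Each level is one
// delegation hop. Walking it gives the chain depth an auditor (or a
// policy that caps delegation depth) would need.
type ActorChain = { sub: string; act?: ActorChain };

function chainDepth(claim: ActorChain): number {
  let depth = 0;
  let cur: ActorChain | undefined = claim.act;
  while (cur) {
    depth++;
    cur = cur.act;
  }
  return depth;
}

const token: ActorChain = {
  sub: "user@example.com",
  act: { sub: "agent-a", act: { sub: "agent-b", act: { sub: "agent-c" } } },
};

const depth = chainDepth(token); // 3 actors deep on the user's behalf
```

Nothing in the spec stops depth 5; the open question is whether any policy engine today can reason about — or even render — a chain that long.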

Synthesis with prior ingests

  • Steinberger's soul-md + Auth0's actor claims → identity for agents is multi-layer: persona (SOUL.md) + cryptographic (azp claim).
  • Zakariasson's isolated-agent-vms + Agrawal's isolates-vs-containers + Auth0's scope-bounded tokens → three defense-in-depth layers: VM isolation, runtime sandbox, narrow-scoped credentials. Belt, suspenders, and zip-tie.