Physical AI

The thesis that the next era of AI moves from digital-only language and reasoning systems to embodied systems that perceive, plan, and act in the physical world: humanoids, factory robots, autonomous vehicles, and "anything that moves."

Most prominently championed by jensen-huang in the 2026 cleo-abram interview, where he framed it as a straight-line extrapolation from today's foundation models: if they can already do this, how far can the curve go?

Core claims

  • Language models → agents → embodied agents is a continuous capability curve, not a discontinuous jump
  • The unlock is the full stack, not any single model
  • Manufacturing, healthcare, and software will be the first sectors reshaped by continuously reasoning physical systems

Open questions

  • Timelines: Jensen says "soon" but doesn't commit to a date; skeptics point to the data gap between abundant web text and scarce real-world physical-interaction data
  • Safety and failure modes of continuously learning embodied agents
  • How much of the thesis depends on nvidia-specific hardware versus being stack-agnostic

Reinforcement from the Lex Fridman interview

Jensen's Lex Fridman 2026 answer sharpens the embodiment argument with a concrete intuition: the first great humanoid will use the tools already in our environment — open a microwave, read its manual, call a plumber — rather than have a morphing hand that turns into a hammer or beams microwaves. Civilization's infrastructure is the pre-built prior; any robot that can use it inherits trillions of dollars of "training data" for free. See jensen-huang-lex-fridman-2026.