Jensen Huang
Co-founder and CEO of nvidia. Central figure in the modern accelerated computing era; drove NVIDIA's long CUDA investment through years when the thesis was non-consensus. Interviewed by cleo-abram on Huge Conversations (2026) in a format that deliberately excluded financials and competitor questions, focusing on vision and personal conviction.
Key positions (from jensen-huang-cleo-abram-2026)
- Physical AI thesis: "Everything that moves will be robotic someday, and it will be soon." Frames robotics as the next straight-line extrapolation from language models.
- Extrapolation heuristic: "If it can do this, how far can it go?" — the question he says NVIDIA asks about each new capability.
- Computing reinvention: "At some point, you have to believe something. We've reinvented computing as we know it."
- Stack integration: The unlock is not any single model but chips + networking + systems + software + data flywheel acting as one computer.
- Conviction under non-consensus: Treats the decade-long, underappreciated investment in CUDA as the model for holding architectural bets before the evidence crystallizes.
Style
First-principles reasoning, tight customer feedback loops, willingness to absorb short-term pain for architectural correctness. Reframes stress as a function of caring deeply rather than of external pressure.
Additional positions (from jensen-huang-lex-fridman-2026)
- Install base > elegance: Architecture is defined by install base, not beauty. x86 "won" despite its inelegance; CUDA's moat is the same shape — the developers and software that compile to it.
- Extreme co-design (rack → pod → building): The scope of optimization keeps expanding. Grace Blackwell was designed to run MoE LLM inference; Vera Rubin adds a new CPU (Vera), storage accelerators, and the Rock rack specifically because agents "bang on tools" and shift the workload shape.
- Scaling laws are plural: Pre-training, post-training, test-time, and agentic scaling all compound. Blockers are physical (power, fab capacity, packaging) more than algorithmic.
- Supply chain is the real bottleneck: ASML (EUV lithography), TSMC (CoWoS advanced packaging), SK Hynix (HBM) — he works continuously upstream and downstream to align capacity with demand growth.
- First-principles over continuous improvement: Engineer at the speed-of-light limit first; only then iterate. Continuous improvement applied to a wrong baseline compounds waste.
- 30M → 1B coders: Natural language as a programming interface turns every carpenter, accountant, and plumber into an "architect-plus-AI." Intelligence commoditizes; judgment, character, and compassion become the scarce inputs.
- On China: ~50% of the world's AI researchers are Chinese; open-source (DeepSeek, MiniMax) is now a serious research vector, and NVIDIA studies those models to understand how hardware must evolve.
- Resilience formula: Curiosity + ability to forget setbacks + belief in one's belief + constant re-evaluation — the four or five traits he credits for surviving NVIDIA's long non-consensus period.
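The "speed-of-light limit" idea in the first-principles bullet above can be made concrete with a roofline-style estimate: derive the theoretical minimum runtime of a kernel from hardware limits, then judge a measured time against that bound instead of against last quarter's version. A minimal sketch, assuming illustrative hardware numbers and a hypothetical kernel (none of this is NVIDIA's actual methodology):

```python
# Roofline-style "speed of light" estimate for a kernel.
# The peak numbers below are illustrative assumptions, not any real GPU.
PEAK_BW_GBS = 3000.0   # assumed peak memory bandwidth, GB/s
PEAK_FLOPS_T = 60.0    # assumed peak FP32 throughput, TFLOP/s

def speed_of_light_seconds(bytes_moved: float, flops: float) -> float:
    """Lower bound on runtime: the kernel can go no faster than
    whichever limit binds, memory bandwidth or compute throughput."""
    t_mem = bytes_moved / (PEAK_BW_GBS * 1e9)
    t_compute = flops / (PEAK_FLOPS_T * 1e12)
    return max(t_mem, t_compute)

# Hypothetical example: elementwise y = a*x + b over 1e9 FP32 elements,
# reading x (4 B/elem) and writing y (4 B/elem), 2 FLOPs per element.
n = 1e9
bound = speed_of_light_seconds(bytes_moved=8 * n, flops=2 * n)
measured = 0.0041  # hypothetical measured runtime, seconds
print(f"bound = {bound * 1e3:.2f} ms, "
      f"efficiency = {bound / measured:.0%} of speed of light")
```

The design point is that "continuous improvement" would compare `measured` against the previous release; the speed-of-light test compares it against `bound`, so a wrong baseline cannot hide.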
Related
- nvidia — the company he runs
- physical-ai — his current thesis
- cleo-abram — interviewer on the 2026 Huge Conversations episode
- jensen-huang-lex-fridman-2026 — 2026 Lex Fridman Podcast source index