
Liquid AI

Company productizing non-transformer neural architectures; described by mikhail-parakhin as inspired by the C. elegans worm's neural circuitry and functionally "a state-space model squared." Key property: compute scales sub-quadratically with context length, making long-context, low-latency inference tractable where transformers strain.
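Why sub-quadratic: a state-space model carries a fixed-size hidden state through the sequence, so each token costs the same regardless of how long the context is. A toy NumPy sketch of that recurrence (illustrative only; Liquid's actual architecture is not public, and all names here are made up):

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Linear-time state-space recurrence:
        h_t = A h_{t-1} + B x_t,   y_t = C h_t
    One fixed-cost update per token (O(n) total), vs. O(n^2)
    pairwise attention in a transformer."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:              # sequential scan over tokens
        h = A @ h + B * x     # state update: constant work per step
        ys.append(C @ h)      # readout from the fixed-size state
    return np.array(ys)

rng = np.random.default_rng(0)
d = 4
A = 0.9 * np.eye(d)           # toy stable dynamics
B = rng.normal(size=d)
C = rng.normal(size=d)
ys = ssm_scan(A, B, C, rng.normal(size=1000))
print(ys.shape)               # state stays d-dimensional at any context length
```

The point of the sketch: memory and per-token compute depend only on `d`, not on the number of tokens already seen.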

Shopify usage

shopify is a significant Liquid customer and uses its models in production for:

  • Search: 800 → 4,200 QPS at the same quality on the same hardware (distilled from a larger model via an auto-research loop; see auto-research-loop)
  • Gisting of Liquid themes (Shopify's templating language, which confusingly shares the name)
  • Catalog formulation for ucp-catalog
  • sidekick-pulse merchant notifications (needs long context + high throughput at small-model cost)

Typical deployment: 7-8B-parameter Liquid model, distilled from the largest available frontier model for a narrow task — see distill-to-small-task-model. In those narrow scenarios Parakhin claims it beats same-size Qwen / Kimi variants.
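The distill-to-small-task-model pattern above amounts to training the small model to match a frontier teacher's output distribution on the narrow task. A minimal sketch of the standard temperature-softened distillation loss (Hinton-style KL); Liquid's actual objective and auto-research loop are not public, so this is illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, T=2.0):
    """Mean KL(teacher || student) on softened distributions,
    scaled by T^2 as in standard distillation recipes."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(kl.mean() * T * T)

# Hypothetical logits: a frontier teacher and a small student on one example.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.8, 0.4, -0.9]])
loss = distill_loss(teacher, student)
print(loss)   # small but nonzero: student is close to the teacher
```

Gradient descent on this loss (plus usually a hard-label term) pulls the 7-8B student toward the teacher's behavior on the narrow distribution, which is where the same-size-model comparisons against Qwen / Kimi would be made.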

Open question

Parakhin is non-committal on whether the architecture scales into a frontier model. The company's current bet is narrow-specialist distillation, not generality.

See also