Listen

AI-native qualitative research platform. Turns hundreds or thousands of user/customer interviews, surveys, and focus groups into structured, traceable insights. Co-founded by florian-juengermann (CTO).

Three-ish agents

Listen's platform is not a single monolithic agent but three distinct agents (four, if onboarding and the composer are counted separately), each with its own UX affordance:

  1. Onboarding / Composer agent — co-edits a discussion-guide document with the user. Not a pure chat: the document is the shared artifact, and every change is logged as an edit operation so the LLM knows what the human just did (a sketch of such a log follows this list). See composer-agent-ux.
  2. Interviewer agent — multimodal conversational agent that runs the actual interview with end-users. Voice + video + screen-share. Florian concedes the voice-interface UX is "still not quite solved" and cites OpenAI's own back-and-forth as evidence.
  3. Research agent — the centerpiece. Given 500+ completed interviews, it answers open questions, cuts video clips, and generates PowerPoint decks in the customer's template. Built around virtual-table-architecture plus a feedback-subagent.
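The edit-operation log is the key mechanic of the composer agent: human and LLM edits flow through the same operation type, so the model can be shown exactly what the human just changed instead of re-reading the whole guide. A minimal sketch, assuming a simple section-keyed document; every class, field, and section name here is hypothetical, not Listen's schema:

```python
# Hypothetical sketch of a shared-document edit log for a composer-style agent.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class EditOperation:
    author: str     # "human" or "agent"
    section: str    # e.g. "screening-questions"
    before: str     # text being replaced (empty for a pure insertion)
    after: str      # text inserted
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class GuideDocument:
    sections: dict[str, str]
    log: list[EditOperation] = field(default_factory=list)

    def apply(self, op: EditOperation) -> None:
        """Apply an edit and record it, regardless of who authored it."""
        current = self.sections.get(op.section, "")
        if op.before:
            self.sections[op.section] = current.replace(op.before, op.after, 1)
        else:
            self.sections[op.section] = current + op.after
        self.log.append(op)

    def recent_edits_for_prompt(self, n: int = 5) -> str:
        """Serialize the last n operations so the LLM sees what just changed."""
        return "\n".join(
            f"[{op.author}] edited '{op.section}': {op.before!r} -> {op.after!r}"
            for op in self.log[-n:]
        )


# Usage: a human tweaks a question, and the agent sees exactly that diff on
# its next turn rather than re-diffing the whole document.
guide = GuideDocument(sections={"screening-questions": "How often do you cook?"})
guide.apply(EditOperation(author="human", section="screening-questions",
                          before="cook", after="cook at home"))
print(guide.recent_edits_for_prompt())
```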

Tech stack (2026)

  • Transcripts/responses are stored in Postgres and exposed to the agent as a virtual table (pandas-frame shape), not files or CSV (shape sketched below).
  • Sandboxed Python execution via E2B, with sandboxes pre-warmed to meet live-chat latency. Only used ~20% of the time; most operations are hardcoded structured tools (routing sketched below).
  • Small classifier models (GPT-mini, Haiku) run map-reduce-classification across thousands of open-ended responses (see the classification sketch below).
  • Claude Code SDK runs inside E2B sandboxes for the rebuilt powerpoint-subagent.
  • Agent trust: every engineer owns their agent end-to-end, including reading traces themselves.
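A minimal sketch of what the "pandas-frame shape" could mean in practice, assuming a SQLAlchemy connection and a hypothetical responses table; the schema and column names are illustrative, not Listen's:

```python
# Sketch: expose interview responses to the agent as a pandas "virtual table"
# rather than files or CSV. Table name, columns, and connection string are
# assumptions for illustration.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost:5432/research")


def load_virtual_table(study_id: str) -> pd.DataFrame:
    """One row per (interview, question) pair; agent tools (or sandboxed
    Python) query this frame instead of touching raw transcripts."""
    query = text("""
        SELECT interview_id, respondent_segment, question_id,
               question_text, response_text, asked_at
        FROM responses
        WHERE study_id = :study_id
    """)
    return pd.read_sql(query, engine, params={"study_id": study_id})


# The research agent can then answer "how many people mentioned pricing?"
# with an ordinary pandas expression, e.g.:
#   df = load_virtual_table("study-123")
#   df[df.response_text.str.contains("pricing", case=False)].interview_id.nunique()
```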
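The ~80/20 split between hardcoded tools and the sandbox can be pictured as a simple dispatch: prefer a structured tool, escalate to sandboxed Python only when the agent needs to write its own analysis code. Everything named below is a hypothetical stand-in (in Listen's stack the escape hatch would be the pre-warmed E2B sandbox):

```python
# Sketch: route most operations through hardcoded structured tools; only
# open-ended requests fall through to the sandbox (roughly the 20% case).
# Tool names and signatures are hypothetical.
from typing import Any, Callable

import pandas as pd

STRUCTURED_TOOLS: dict[str, Callable[..., Any]] = {
    "count_mentions": lambda df, term: int(
        df.response_text.str.contains(term, case=False).sum()
    ),
    "responses_by_segment": lambda df, segment: df[df.respondent_segment == segment],
}


def run_operation(name: str, df: pd.DataFrame, **kwargs: Any) -> Any:
    """Dispatch to a structured tool when one exists; otherwise signal that
    the request needs agent-written code in the sandbox."""
    tool = STRUCTURED_TOOLS.get(name)
    if tool is not None:
        return tool(df, **kwargs)
    raise LookupError(f"{name!r} is not a structured tool; escalate to the sandbox")
```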
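The map-reduce classification over open-ended responses might look like the following; the model name, label set, and prompt are illustrative assumptions (the source only names "GPT-mini" and Haiku as the small-model tier):

```python
# Sketch: map step classifies each open-ended response with a small model,
# reduce step aggregates label counts. Model, labels, and prompt are assumed.
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()
LABELS = ["pricing", "onboarding", "performance", "other"]  # assumed label set


async def classify_one(response_text: str) -> str:
    """Map step: put a single response into exactly one label."""
    completion = await client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in for the "small classifier model"
        messages=[
            {
                "role": "system",
                "content": (
                    f"Classify the user response into one of: {', '.join(LABELS)}. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": response_text},
        ],
    )
    label = completion.choices[0].message.content.strip().lower()
    return label if label in LABELS else "other"


async def classify_all(responses: list[str]) -> dict[str, int]:
    """Reduce step: fan out the map calls, then aggregate label counts."""
    labels = await asyncio.gather(*(classify_one(r) for r in responses))
    counts: dict[str, int] = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return counts
```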

See source: florian-juengermann-listen-agents-2026.