# Live Report Numbers
UX/infra problem listen-labs solved: interview studies aren't static — responses keep streaming in for days. A report generated on day 2 saying "42% of respondents mentioned price" goes stale on day 3. Listen keeps the numbers live.
## Mechanism
When a new interview arrives:

1. The system identifies which columns of the virtual table already exist for this study.
2. It runs the same classification tools over the new row only; the rest of the corpus is not re-processed.
3. Aggregations (counts, percentages) recompute automatically from the updated column.
4. The rendered report, whether HTML, slide deck, or chat answer, reads the latest aggregate values, so the numbers update in place.
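A minimal sketch of this loop, assuming a per-column `classify_cell` call and a `VirtualTable` that recomputes aggregates on read; the names are mine, not Listen's API:

```python
# Minimal sketch of the incremental path: classify only the new row,
# recompute aggregates on read. VirtualTable and classify_cell are
# hypothetical names, not Listen's actual API.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class VirtualTable:
    """One row per interview, one column per classification question."""
    columns: list[str]
    rows: list[dict] = field(default_factory=list)

    def add_interview(self, transcript: str, classify_cell) -> None:
        # Steps 1-2: score the new row against the study's existing columns only.
        row = {col: classify_cell(col, transcript) for col in self.columns}
        self.rows.append(row)

    def aggregate(self, column: str) -> dict[str, float]:
        # Step 3: percentages recompute from the full column on every read,
        # so anything rendered from them (step 4) is automatically current.
        counts = Counter(r[column] for r in self.rows)
        return {label: n / len(self.rows) for label, n in counts.items()}

def classify_cell(column: str, transcript: str) -> str:
    # Stand-in for the small-model classification call.
    return "mentions_price" if "price" in transcript.lower() else "no_mention"

table = VirtualTable(columns=["price_sensitivity"])
table.add_interview("The price felt too high for what we got.", classify_cell)
table.add_interview("Setup was easy and support was great.", classify_cell)
print(table.aggregate("price_sensitivity"))  # live numbers, recomputed on read
```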
## Why it's hard
- Consistency of classification: the small model used on day 2 has to score day-3 rows the same way. Model drift, prompt drift, and temperature wobble all corrode this (see the pinned-config sketch after this list). Connects to model-drift.
- Report format lock-in: PowerPoint decks aren't naturally live; Listen's powerpoint-subagent writes code that re-renders the deck from current data (see the deck-rebuild sketch after this list), which is why the Claude Code SDK rewrite mattered.
- Authored prose next to live numbers: whether the agent or a human wrote "a strong majority prefer option A", if the percentage drops below 50% next week the prose is now wrong. Open problem.
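For the consistency point, a sketch of one mitigation: pin every input that can drift and store a fingerprint of the pin alongside each cell, so day-3 cells are auditable against day-2 cells. The model name, prompt version, and `call_model` hook are assumptions, not Listen's stack.

```python
# Sketch: keep later rows scored the same way as earlier ones by pinning
# the classifier configuration and recording its fingerprint per cell.
import hashlib
import json

CLASSIFIER_CONFIG = {
    "model": "small-classifier-2024-06",   # pinned snapshot, never "latest"
    "temperature": 0.0,                    # remove sampling wobble
    "prompt_version": "price_mention_v3",  # bump explicitly, never edit in place
}

def config_fingerprint(config: dict) -> str:
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()[:12]

def classify(transcript: str, call_model) -> dict:
    label = call_model(transcript, **CLASSIFIER_CONFIG)
    # Storing the fingerprint with the label makes drift auditable: if the
    # config ever changes, old and new cells are distinguishable and the
    # affected columns can be re-run instead of silently mixed.
    return {"label": label, "classifier": config_fingerprint(CLASSIFIER_CONFIG)}
```

For the deck problem, one way to read "writes code that re-renders from current data": treat the .pptx as a build artifact, regenerated from whatever the aggregates say right now rather than patched in place. python-pptx is real; `get_aggregate` and the study id are hypothetical stand-ins.

```python
# Sketch: the deck is never edited in place; it is rebuilt from the live
# column aggregates on every run.
from pptx import Presentation
from pptx.util import Inches

def get_aggregate(study_id: str, column: str) -> dict[str, float]:
    # Stand-in for reading the live column aggregates (see the sketch above).
    return {"mentions_price": 0.42, "no_mention": 0.58}

def render_deck(study_id: str, path: str) -> None:
    prs = Presentation()
    slide = prs.slides.add_slide(prs.slide_layouts[5])  # title-only layout
    slide.shapes.title.text = "What drives churn?"
    agg = get_aggregate(study_id, "price_sensitivity")
    box = slide.shapes.add_textbox(Inches(1), Inches(2), Inches(8), Inches(1.5))
    box.text_frame.text = f"{agg['mentions_price']:.0%} of respondents mentioned price"
    prs.save(path)

render_deck("study-123", "live_report.pptx")  # rerun after each new interview
```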
## Related
- map-reduce-classification — the incremental compute primitive
- trigger-long-running-agent — decides when to re-run vs reuse
- eval-lifecycle-pre-to-production — eval systems face the same staleness problem
My inference: This is a specific case of a broader pattern — agent outputs as computed views over live data, not frozen artifacts. Florian didn't name it; I'm labeling the pattern.