
Contextual Prompt Engineering

Florian Juengermann's chosen alternative to the Anthropic/Shopify-style Skills pattern. At listen-labs, when the long tail of unusual use cases started overwhelming the main agent, the team didn't introduce a skill registry; instead, they leaned harder on dynamically assembling context per call from organization-level instructions, prior project memory, and the edit history of the current document.

Position in the 2026 debate

Two approaches to the same underlying problem, namely that the frontier model is smart enough for the common case but brittle in the rare cases:

| Pattern | Mechanism | Example |
| --- | --- | --- |
| Skills | Retrieve and execute a pre-written procedure for the rare case | Anthropic's Skills, soul-md |
| Contextual prompt engineering | Inject the right context into the prompt so the model improvises correctly | Listen's research agent |
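The control-flow difference between the two rows can be sketched as follows. This is purely illustrative: the function names, the skill registry contents, and the prompt layout are my assumptions, not Listen's or Anthropic's actual code.

```python
# Hypothetical sketch of the two dispatch patterns (all names invented).

SKILL_REGISTRY = {
    # A pre-written procedure for a known-rare case.
    "export_spss": (
        "1. Load the .sav file with variable labels.\n"
        "2. Map labels to column headers.\n"
        "3. Export as CSV."
    ),
}

def handle_with_skill(task: str) -> str:
    """Skills pattern: retrieve a pre-written procedure and hand it to the model."""
    procedure = SKILL_REGISTRY.get(task)
    if procedure is None:
        raise KeyError(f"no skill registered for {task!r}")
    return f"Follow this procedure exactly:\n{procedure}"

def handle_with_context(task: str, context_chunks: list[str]) -> str:
    """Contextual prompt engineering: inject context and let the model improvise."""
    context = "\n\n".join(context_chunks)
    return f"{context}\n\nTask: {task}"
```

The skills path fails loudly when no procedure exists; the contextual path always produces a prompt and bets on the model to fill the gap.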

Florian's hedge (source):

"We have been relying more on contextual prompt engineering… the model's getting smarter and the prompts are getting longer and that seems to kind of hold the balance. But if you want to go into the more detailed rare instances where we know this is how it's supposed to work but the agent maybe doesn't want to — we don't want the agent to reinvent it."

So he doesn't reject skills; he uses them selectively for known-rare behaviors, while betting that contextual assembly covers the long tail. Note: this is single-source framing; the broader skills-vs-context-engineering debate is still open.

What "context" means at Listen

  • Organization-level written instructions ("always format percentages this way")
  • Prior project reports (treated as implicit knowledge — but Florian admits "what is common knowledge, what is not" is far from solved)
  • Edit history of the composer document so the agent doesn't undo recent human changes
  • Extracted columns on the virtual table — computed context, not static
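A per-call assembly step over those four sources might look like the sketch below. It is a minimal, hypothetical illustration: the field names and section ordering are assumptions, and the unsolved "what is common knowledge, what is not" question is reduced here to a naive recency cutoff on prior reports.

```python
from dataclasses import dataclass

@dataclass
class ContextSources:
    org_instructions: list[str]        # organization-level written rules
    prior_reports: list[str]           # prior project reports, treated as implicit knowledge
    edit_history: list[str]            # recent human edits to the composer document
    extracted_columns: dict[str, str]  # computed columns on the virtual table

def assemble_context(src: ContextSources, max_reports: int = 3) -> str:
    """Build the per-call prompt context from the four sources above."""
    parts: list[str] = []
    if src.org_instructions:
        rules = "\n".join(f"- {r}" for r in src.org_instructions)
        parts.append(f"Organization instructions:\n{rules}")
    # Naive stand-in for the "what is common knowledge" problem:
    # just include the most recent reports.
    if src.prior_reports:
        parts.append("Relevant prior reports:\n" + "\n".join(src.prior_reports[-max_reports:]))
    if src.edit_history:
        parts.append("Recent human edits (do not undo these):\n" + "\n".join(src.edit_history))
    if src.extracted_columns:
        cols = "\n".join(f"{k}: {v}" for k, v in src.extracted_columns.items())
        parts.append(f"Computed table columns:\n{cols}")
    return "\n\n".join(parts)
```

The edit-history section is the piece that directly targets the failure mode above: by stating recent human changes in the prompt, the agent is less likely to silently revert them.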