Interesting direction — especially the event-driven autonomy part.
One thing I’ve noticed while working on founder tooling is that the biggest challenge isn’t building agents anymore — it’s deciding which workflows are actually worth automating before people invest time connecting tools and infrastructure.
Curious how you’re seeing users define successful agent tasks — are they mostly repetitive operational workflows, or more decision-based use cases?
Also wondering how you handle failure states when an agent runs long-term without supervision.
On what's worth automating: it splits roughly into two camps.
The most common are repetitive operational things — monitoring
markets, responding to messages, deploying code, updating
spreadsheets. But the more interesting use cases are
decision-based: the trading agent deciding when to open/close
positions, or a support agent deciding whether to escalate.
The Event Hub is what makes the decision-based ones viable.
Agents subscribe to real-time events and react based on
triggers — you can use structured filters or even natural
language conditions ("fire when the user seems frustrated").
So the agent isn't just on a cron loop, it's genuinely
reacting to context.
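To make the subscription model concrete — none of these names come from the product, so `EventHub`, `Subscription`, and the filter/condition fields are purely illustrative — a minimal sketch of an event subscription combining a structured filter with a natural-language condition might look like:

```python
# Hypothetical sketch: EventHub, Subscription, and all field names are
# assumptions, not the product's actual API.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Subscription:
    topic: str
    structured_filter: Callable[[dict], bool]  # fast, deterministic check
    nl_condition: Optional[str] = None         # e.g. "user seems frustrated"

@dataclass
class EventHub:
    subs: list = field(default_factory=list)

    def subscribe(self, sub: Subscription) -> None:
        self.subs.append(sub)

    def publish(self, topic: str, event: dict) -> list:
        """Return subscriptions whose structured filters match.
        A real system would additionally ask an LLM to evaluate
        each nl_condition against the event before firing."""
        return [s for s in self.subs
                if s.topic == topic and s.structured_filter(event)]

hub = EventHub()
hub.subscribe(Subscription(
    topic="support.message",
    structured_filter=lambda e: e.get("sentiment", 0) < -0.5,
    nl_condition="fire when the user seems frustrated",
))
matches = hub.publish("support.message", {"sentiment": -0.8})
```

The point of the split is that the cheap structured filter gates which events even reach the (expensive) natural-language check.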
On failure states: agents have built-in timeouts on
subscriptions, automatic retries with exponential backoff,
and silence detection (they can react to the absence of
events, not just their presence). If something breaks, the
subscription expires and the agent can re-evaluate. Long-
running agents also persist their state across restarts so
they pick up where they left off.
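The retry and silence-detection mechanics described above can be sketched in a few lines — function and class names here are my own illustration, not the product's implementation:

```python
# Hypothetical sketch: retry_with_backoff and SilenceDetector are
# illustrative names, not the product's API.
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.01):
    """Retry fn on exception, sleeping 1x, 2x, 4x... base_delay between tries."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the subscription expire
            time.sleep(base_delay * (2 ** attempt))

class SilenceDetector:
    """React to the *absence* of events: if nothing has arrived within
    `timeout` seconds, the check trips and the agent can re-evaluate."""
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_event = time.monotonic()

    def record_event(self) -> None:
        self.last_event = time.monotonic()

    def is_silent(self) -> bool:
        return time.monotonic() - self.last_event > self.timeout
```

State persistence would sit underneath both: checkpoint `last_event` and attempt counts so a restarted agent resumes the same timers.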
There's also a workflow builder where you connect multiple
agents together in non-linear graphs — agents run async
and pass results between each other. So you can have one
agent monitoring, another analyzing, another executing —
all coordinating without a linear chain.
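As a rough sketch of that monitor/analyze/execute pattern — queues and function names are my own stand-ins, assuming agents pass results asynchronously rather than in a fixed pipeline:

```python
# Hypothetical sketch: a non-linear graph of async agents passing
# results over queues; names are illustrative, not the product's API.
import asyncio

async def monitor(out: asyncio.Queue) -> None:
    for price in (100, 105, 99):   # stand-in for a live market feed
        await out.put(price)
    await out.put(None)            # sentinel: feed finished

async def analyze(inp: asyncio.Queue, out: asyncio.Queue) -> None:
    while (price := await inp.get()) is not None:
        await out.put("buy" if price < 100 else "hold")
    await out.put(None)

async def execute(inp: asyncio.Queue, actions: list) -> None:
    while (signal := await inp.get()) is not None:
        actions.append(signal)

async def run_graph() -> list:
    q1, q2, actions = asyncio.Queue(), asyncio.Queue(), []
    # all three agents run concurrently, not as a linear chain
    await asyncio.gather(monitor(q1), analyze(q1, q2), execute(q2, actions))
    return actions

actions = asyncio.run(run_graph())
```

Because each stage only depends on its input queue, you can fan out (two analyzers on one monitor) or fan in without restructuring the whole graph.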
That makes sense — the shift from task automation to decision automation feels like the real inflection point.
The silence detection aspect is especially interesting. Reacting to the absence of signals is something most workflow tools still struggle with, and it’s usually where long-running systems fail in practice.
Curious whether users tend to start with predefined agent patterns, or if they’re designing workflows from scratch once they understand the event model? I imagine abstraction becomes important pretty quickly as graphs grow.
Both, actually. Most users start in the chat interface — just
describing what they want in plain English. The agent figures
out which tools to use and how to react. No graph, no config.
Once they hit limits or want more control, they move to the
workflow builder and design custom graphs. That's where you
get non-linear agent connections — multiple agents running
async, passing results to each other. One monitors, one
analyzes, one executes.
Abstraction is definitely the challenge as graphs grow. Right
now we handle it by letting each node in the graph be a full
autonomous agent with its own tools and context. So you're
composing agents, not steps. Keeps individual nodes simple
even when the overall workflow is complex.
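To illustrate "composing agents, not steps" — the class and field names below are assumptions, a minimal sketch of a node that carries its own tools and context rather than being a bare pipeline step:

```python
# Hypothetical sketch: AgentNode and its fields are illustrative,
# not the product's actual node model.
from dataclasses import dataclass, field

@dataclass
class AgentNode:
    name: str
    tools: list           # each node owns its own tool set
    context: dict = field(default_factory=dict)

    def run(self, inputs: dict) -> dict:
        # a real agent would reason over tools + context here;
        # this stub just records what it received
        return {"agent": self.name, "saw": sorted(inputs)}

monitor_node = AgentNode("monitor", tools=["price_feed"])
analyst_node = AgentNode("analyst", tools=["indicators", "llm"])
result = analyst_node.run(monitor_node.run({}))
```

The design consequence is that the graph only wires outputs to inputs; everything a node needs to act stays encapsulated inside it.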