AI agents can look autonomous, but what keeps them reliable in production is the harness around the model. Using OpenClaw as a concrete case study, this talk explains the systems that keep an agent sane: a control plane that owns sessions and state, the concurrency invariants that prevent overlapping runs from corrupting behavior, and the tool and approval boundaries that determine when chat becomes action. I’ll walk through how events enter the system, how state is rehydrated per session, why single-writer execution and throttling matter, and how auditable approval paths keep powerful tools useful without making them unsafe. The goal is not to teach one framework. It is to give senior engineers a reusable mental model for building agent systems that behave predictably under load. Attendees will leave with a practical blueprint they can apply across platforms: events → session key → single-writer lane → throttle → tools → audit.
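The blueprint above can be sketched in a few dozen lines. This is a minimal illustration, not OpenClaw's actual implementation: the class, event fields, and tool shape are all hypothetical, but the structure mirrors the pipeline the talk describes — derive a session key from the event, take that session's single-writer lane, throttle, check the approval boundary, run the tool, and record an audit entry.

```python
import threading
import time
from collections import defaultdict

class AgentHarness:
    """Hypothetical harness sketch: events -> session key -> single-writer
    lane -> throttle -> tool (with approval gate) -> audit."""

    def __init__(self, rate_limit_per_sec=5):
        self._lanes = defaultdict(threading.Lock)  # one writer lane per session
        self._last_run = {}                        # per-session throttle state
        self._min_interval = 1.0 / rate_limit_per_sec
        self.audit_log = []                        # append-only audit trail

    def session_key(self, event):
        # Route every event from the same channel/thread to one session,
        # so per-session state is rehydrated and mutated by one writer.
        return (event["channel"], event.get("thread", "root"))

    def handle(self, event, tool, approved=False):
        key = self.session_key(event)
        with self._lanes[key]:                     # single-writer: no overlapping runs
            # Crude throttle: enforce a minimum interval between runs per session.
            now = time.monotonic()
            wait = self._min_interval - (now - self._last_run.get(key, 0.0))
            if wait > 0:
                time.sleep(wait)
            self._last_run[key] = time.monotonic()

            # Approval boundary: powerful tools never run without an explicit grant,
            # and the denial itself is audited.
            if tool.get("requires_approval") and not approved:
                self.audit_log.append((key, tool["name"], "denied"))
                return {"status": "needs_approval"}

            result = tool["fn"](event)             # chat becomes action here
            self.audit_log.append((key, tool["name"], "ok"))
            return {"status": "ok", "result": result}
```

The single lock per session key is what makes "single-writer execution" an invariant rather than a convention: two concurrent events for the same session serialize at the lane, while events for different sessions proceed in parallel.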
Speaker
Vinoth Govindarajan
Member of Technical Staff @OpenAI, Co-Author of "Engineering Lakehouses with Open Table Formats", and Writer of The Agent Stack
Vinoth Govindarajan is a Member of Technical Staff at OpenAI, where he works on core data infrastructure for large-scale AI systems and internal agent-facing platforms. His recent work includes internal-facing agents, such as a support and on-call assistant for Data Platform that applies agent memory and retrieval to operational triage.
He is co-author of Engineering Lakehouses with Open Table Formats and writes The Agent Stack, a systems-first publication on production AI agents and data infrastructure. His work focuses on control planes, state, memory, tool boundaries, reliability, and the architecture patterns that make agent systems safe and predictable in production.