Architecting the Data Layer for AI Agents: From Transactional Systems to MCP and Semantic Models

Most conversations about production AI agents focus on the agent itself — the prompts, the orchestration, the framework. But the moment you put an agent in front of real enterprise data, a different problem dominates: the data layer wasn't designed for this consumer. Data lakes were built for analysts and dashboards. Transactional systems were built for applications. Neither was built for a non-deterministic, token-hungry, latency-sensitive reasoning loop that may issue thousands of unpredictable queries per minute.

This talk takes the data architect's view of agentic systems. We'll walk through the architectural decisions that determine whether your agents are reliable and affordable in production — or quietly bankrupting your team.

You'll learn how to draw the line between deterministic and non-deterministic computation, and why getting that boundary right is the single biggest reliability lever you have. We'll cover when to route an agent to a transactional system versus a data lake, and what each choice costs in latency and consistency. You'll see how to add low-latency serving layers to a traditional data lake to make it agent-ready, how to design MCP servers that scale and stay secure under agent-driven traffic, and how a semantic layer can dramatically reduce hallucinations by translating raw schemas into agent-native concepts.
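To make the MCP point concrete, here is a minimal sketch of what an agent-facing tool definition can look like. The tool name, schema fields, and validation helper are invented for illustration and are not taken from the talk; the idea is that an MCP server advertises narrow, well-described capabilities instead of handing the agent raw SQL access:

```python
# Hypothetical MCP-style tool definition (names invented for illustration).
# The server exposes one narrow capability with a JSON Schema describing its input,
# so the agent never sees raw tables or writes free-form queries.
GET_CUSTOMER_BALANCE_TOOL = {
    "name": "get_customer_balance",
    "description": "Return the current account balance for one customer.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {
                "type": "string",
                "description": "Internal customer ID",
            },
        },
        "required": ["customer_id"],
    },
}


def validate_call(tool: dict, arguments: dict) -> list[str]:
    """Minimal server-side guard: list any required arguments the agent omitted."""
    schema = tool["inputSchema"]
    return [key for key in schema.get("required", []) if key not in arguments]


# An agent call missing its required argument is rejected before touching data.
print(validate_call(GET_CUSTOMER_BALANCE_TOOL, {}))  # ['customer_id']
```

Validating agent-supplied arguments server-side, as sketched here, is one of the patterns that keeps an MCP server predictable under non-deterministic traffic.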

We'll also tackle the topic that quietly kills most agent projects: token economics. We'll share concrete patterns to keep agent workloads financially viable as they scale.
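A back-of-the-envelope model shows why token economics dominates at agent scale. All prices and token counts below are illustrative assumptions, not figures from the talk:

```python
# Illustrative token-cost model; prices and token counts are made-up assumptions.
def monthly_cost(calls_per_day: int, input_tokens: int, output_tokens: int,
                 usd_per_1m_input: float, usd_per_1m_output: float) -> float:
    """Estimate a 30-day bill for an agent workload at a fixed call rate."""
    per_call = (input_tokens * usd_per_1m_input
                + output_tokens * usd_per_1m_output) / 1_000_000
    return per_call * calls_per_day * 30

# An agent that stuffs a 20k-token raw schema dump into every call...
naive = monthly_cost(100_000, 20_000, 500, 3.0, 15.0)
# ...versus one that sends a 2k-token curated summary instead.
lean = monthly_cost(100_000, 2_000, 500, 3.0, 15.0)
print(f"${naive:,.0f} vs ${lean:,.0f} per month")  # $202,500 vs $40,500 per month
```

Even with invented numbers, the shape of the result holds: per-call input context is multiplied by every agent invocation, so trimming what the data layer sends to the model is usually the highest-leverage cost fix.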

If you're moving agents from prototype to production, you'll leave with a decision framework — and a clear mental model for the data stack that has to exist beneath every reliable AI system.

Main Takeaways:

  1. Architectural strategies for serving data to AI agents — when to use transactional systems vs. data lakes, and how to add low-latency layers to make existing platforms agent-ready.

  2. Patterns for implementing scalable, secure, and token-efficient MCP servers under high-volume agent traffic.
  3. How a semantic data layer improves inference accuracy and reduces hallucinations.
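The third takeaway can be pictured with a minimal sketch. The table, column, and concept names below are invented for illustration: a semantic layer maps cryptic warehouse columns to agent-native concepts, so the agent reasons over business terms rather than raw schema identifiers:

```python
# Hypothetical semantic-layer mapping (table and concept names invented).
# Raw warehouse columns are cryptic; the agent is shown business concepts instead.
SEMANTIC_MODEL = {
    "customer_lifetime_value": {
        "source": "dw.fct_orders.ltv_brl_cents",   # raw column, hidden from the agent
        "description": "Total revenue from a customer, in BRL.",
        "unit": "BRL",
        "transform": lambda cents: cents / 100,     # cents -> BRL
    },
}


def describe_for_agent(model: dict) -> str:
    """Render the concepts an agent may query, hiding raw column names."""
    return "\n".join(
        f"- {name}: {spec['description']}" for name, spec in model.items()
    )


print(describe_for_agent(SEMANTIC_MODEL))
# - customer_lifetime_value: Total revenue from a customer, in BRL.
```

Grounding prompts in descriptions like these, instead of raw `SELECT *` schema dumps, is one way a semantic layer reduces both hallucinated column names and wasted tokens.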

Interview:

What is the focus of your work these days?

I am the Data Intelligence Director at TOTVS, the largest tech company in Brazil. My mission is to provide data platforms and strategies for AI applications.

What is the motivation behind your talk?

As we deliver AI agents in production, many questions arise about how to provide data to those agents in a secure and affordable way. I want to share what we have learned over the last few years and offer techniques that help avoid common pitfalls.

Who is your session for?

Data engineers and AI agent developers who need to deploy AI agents in production.


Speaker

Fabiane Nardon

Data Expert, Java Champion & Data Platform Director @totvs

Fabiane Bizinella Nardon is a tech executive with 20+ years architecting large-scale data systems. She currently leads Data Intelligence at TOTVS, Brazil's largest tech company, where her team designs data platforms and engineering strategies for the AI era — enabling LLM and agent-driven products with strong foundations in governance, security, and cost-efficient operation, including token usage optimization and trustworthy retrieval.

Previously, she was CTO at Tail (acquired by TOTVS in 2020), where she led data and ML systems processing 4B+ new records per day. Her PhD focused on Ontologies and Semantic Data Models, and she was an early practitioner of RDF and semantic technologies — foundations now critical for grounded AI systems.

Fabiane is a Java Champion and co-founder of SouJava. She hosted the Architecture in the Age of AI track at QCon London 2026 and has served on program committees for QCon London and QCon San Francisco.


Date

Monday Jun 1 / 02:30PM EDT (50 minutes)

Location

Metcalf Hall Small
