Decision Models in Agentic Architectures: From Production to Agent Skills

In agentic architectures, not every step can be probabilistic. When an agent needs to decide whether to approve a loan, which treatment protocol to apply to a patient, or whether to pay an insurance claim, the output needs to be deterministic and explainable.

The technology to solve this isn't new. Decision models and rule engines are common across enterprise environments. This session draws from a real production deployment at a large financial services enterprise where decision models are invoked as part of a data pipeline, processing millions of evaluations daily at tens of thousands of transactions per second. The same decision models are then served to agents for real-time interaction.

Consider a concrete example: a back-office analyst asks an agent to evaluate a customer's risk profile. The agent pulls context from documents, bank systems of record, and internal notes, interprets unstructured inputs, and assembles what the decision model needs. Then it invokes the decision model as a skill for the actual risk classification. The result is deterministic and fully traceable. The agent handles what's probabilistic. The decision model handles what can't be.
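The shape of that hand-off can be sketched in a few lines. This is a minimal illustration, not the production system: the names, thresholds, and inline rules are hypothetical, and a real deployment would evaluate a DMN model through a decision engine rather than hand-written Python.

```python
from dataclasses import dataclass

@dataclass
class RiskInput:
    debt_to_income: float
    missed_payments: int
    tenure_years: int

def classify_risk(inp: RiskInput) -> str:
    """Deterministic decision logic: the same input always yields the same class."""
    if inp.missed_payments >= 3 or inp.debt_to_income > 0.6:
        return "HIGH"
    if inp.debt_to_income > 0.4 or inp.tenure_years < 1:
        return "MEDIUM"
    return "LOW"

def risk_skill(context: dict) -> dict:
    """The skill the agent invokes.

    The probabilistic steps (reading documents, interpreting notes) produce the
    structured `context`; the deterministic step produces the classification.
    """
    inp = RiskInput(
        debt_to_income=float(context["debt_to_income"]),
        missed_payments=int(context["missed_payments"]),
        tenure_years=int(context["tenure_years"]),
    )
    # Fully traceable: return the decision together with the exact inputs
    # that produced it, so the outcome can be audited and reproduced.
    return {"decision": classify_risk(inp), "inputs": context}

print(risk_skill({"debt_to_income": 0.35, "missed_payments": 0, "tenure_years": 5}))
```

The boundary is the point of the sketch: everything above the skill call may be probabilistic; everything inside it is replayable and auditable.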

Decision models are executable and algorithmically efficient, and each business rule remains individually identifiable, testable, and maintainable as a discrete component. In the age of AI-assisted development, code is increasingly generated, not written. Decision models are no exception. The difference lies in what gets preserved: generated code consolidates rules into control flow, blurring the link between policy and implementation. A decision model keeps that link explicit. When a regulation evolves, you know exactly which component is affected.
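The contrast can be made concrete. In the hypothetical sketch below, rules are kept as discrete, named rows (mirroring a DMN decision table with a "first" hit policy) instead of being consolidated into nested if/else; every decision traces back to exactly one identifiable rule, and a regulatory change to a threshold touches exactly one row. Rule ids and thresholds are invented for illustration.

```python
# Each rule is a discrete component: (rule id, predicate over the case, outcome).
# Rows are individually identifiable and testable; first matching row wins.
RULES = [
    ("R1-missed-payments", lambda c: c["missed_payments"] >= 3, "HIGH"),
    ("R2-high-dti",        lambda c: c["debt_to_income"] > 0.6, "HIGH"),
    ("R3-elevated-dti",    lambda c: c["debt_to_income"] > 0.4, "MEDIUM"),
]
DEFAULT = ("R-default", "LOW")

def decide(case: dict) -> tuple[str, str]:
    """Return (rule id, outcome) so every decision is traceable to one rule."""
    for rule_id, predicate, outcome in RULES:
        if predicate(case):
            return rule_id, outcome
    return DEFAULT

print(decide({"missed_payments": 0, "debt_to_income": 0.5}))
# → ('R3-elevated-dti', 'MEDIUM')
```

Generated code would typically fold these rows into control flow, producing the same outcomes but losing the rule identities; the table form is what keeps the policy-to-implementation link explicit.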

In this presentation, we'll walk through a live demonstration using the DMN (Decision Model and Notation) standard. We'll show how the decision model is built, how it serves a data pipeline at scale, and how the same model operates as an agent skill for real-time interaction. We'll also demonstrate how this pattern is evolving, using decision models as guardrails directly inside LLM pipelines with NVIDIA NeMo.

Main Takeaways:

  1. Decision model agentic pattern: how existing enterprise decision models and rule engines become agent skills for deterministic and explainable execution.

  2. When to use decision models vs. regular code within agentic systems, and the architectural trade-offs that drive that choice.

  3. How isolating decisions in a dedicated model separates business logic from application control flow, enabling teams, including non-developers, to evolve, test, and validate decisions with a narrowed business focus.

Interview:

What is the focus of your work these days?

Building systems where AI and deterministic decision logic work together in production. After leading Drools, jBPM, and Kogito as Chief Architect and Engineering Leader at IBM (and previously at Red Hat), the question driving my work now is: how can AI do its best work while every step and decision that follows remains governed, traceable, and provable?

What is the motivation behind your talk?

Every conversation with enterprise teams hits the same wall. They want AI to do real work in their systems, but compliance and accountability hold it back at the prototype stage. The gap isn't better models. It's that most teams haven't designed the collaboration between AI and the provable decision layer. This session walks through a working approach, demonstrated live with a regulated loan approval workflow.

What is your session about, and why is it important for senior software developers?

The session is about how decision models work as a deterministic layer inside agentic architectures. Agents are great at handling ambiguity, pulling context from documents and systems, interpreting unstructured input. But when the system needs to approve a loan, classify a customer's risk, or apply a treatment protocol, the result has to be deterministic, traceable, and explainable. I'll walk through a real production deployment processing millions of evaluations daily at tens of thousands of transactions per second, where the same decision model serves a high-throughput data pipeline and operates as a skill for real-time agent interaction. For senior developers, this is the missing piece between an interesting AI prototype and an AI system that actually runs in regulated, mission-critical environments.

Why is it critical for software leaders to focus on this topic right now?

The problem of consistently enforcing policies, procedures, and regulations in an auditable, reproducible way is not new. It has been an application-layer concern for decades. What changed is that the agentic stack and LLMs introduced a new surface where the same problem needs to be solved, and the pressure to adopt AI is real. Without addressing it, leaders end up with two bad options: put the business at risk by accepting non-deterministic behavior in high-stakes flows, or avoid AI in the exact places where it would create the most value. Getting this foundation right is what unlocks meaningful adoption.

What are the common challenges developers and architects face in this area?

The pattern I keep seeing is teams trying to push deterministic behavior into a probabilistic system. They fine-tune, expand training data, layer on harnesses, and squeeze the model toward the right answer. That works for many use cases, but it does not hold up in high-stakes systems where 99% is not enough and where every output also has to be explainable. The challenge is not that the model is wrong; it is that the architecture is asking the wrong component to be accountable.

What's one thing you hope attendees will implement immediately after your talk?

Look at your current architecture, agentic or traditional, and identify where it makes sense to separate policy from general-purpose code. Once those boundaries are clear, pick one or two real cases and experiment with a decision engine. Apache KIE is open source and gives you everything needed to get started. That exercise alone builds the foundation for applying everything in the talk to an agentic context.

Who is your session for?

Senior software architects and engineering leaders who are past the AI proof-of-concept stage and dealing with governance, traceability, or compliance requirements in production AI systems.

What makes QCon stand out as a conference for senior software professionals?

I have a long history with QCon. I helped organize the São Paulo edition in 2013 and Rio in 2014, hosted a track at QCon Plus in 2021, and spoke at QCon London in 2022. What makes it unique is that you learn from every direction. The hallway conversations are as valuable as the sessions because the room is full of senior practitioners who have actually solved the problems they are talking about. Speakers are carefully selected to combine current market relevance with real production experience. That combination is hard to find at other events.


Speaker

Alex Porcelli

Co-founder and CEO @Aletyx - Bridging Symbolic AI + GenAI for Enterprise AI, PPMC Apache KIE - 17+ yrs Driving Drools, jBPM and Kogito, Previously @IBM and @Red Hat

Alex Porcelli is Co-founder and CEO of Aletyx and a seasoned architect with nearly 30 years of experience. A long-time open-source contributor and member of the Apache KIE Project Management Committee, he has spent over 15 years leading Drools, jBPM, and Kogito, foundational technologies for deterministic, reasoning-based AI and enterprise automation. As Chief Architect and Engineering Leader at IBM (and previously at Red Hat), he led IBM's first open-source product, following the Red Hat model. At Aletyx, he builds systems where AI and governed decision execution work together in mission-critical environments.
