Every CRM, ERP, and warehouse management system in your organization is a digital version of something that used to happen on paper. The sales pipeline existed before Salesforce. It was a box of index cards sorted by stage. SAP did not invent the purchase order; it replaced the carbon-copy forms moving between desks. Warehouse management systems digitized the clipboard on the factory floor. Enterprise IT made these processes faster, better audited, more reliable. The steps themselves never changed.
Enterprise architecture grew up around this model. Map the applications, define the integrations, govern the changes, maintain a target state. The assumption underneath all of it: work flows through known processes, and architecture's job is to make sure the systems supporting those processes are well-structured and well-governed. The target state diagram became the architect's primary instrument: draw where the landscape should be in 18 months, then govern the journey toward it.
This model held for thirty years because the work was process-shaped. It no longer is.
When the Process Doesn't Exist Until It Runs
Agentic AI does not automate processes. It decides them. Given a goal, whether that is resolving a customer complaint, assessing an insurance claim, or enriching a travel request, an agent reasons about what to do next based on the data it encounters and the context it accumulates. There is no predefined sequence. The "process" does not exist until the agent creates it at runtime.
Cloud migration, microservices, data lakes: those were significant shifts, but they operated within the same paradigm. You still designed processes, built systems to support them, and governed the result. Agentic intelligence breaks the paradigm because the thing you used to design, the process itself, is now the thing the agent decides. Researchers studying BPMN extensions for agentic workflows reached the same conclusion from the notation side: BPMN has no constructs for modeling non-deterministic agent behavior. The notation that has underpinned process modeling for twenty years does not have a symbol for "the agent decides."
In a previous article, I argued that structured knowledge is the real competitive layer for AI tools. At the individual level, that holds. At the enterprise level, the question is harder: what happens when the agent's operating surface is an application landscape assembled over decades for a fundamentally different kind of work?
From Target State to Constraint Envelope
Agentic systems invalidate the core assumption of design-time architecture: that if you design it correctly, it will behave correctly. You cannot fully predict what an agent will do, because its behavior depends on context it has not yet encountered. Two claims with identical policy terms may produce different processing paths depending on claimant history, document quality, and cross-referenced data. The same input, different execution. That variability is the entire value proposition.
The architectural response starts at the level where enterprise architects actually work. In a design-time model, the architect places the claims processing capability in the insurance domain, specifies event-driven integration with the customer data platform, and governs changes through the standard architecture review process. Domain boundaries, integration patterns, governance cadence: all defined upfront, all assumed to be stable.
In an agentic model, the architect defines what we might call a constraint envelope for that domain. Which domain boundaries agents may cross. What integration patterns are permitted for agent-to-system communication. What observability standards apply, and under what conditions autonomous decisions must escalate to a human. The internal workflow is no longer specified, because the agent composes it at runtime. But the domain boundaries, integration contracts, and escalation patterns still are. Everything inside the envelope is permitted. Everything outside is blocked or escalated.
The abstraction level is the same. The nature of what gets specified is different. Design-time architecture assumes the landscape inside the boundaries is stable and convergent. A constraint envelope assumes the landscape inside is dynamic, but the boundaries themselves are deliberate, traceable, and governed.
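One way to make the envelope concrete is to write it down as data rather than as a diagram. The sketch below is illustrative, not a real policy language: the domain names, integration patterns, and escalation triggers are hypothetical stand-ins for whatever a given organization would specify.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConstraintEnvelope:
    """Declarative boundaries for agents operating in one domain."""
    domain: str
    crossable_domains: frozenset   # domain boundaries agents may cross
    allowed_patterns: frozenset    # permitted agent-to-system integration patterns
    escalation_triggers: frozenset # conditions that force a human decision

# Hypothetical envelope for the claims-processing example above.
claims_envelope = ConstraintEnvelope(
    domain="insurance.claims",
    crossable_domains=frozenset({"customer.data", "insurance.policy"}),
    allowed_patterns=frozenset({"event", "read-only-api"}),
    escalation_triggers=frozenset({"payout_above_limit", "fraud_signal"}),
)
```

Nothing inside the envelope is enumerated: there is no workflow, no sequence, no steps. Only the boundaries are specified, which is exactly the inversion the text describes.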
Continuous Enforcement, Not Periodic Review
Periodic review worked for decades because the processes themselves did not change between reviews. Governance could operate on human timescales. An architecture review board that meets monthly cannot govern an agent making hundreds of decisions per hour; by the time the board convenes, the gap between what the agent did and what anyone evaluated is measured in hundreds of thousands of decisions. Architecture practice must move from approval-based governance to continuous enforcement: policies defined as executable rules that evaluate every agent action in real time.
Research on behavioral contracts illustrates why. Studies on AgentAssert found that agents operating under explicit behavioral contracts detected 5.2 to 6.8 policy violations per session that uncontracted agents missed entirely. Each small deviation created precedent for the next, because every agent decision becomes context for subsequent decisions. Drift in agentic systems compounds exponentially, not gradually.
The architect's role changes accordingly. The gatekeeper who approves designs becomes the policy designer whose rules execute autonomously. The output shifts from architecture principles that humans interpret to architecture policies that machines enforce. The precision bar is higher: constraints must be specific enough that a system can evaluate them, not just specific enough that a committee can discuss them.
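The shift from principles to policies can be sketched as rules that run on every action. Everything here is an assumption for illustration: the rule names, the action fields, the thresholds, and the three-way verdict are not drawn from any real policy engine.

```python
# Each policy is a predicate over an agent action, paired with the verdict
# to return if the predicate fails. Actions are plain dicts in this sketch;
# a real system would use typed events.

def within_domain(action):
    # Hypothetical rule: agents may only target systems in approved domains.
    return action["target_domain"] in {"insurance.claims", "customer.data"}

def under_payout_limit(action):
    # Hypothetical rule: autonomous payouts above a threshold need a human.
    return action.get("payout_eur", 0) <= 5000

POLICIES = [
    (within_domain, "block"),          # outside the envelope: stop the action
    (under_payout_limit, "escalate"),  # inside, but requires a human decision
]

def enforce(action):
    """Evaluate every policy against a single agent action, in real time."""
    for rule, verdict_if_violated in POLICIES:
        if not rule(action):
            return verdict_if_violated
    return "allow"
```

The precision bar the text mentions is visible in the code: "payouts should be reasonable" cannot be evaluated by a machine; `payout_eur <= 5000` can. Writing rules at that precision is the policy designer's job.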
Observation as an Architecture Concern
Traditional EA predicts how systems will behave by designing them upfront. In an agentic model, you observe how agents actually behave, and that observation requires deliberate instrumentation. The distinction matters because when the same input can produce different execution paths, prediction alone is insufficient. You need to know what actually happened: which decisions were made, what data was consulted, whether policies were evaluated or bypassed, and what triggered escalation. These are architecture questions, not operational telemetry. They determine whether the constraint envelope is holding.
The EU AI Act, whose obligations phase in through 2026, makes this concrete. The Act regulates through what The Future Society describes as "a static compliance model that assumes fixed configurations with predetermined relationships." Agentic systems are neither fixed nor predetermined. IBM frames the challenge directly: auditing AI agents is about whether an organization can explain what an agent did, why it did it, and who was accountable for letting it operate that way. Without architectural observability, that explanation does not exist.
Observability here means traceability by design. Which policies governed the session, what data the agent accessed, how it reasoned, what it decided. The architect specifies this upfront, alongside data models and integration patterns, because a system that cannot be observed cannot be governed.
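Traceability by design can start as simply as a mandatory record per decision. The fields below follow the questions in the text (which policies governed the session, what data was accessed, what was decided, whether a human was pulled in); the exact schema is an assumption, not a standard.

```python
from dataclasses import dataclass, field
import time

@dataclass
class DecisionTrace:
    """One record per agent decision: the unit of architectural observability."""
    session_id: str
    policies_evaluated: list  # which rules governed this decision
    data_accessed: list       # what the agent consulted
    decision: str             # what it decided
    escalated: bool           # whether a human was involved
    timestamp: float = field(default_factory=time.time)

# Hypothetical trace for one step of the claims example.
trace = DecisionTrace(
    session_id="claim-4711",
    policies_evaluated=["within_domain", "under_payout_limit"],
    data_accessed=["claimant_history", "policy_terms"],
    decision="approve_partial_payout",
    escalated=False,
)
```

The architectural point is that this record is specified upfront, as a contract every agent must satisfy, not reconstructed afterward from application logs.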
The Triage Question
Not every workload needs agents. Payroll processing has fixed steps, and standard order fulfillment follows a known sequence. For these workloads, process automation remains the right paradigm. The index-card box, digitized, is still a good model when the process is genuinely fixed.
The question that matters for enterprise architects is where the boundary sits. Where in your organization is the process truly fixed, and where does the next step genuinely depend on context? Customer complaints where the resolution depends on history, sentiment, and policy interpretation. Insurance claims where eligibility requires judgment across multiple data sources. A travel disruption where the right response depends on downstream bookings and real-time availability. These workloads are where agentic intelligence creates value, and where the architecture underneath must be designed for runtime.
The triage itself is an architectural act. Classify a fixed process as context-dependent and you add unnecessary complexity. Go the other direction and you lock your organization into automation that cannot handle the variability the work actually requires. Getting the boundary right is not a one-time exercise; it shifts as agent capabilities mature and as your organization learns which processes were only fixed because the tooling could not handle variability.
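The triage can be approximated with a crude rubric: score a workload on whether its next step genuinely depends on context. The signals and weights below are purely illustrative assumptions, meant to show the shape of the decision rather than prescribe it.

```python
def triage(workload):
    """Score context-dependence; above the threshold, consider agentic design.

    `workload` is a dict of yes/no signals. The signals, weights, and
    threshold are illustrative assumptions, not an established rubric.
    """
    signals = {
        "next_step_depends_on_data": 3,  # the sequence emerges at runtime
        "requires_judgment": 2,          # eligibility, sentiment, interpretation
        "multiple_data_sources": 1,
        "fixed_regulatory_sequence": -3, # strong sign process automation suffices
    }
    score = sum(weight for key, weight in signals.items() if workload.get(key))
    return "agentic" if score >= 3 else "process-automation"

# Payroll: fixed steps, no runtime judgment.
payroll = {"fixed_regulatory_sequence": True}
# Complaint resolution: depends on history, sentiment, policy interpretation.
complaints = {"next_step_depends_on_data": True, "requires_judgment": True}
```

Because the boundary shifts as capabilities mature, a rubric like this would be rerun periodically rather than applied once.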
New orchestration models are emerging that treat agents as actors with goals and constraints rather than nodes in a predetermined flow. That shift deserves its own treatment. The first step is simpler: look at your architecture practice and ask whether it is still organized entirely around design-time artifacts. If it is, the agents are already outrunning your governance.