Model Overview: Tier Flow & Data Direction
The model defines five tiers arranged in a deliberate Consumer-to-Producer flow. Requests originate on the Consumer side and flow through the tiers toward authoritative data on the Producer side. The AI / Intelligence Tier sits on the Producer side, positioned to receive input from the Product Tier and produce results before engaging the Service Tier.
Request flow (Consumer side › Producer side):

Presentation (Device & Channel · UX Rendering) [Consumer]
  › Product (Interaction · Auth & Coarse AuthZ) [Consumer]
  › AI / Intelligence (Inference · RAG · Agents · Models) [Producer — NEW]
  › Service (Integration · Routing & Protocol) [Producer]
  › Recordkeeping (SoR · Business Logic · Fine-Grained AuthZ) [Producer]
Key Separation-of-Responsibility Principles: Foundational Rules
These principles are the non-negotiable rules that give the model its power. Violating them degrades data integrity, creates security gaps, and undermines the agility the model is designed to provide.
01 · Systems of Record
All authoritative data lives exclusively in the Recordkeeping Tier. No other tier maintains its own copy of business data. The Feature Store, Vector DB, and Data Lakehouse are the AI-era extensions of this principle — the AI Tier produces results but does not own data.
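A minimal sketch of the Systems of Record principle, assuming hypothetical tier classes (none of these names come from the model itself): the Recordkeeping Tier is the sole owner of business data, while the AI Tier holds only derived, disposable artifacts keyed back to SoR identifiers.

```python
# Sketch: Recordkeeping owns business data; the AI Tier caches only
# derived artifacts (here, a toy embedding) that can always be rebuilt.
from dataclasses import dataclass, field


@dataclass
class RecordkeepingTier:
    """Sole owner of authoritative business data."""
    customers: dict = field(default_factory=dict)

    def get_customer(self, customer_id: str) -> dict:
        return self.customers[customer_id]


@dataclass
class AITier:
    """Holds derived artifacts keyed to SoR ids; never a second source of truth."""
    sor: RecordkeepingTier
    embedding_cache: dict = field(default_factory=dict)

    def embed(self, customer_id: str) -> list:
        # Derived data is rebuilt from the SoR on demand; losing the cache
        # loses nothing authoritative.
        record = self.sor.get_customer(customer_id)
        vector = [float(len(str(v))) for v in record.values()]  # toy embedding
        self.embedding_cache[customer_id] = vector
        return vector


sor = RecordkeepingTier(customers={"c1": {"name": "Ada", "segment": "retail"}})
ai = AITier(sor=sor)
print(ai.embed("c1"))
```

The design point is the direction of the arrow: the AI Tier depends on the Recordkeeping Tier, never the reverse.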
02 · Business Logic
Business rules, validation, and transaction logic reside solely in the Recordkeeping Tier. The Product Tier handles presentation logic only. The Service Tier routes — it does not decide. The AI Tier orchestrates — it does not own business rules.
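The "Service Tier routes, Recordkeeping decides" split can be sketched as follows (all class and method names are illustrative, not prescribed by the model):

```python
# Sketch: validation and transaction rules live only in Recordkeeping;
# the Service Tier dispatches requests without inspecting or deciding.

class RecordkeepingTier:
    def __init__(self):
        self.orders = {}

    def create_order(self, order_id: str, amount: float) -> str:
        # Business rule enforced here and nowhere else.
        if amount <= 0:
            raise ValueError("order amount must be positive")
        self.orders[order_id] = amount
        return "created"


class ServiceTier:
    """Routing and protocol translation only; no business rules."""
    def __init__(self, routes: dict):
        self.routes = routes

    def dispatch(self, operation: str, **payload):
        return self.routes[operation](**payload)


rk = RecordkeepingTier()
svc = ServiceTier(routes={"create_order": rk.create_order})
print(svc.dispatch("create_order", order_id="o1", amount=99.5))  # created
```

If a rule ever appears in `ServiceTier.dispatch`, the principle has been violated: the same rule would then need duplicating in every other path to the data.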
03 · Security Separation
Product Tier: user authentication and coarse-grained authorization (feature / product access). Recordkeeping Tier: fine-grained / data-level authorization (row, field, record). Service Tier relays credentials only. AI Tier enforces model-level guardrails and prompt injection defense.
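The two-level authorization split can be shown with a hypothetical check chain (user names, feature names, and data structures here are stand-ins):

```python
# Sketch: coarse-grained feature access is checked in the Product Tier,
# row-level access in the Recordkeeping Tier. Both must pass.

PRODUCT_FEATURES = {"alice": {"reports"}, "bob": {"reports", "admin"}}
ROW_OWNERS = {"r1": "alice", "r2": "bob"}


def product_tier_authorize(user: str, feature: str) -> bool:
    """Coarse-grained: may this user use this feature at all?"""
    return feature in PRODUCT_FEATURES.get(user, set())


def recordkeeping_authorize(user: str, row_id: str) -> bool:
    """Fine-grained: may this user read this specific record?"""
    return ROW_OWNERS.get(row_id) == user


def read_report(user: str, row_id: str) -> str:
    if not product_tier_authorize(user, "reports"):
        raise PermissionError("feature access denied (Product Tier)")
    if not recordkeeping_authorize(user, row_id):
        raise PermissionError("record access denied (Recordkeeping Tier)")
    return f"report {row_id}"


print(read_report("alice", "r1"))
```

Note that neither check substitutes for the other: passing the coarse gate says nothing about which rows a user may see.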
04 · Logging Separation
Product Tier logs user interactions (what did the user do?). Recordkeeping Tier logs transactions and data changes (what changed, when, by whom?). AI Tier logs model operational data. Systems-level logging (performance, error, event) belongs in the Platform Layer exclusively.
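One way to picture the logging split is a set of per-owner log streams; the stream names and event fields below are assumptions for illustration only:

```python
# Sketch: each event class has exactly one owning stream, so "who logs
# what" is never ambiguous and audit trails do not overlap.
import json

LOG_STREAMS = {"product": [], "recordkeeping": [], "ai": [], "platform": []}


def log_event(tier: str, event: dict) -> None:
    if tier not in LOG_STREAMS:
        raise ValueError(f"unknown log owner: {tier}")
    LOG_STREAMS[tier].append(json.dumps(event))


# Product Tier: what did the user do?
log_event("product", {"type": "user_action", "action": "clicked_export"})
# Recordkeeping Tier: what changed, when, by whom?
log_event("recordkeeping", {"type": "data_change", "record": "c1", "by": "alice"})
# AI Tier: model operational data.
log_event("ai", {"type": "inference", "model": "ranker-v2", "latency_ms": 41})
# Platform Layer: systems-level performance / error / event logging.
log_event("platform", {"type": "error", "service": "gateway", "code": 502})

print({k: len(v) for k, v in LOG_STREAMS.items()})
```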
05 · Infrastructure Layer = Shared Libraries
The Infrastructure Layer provides domain-agnostic shared libraries (JIL pattern) consumed by all tiers uniformly. Cross-cutting technical utilities live here: logging libs, security libs, i18n frameworks, config clients, MLOps pipeline, and AI governance frameworks. No business logic belongs in any layer.
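The shared-library idea can be sketched with one domain-agnostic utility consumed identically by two tiers; the config client below is a made-up example and does not model the JIL pattern's internals:

```python
# Sketch: an Infrastructure Layer library contains plumbing only, and
# every tier consumes it the same way -- solved once, reused everywhere.

class SharedConfigClient:
    """Infrastructure Layer library: no business logic, only plumbing."""
    def __init__(self, values: dict):
        self._values = values

    def get(self, key: str, default=None):
        return self._values.get(key, default)


config = SharedConfigClient({"log_level": "INFO", "region": "eu-west-1"})


class ProductTier:
    def __init__(self, cfg: SharedConfigClient):
        self.log_level = cfg.get("log_level")


class ServiceTier:
    def __init__(self, cfg: SharedConfigClient):
        self.region = cfg.get("region")


# Both tiers depend on the same library in the same direction.
print(ProductTier(config).log_level, ServiceTier(config).region)
```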
06 · AI Tier Placement
The AI / Intelligence Tier is a Producer-side tier, sitting between the Product Tier and the Service Tier. It receives requests from Product, produces results via inference / RAG / agent orchestration, then calls downstream services via the Service Tier. Its authoritative data stores reside in the Recordkeeping Tier.
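The full request path with the AI Tier in place can be traced end to end; the classes below are a hypothetical mock (the "inference" step is a string formatter standing in for a model call):

```python
# Sketch of the placement principle: Product -> AI (inference / RAG
# orchestration) -> Service (routing only) -> Recordkeeping (data owner).

class Recordkeeping:
    DATA = {"q1": "policy document text"}

    def fetch(self, key: str) -> str:
        return self.DATA[key]


class Service:
    def __init__(self, rk: Recordkeeping):
        self.rk = rk

    def route(self, key: str) -> str:
        return self.rk.fetch(key)  # relay only, no decisions


class AITier:
    def __init__(self, svc: Service):
        self.svc = svc

    def answer(self, question: str, key: str) -> str:
        context = self.svc.route(key)                   # retrieve via Service Tier
        return f"answer({question}) using [{context}]"  # mock inference


class Product:
    def __init__(self, ai: AITier):
        self.ai = ai

    def handle_request(self, question: str) -> str:
        return self.ai.answer(question, key="q1")


app = Product(AITier(Service(Recordkeeping())))
print(app.handle_request("What is the policy?"))
```

The AI Tier orchestrates and produces; it reaches data only through the Service Tier, and the data itself stays in Recordkeeping.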
Enterprise Benefits: Value to EA & the IT Organization
The Tiers & Layers model delivers measurable architectural and organizational value. These benefits apply across the full IT organization and compound over time as the model matures and adoption deepens.
Agility & Independent Deployability
Clear tier boundaries enable each tier to evolve, scale, and deploy independently. Teams can release changes to the Product Tier without touching Recordkeeping, reducing regression risk and accelerating delivery cycles. AI capabilities can be introduced or swapped without disrupting existing tiers.
Data Integrity & Single Source of Truth
The Systems of Record principle guarantees one authoritative version of every business entity. This eliminates data drift, reconciliation overhead, and the "which system is correct?" problem that plagues organizations without a defined recordkeeping boundary.
Security by Design
Authentication, coarse-grained authorization, and fine-grained data authorization are assigned to specific tiers by design — not left to individual development teams to figure out. This eliminates authorization gaps, reduces attack surface, and simplifies compliance audits.
Integration Simplicity
The Service Tier provides a well-defined integration boundary. External systems, third-party APIs, and AI models all connect through a single layer, preventing point-to-point integration sprawl and making the integration topology understandable and governable.
Governance & Standards Enforcement
The model provides an unambiguous framework for governing where capabilities belong. Architecture reviews, solution approvals, and design standards can all reference a single shared model, reducing debate and accelerating decision-making across teams and programs.
AI Readiness Without Disruption
The AI / Intelligence Tier is a first-class architectural construct that slots into the existing model. Organizations can introduce LLMs, RAG pipelines, and agent orchestration without restructuring their existing tiers or compromising the principles that protect data integrity and security.
Onboarding & Knowledge Transfer
A single shared reference model dramatically reduces onboarding time for new architects, developers, and technology leaders. "Where does this belong?" has a clear answer, reducing architectural debt caused by well-intentioned but inconsistent design decisions.
Observability & Auditability
Assigning logging responsibilities by tier means every event has a defined owner. User activity, business transactions, AI inference events, and system metrics each have a specific home. This makes audit trails complete, compliance reporting straightforward, and incident investigation faster.
Value by Role: What This Model Means for You
The model speaks differently to each architecture and leadership discipline. Below is the specific value proposition for each stakeholder role.
For technology executives (CIO / CTO):
- Provides a single, durable architectural language shared across all technology teams and programs
- Reduces the cost and risk of AI adoption by defining exactly where AI capabilities fit without disrupting existing investments
- Accelerates delivery by eliminating "where does this belong?" debates at the team level
- Supports regulatory compliance by making security boundaries and audit logging explicit and enforceable
- Enables portfolio-level governance: solutions can be evaluated against a consistent standard
- Demonstrates IT maturity to business stakeholders with a clear, communicable architecture framework
For enterprise architects:
- Provides a TOGAF-aligned, SAFe-compatible reference model that spans the full application portfolio
- The Consumer / Producer distinction gives a principled basis for evaluating any proposed capability or integration
- The AI Tier is a governed construct, not an ad-hoc addition, preventing AI sprawl across tiers
- Infrastructure Layer clearly defines what belongs in shared services vs. what is tier-specific
- Separation principles serve directly as architecture review criteria and design guardrails
- Supports capability mapping, value stream analysis, and application rationalization against a stable model
For solution architects:
- Eliminates ambiguity about where business logic, integration, and data access belong in solution designs
- Tier boundaries define natural API contracts between components, reducing integration friction
- AI / Intelligence Tier provides a clear template for designing LLM, RAG, and agent-based features
- Logging separation guidance ensures solution designs have complete observability without over-engineering
- Shared Infrastructure Layer means common concerns (auth, logging, config) are solved once and reused
- Enables faster design reviews: reviewers and designers share the same reference framework
For security architects:
- Authentication, coarse-grained AuthZ, and fine-grained data AuthZ are assigned to specific tiers — no gaps, no duplication
- AI Tier has a defined security responsibility: model-level access control, prompt injection defense, and output filtering
- Audit logging responsibilities are tier-specific, making compliance evidence collection straightforward
- Service Tier credential pass-through pattern prevents authorization logic from leaking into integration code
- Infrastructure Layer hosts shared security libraries, ensuring consistent encryption and token validation across all tiers
- Zero-trust and defense-in-depth principles map directly onto the tier security hierarchy
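The credential pass-through pattern named in the list above can be sketched in a few lines; the token format and lookup table are stand-ins, not part of the model:

```python
# Sketch: the Service Tier forwards the caller's credential as an opaque
# value; only the Recordkeeping Tier validates it and decides access.

VALID_TOKENS = {"tok-alice": "alice"}


def recordkeeping_read(token: str, record_id: str) -> str:
    user = VALID_TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid token")
    return f"{record_id} (read by {user})"


def service_tier_relay(token: str, record_id: str) -> str:
    # No decoding, no authorization decision: the token passes through
    # untouched, so no authorization logic can leak into integration code.
    return recordkeeping_read(token, record_id)


print(service_tier_relay("tok-alice", "r42"))
```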
For data architects:
- The Systems of Record principle eliminates data duplication and ensures a single authoritative source for every business entity
- AI-era data stores (Feature Store, Vector DB, Data Lakehouse, Embedding Store) are placed as first-class SoR extensions in the Recordkeeping Tier
- Training data lineage and AI data access audit are explicit Recordkeeping Tier responsibilities
- Consumer-side data (channel, session, clickstream) is clearly separated from authoritative Producer-side data
- Data Sub-Layer provides a defined location for data classification within the application architecture
- Supports data governance: every data store has a tier owner, making data stewardship assignments unambiguous
For infrastructure & platform architects:
- Infrastructure, Environment, and Platform Layers map directly to the technology stack each tier application depends on
- Shared libraries (JIL pattern) in the Infrastructure Layer provide the technical rationale for platform investment in common services
- AI-era Environment Layer additions (GPU / TPU clusters, ML platforms, Vector DB platforms) have a defined home
- Systems-level logging is explicitly the Platform Layer's responsibility, not individual application tiers
- Cloud AI services (SageMaker, Azure OpenAI, Vertex AI) are positioned in the Platform Layer as infrastructure, not application components
- MLOps / CI-CD pipeline is a shared Infrastructure Layer service, enabling consistent AI delivery tooling across all teams