Presentation Tier

The outermost tier, responsible for rendering the user interface across all devices and channels. It has no business logic and no data ownership; it only presents.

Device-Specific Rendering
What: Adapts the UI layout and behavior for the device type in use (desktop, tablet, mobile, kiosk).
Does: Applies responsive breakpoints, touch vs. mouse interaction models, and device-appropriate component sizing.
Why: Ensures a consistent, accessible experience regardless of how or where the user accesses the application.
Globalization / Localization
What: Displays text, dates, currencies, and formats according to the user's locale and language preference.
Does: Loads locale-specific resource bundles and formats all UI strings and values accordingly.
Why: Required for applications serving international users; ensures regulatory and cultural compliance.
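The bundle-loading pattern can be sketched as a locale-keyed lookup with a default fallback. This is a minimal illustration only; the bundle contents, locale codes, and helper names below are hypothetical, and real applications load bundles from resource files and use a full i18n library rather than bare format strings.

```python
from datetime import date

# Hypothetical in-memory resource bundles; real apps load these from files.
BUNDLES = {
    "en-US": {"greeting": "Welcome back", "date_fmt": "%m/%d/%Y", "currency": "${:,.2f}"},
    "de-DE": {"greeting": "Willkommen zurück", "date_fmt": "%d.%m.%Y", "currency": "{:,.2f} €"},
}

def localize(locale: str, key: str) -> str:
    """Look up a UI string in the caller's locale bundle, falling back to en-US."""
    bundle = BUNDLES.get(locale, BUNDLES["en-US"])
    return bundle.get(key, BUNDLES["en-US"][key])

def format_date(locale: str, d: date) -> str:
    return d.strftime(localize(locale, "date_fmt"))

def format_currency(locale: str, amount: float) -> str:
    # Simplified: real locale formatting also swaps digit-group separators.
    return localize(locale, "currency").format(amount)
```

The key design point is that UI code never hardcodes a string or format; it asks the bundle, so adding a locale means adding a bundle, not changing screens.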
Accessibility (A11y)
What: Ensures the UI meets accessibility standards (WCAG) for users with disabilities.
Does: Provides screen reader support, keyboard navigation, sufficient color contrast, and ARIA attributes.
Why: Legal requirement in most jurisdictions and a quality indicator for all users.
Conversational / Chat UI (NEW)
What: A chat-style interface component that renders natural language exchanges between user and AI.
Does: Displays threaded messages, typing indicators, and markdown-formatted AI responses in a conversational layout.
Why: The primary interface pattern for LLM-backed features; its rendering requirements are distinct from those of traditional form-based UIs.
Copilot & AI Widget Rendering (NEW)
What: Embeds AI-powered inline assistants, suggestions, and action panels within existing screens.
Does: Renders contextual AI overlays, sidebars, and inline suggestion components alongside core application content.
Why: Enables AI capabilities to augment existing workflows without requiring users to switch to a separate AI interface.
AI-Generated Content Display (NEW)
What: Renders content that was produced by an AI model, formatted appropriately for the screen context.
Does: Handles streaming text rendering, structured AI output (tables, lists, code blocks), and content provenance indicators.
Why: AI output has unique rendering needs (streaming, markdown, disclaimers) that differ from static application data.
Product Tier

Manages user interaction, authentication, coarse-grained authorization, and user activity logging. The boundary between what the user sees and what the back end produces.

User Authentication
What: Verifies the identity of the user attempting to access the application.
Does: Handles login flows, MFA, SSO integration, and session token issuance.
Why: The first security gate; establishing identity is the prerequisite for all downstream authorization decisions.
Coarse-Grained Authorization
What: Determines whether the authenticated user is permitted to access a product or feature at a high level.
Does: Enforces role-based access at the product/feature level (e.g., "can this user access the trading module at all?").
Why: Prevents unauthorized users from reaching the Service and Recordkeeping tiers; reduces back-end load from unauthorized requests.
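A coarse-grained check reduces to "does any of the user's roles grant this feature at all". The sketch below is illustrative; the role names, feature names, and the entitlement map are hypothetical, and in practice the map would come from an IAM or entitlement service rather than a literal dict.

```python
# Hypothetical role-to-feature entitlement map (would come from an IAM service).
FEATURE_ROLES: dict[str, set[str]] = {
    "trading": {"trader", "admin"},
    "reporting": {"analyst", "trader", "admin"},
}

def can_access_feature(user_roles: set[str], feature: str) -> bool:
    """Coarse-grained gate: any overlap between the user's roles and the
    roles the feature allows grants access; unknown features deny by default."""
    allowed = FEATURE_ROLES.get(feature, set())
    return bool(user_roles & allowed)
```

Note the deny-by-default for unknown features: a feature missing from the map is unreachable, which is the safe failure mode for a security gate.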
Session Establishment
What: Creates and manages the authenticated session after successful login.
Does: Issues session tokens, manages token refresh and expiry, and maintains session state.
Why: Enables stateful interaction without re-authenticating on every request; session security is foundational to application security.
User Activity Logging
What: Records what the user did in the UI: clicks, navigation, feature interactions.
Does: Emits interaction events to the logging infrastructure for analytics, UX research, and security review.
Why: Separated from transaction logging by design; user behavior data has different retention, access, and compliance requirements.
Consumer-Facing Dashboards
What: Displays summarized data and reports to the end user within the product experience.
Does: Renders data retrieved from the Recordkeeping and Reporting tiers in user-facing charts, tables, and summaries.
Why: Users need at-a-glance views of their data; this presentation logic belongs in the Product Tier, not embedded in back-end services.
AI Session Identity & Trust Token (NEW)
What: Extends the user session to include identity context for AI model requests.
Does: Attaches user identity and permission scope to AI inference requests so the AI Tier can enforce appropriate access.
Why: AI requests must carry verified identity to prevent prompt injection attacks that attempt to impersonate other users or elevate privileges.
Copilot Access Control (Coarse) (NEW)
What: Determines whether the user is permitted to use AI / copilot features at all.
Does: Checks role and entitlement before exposing AI-powered UI components; hides or disables them for unauthorized users.
Why: AI features may have licensing, compliance, or risk considerations that limit their availability to specific user segments.
AI Copilot Interaction Logging (NEW)
What: Records the user's AI interactions: prompts submitted, responses received, feedback given.
Does: Captures copilot usage events in the user activity log for analytics, compliance review, and UX improvement.
Why: AI interaction data requires its own logging category for model performance analysis, abuse detection, and regulatory review.
AI-Generated Consumer Insights (NEW)
What: Displays AI-produced summaries, recommendations, and insights within the user's dashboard.
Does: Renders personalized AI-generated content (e.g., "Based on your portfolio, consider...") alongside traditional data views.
Why: AI-generated insights must be rendered distinctly from authoritative data; users need to distinguish AI-produced content from system-of-record data.
AI / Intelligence Tier

An entirely new Producer-side tier introduced for the AI era. Positioned between the Product Tier and the Service Tier, it receives requests from Product, produces results via inference and orchestration, and calls downstream services. It does not own business logic or data.

LLM / Foundation Model Serving (NEW)
What: Hosts and exposes large language models or foundation models for application consumption.
Does: Manages model loading, GPU allocation, request queuing, and response delivery for inference requests.
Why: Centralizing model serving prevents each application team from independently managing expensive model infrastructure.
Inference Engine (NEW)
What: The runtime that executes model predictions and generates outputs from inputs.
Does: Handles batching, latency optimization, hardware acceleration (GPU/TPU), and response streaming.
Why: Efficient inference is critical for acceptable latency and cost; a dedicated engine enables tuning that application code cannot achieve.
RAG Pipeline (NEW)
What: Retrieval-Augmented Generation: retrieves relevant documents from a knowledge base and injects them into the model prompt.
Does: Converts queries to embeddings, searches the Vector DB for relevant content, and constructs enriched prompts for the LLM.
Why: Enables LLMs to answer questions grounded in proprietary or current data without requiring expensive model retraining.
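The retrieve-then-construct flow can be sketched end to end. Everything here is a stand-in: the knowledge base is two literal strings, and the bag-of-words "embedding" merely illustrates the similarity-search step that a real pipeline delegates to an embedding model and a vector database.

```python
import math
from collections import Counter

# Stand-in knowledge base; a real pipeline queries the Vector DB.
KNOWLEDGE_BASE = [
    "The trading module settles trades on a T+1 cycle.",
    "Password resets are handled by the identity service.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline calls an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank knowledge-base documents by similarity to the query."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Inject retrieved context into the prompt sent to the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The shape is what matters: the model never sees the whole corpus, only the top-k retrieved passages, which is how RAG grounds answers in proprietary data without retraining.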
Agent Orchestration (NEW)
What: Manages multi-step AI workflows where a model autonomously plans and executes sequences of actions.
Does: Coordinates tool calls, memory retrieval, decision branching, and result synthesis across multiple model invocations.
Why: Complex tasks (research, analysis, automation) require coordinated AI actions that go beyond a single prompt-response cycle.
Prompt Mgmt & Versioning (NEW)
What: Manages the library of prompt templates used by applications to communicate with LLMs.
Does: Stores, versions, and serves prompt templates; tracks which prompt version was used for each inference for auditability.
Why: Unmanaged prompts become a source of inconsistency and risk; versioning enables controlled changes and rollback.
Fine-Tuning Pipeline (NEW)
What: Adapts a pre-trained foundation model on domain-specific data to improve its performance for specialized tasks.
Does: Orchestrates training data preparation, model training runs, evaluation, and promotion to the model registry.
Why: General-purpose models often underperform on specialized domains; fine-tuning improves accuracy without building models from scratch.
Model Registry (NEW)
What: A versioned catalog of all trained and approved models available for deployment.
Does: Stores model artifacts, metadata, performance metrics, and approval status; governs which model versions are in production.
Why: Without a registry, model versions are untracked, rollbacks are risky, and governance is impossible.
Model-Level Access Control (NEW)
What: Enforces which users, roles, or services are permitted to invoke specific models or model capabilities.
Does: Validates the identity token from the Product Tier and applies model-level entitlement rules before serving inference.
Why: Different models carry different risk and cost profiles; access must be controlled to prevent misuse and cost overruns.
Prompt Injection Detection (NEW)
What: Detects and blocks malicious inputs designed to manipulate the AI model into ignoring its instructions.
Does: Scans incoming prompts for injection patterns, jailbreak attempts, and indirect injection from retrieved content.
Why: Prompt injection is a primary AI security threat; undetected attacks can cause data leakage, privilege escalation, or harmful outputs.
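The pattern-scanning step can be sketched as below. The patterns are illustrative only; production detectors layer trained classifiers and semantic checks on top of (or instead of) regex lists, since regex alone is trivially evaded.

```python
import re

# Illustrative injection signatures only; a real detector uses ML classifiers too.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system )?prompt",
]

def looks_like_injection(prompt: str) -> bool:
    """Flag a prompt if it matches any known injection signature.
    Applied to user input AND to content retrieved by the RAG pipeline,
    since indirect injection arrives through retrieved documents."""
    text = prompt.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A flagged prompt would typically be blocked or routed to review before it ever reaches the model.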
Output Filtering / Guardrails (NEW)
What: Reviews model outputs before delivery to detect harmful, biased, or policy-violating content.
Does: Applies content classifiers, toxicity filters, and business policy rules to all model responses before they reach the Product Tier.
Why: LLMs can produce unexpected outputs; guardrails are the last line of defense before AI content reaches users.
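The policy-rule portion of a guardrail can be sketched as a final check that either passes the response through or replaces it with a refusal. The blocked terms and refusal text are hypothetical; real guardrails combine classifiers with rules like these.

```python
# Hypothetical business policy terms; real systems use trained content classifiers.
BLOCKED_TERMS = {"ssn", "account password"}

def apply_guardrails(response: str) -> tuple[bool, str]:
    """Last check before a model response leaves the AI Tier.
    Returns (allowed, text): the original response if it passes,
    or a refusal message if it violates a policy rule."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False, "Response withheld: it violated an output policy."
    return True, response
```

The important property is that the check sits on the single exit path to the Product Tier, so no response can bypass it.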
Service Tier

The integration and routing tier. Routes requests to Recordkeeping, translates protocols, and provides reliable messaging. It does not own business logic — it moves and transforms, it does not decide.

Service Access
What: The entry point through which consumer-side tiers invoke back-end capabilities.
Does: Exposes service contracts (APIs), enforces service boundaries, and routes calls to the appropriate Recordkeeping service.
Why: A defined service access layer prevents consumer tiers from calling Recordkeeping directly, maintaining the separation of concerns.
Protocol Translation
What: Converts between different communication protocols (REST, SOAP, MQ, gRPC, FTP) at the integration layer.
Does: Transforms message formats and transport protocols so that consumer tiers and Recordkeeping systems can communicate regardless of their native protocols.
Why: Enterprise systems rarely speak the same protocol; translation here prevents format coupling between tiers.
Reliable Messaging / File Transfer
What: Provides guaranteed, ordered delivery of messages and files between systems.
Does: Uses message queues (MQ, Kafka) and managed file transfer to ensure no message is lost even if downstream systems are temporarily unavailable.
Why: Critical for transactional integrity; without reliable messaging, network failures cause data loss or double-processing.
Token / Credential Forwarding
What: Passes the authenticated user's identity token downstream to the Recordkeeping Tier.
Does: Propagates identity context through service calls so fine-grained authorization can be enforced at the data layer.
Why: The Recordkeeping Tier needs to know who is making the request to enforce row/field-level access; the Service Tier must not strip this context.
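Propagation can be sketched as copying the identity header onto the downstream call and refusing to forward without it. The header name and request shape are assumptions (bearer tokens in an `Authorization` header are common, but your shop may use signed claims or mTLS identity instead).

```python
def forward_with_identity(incoming_headers: dict[str, str],
                          downstream_request: dict) -> dict:
    """Copy the caller's identity token onto a downstream service call.
    Refuses to forward anonymously, so the Recordkeeping Tier can always
    enforce row/field-level access against a real identity."""
    token = incoming_headers.get("Authorization")
    if not token:
        raise PermissionError("refusing to forward a request without identity context")
    downstream_request.setdefault("headers", {})["Authorization"] = token
    return downstream_request
```

Failing closed here is deliberate: a dropped token should stop the call, not silently degrade to an anonymous data-layer request.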
AI API Gateway (NEW)
What: A specialized gateway managing traffic to and from external AI model providers and internal AI services.
Does: Handles rate limiting, cost management, failover between model providers, and request logging for all AI API traffic.
Why: AI API calls are expensive and latency-sensitive; a dedicated gateway provides cost control, resilience, and observability that general API gateways lack.
Event Streaming (NEW)
What: Real-time event distribution infrastructure (Kafka, Kinesis) enabling streaming data flows between tiers.
Does: Publishes and subscribes to event streams, enabling real-time data ingestion, AI model triggers, and event-driven architectures.
Why: AI and ML workloads often require real-time data; batch-oriented integration patterns are insufficient for streaming inference and feature computation.
Model Endpoint Routing (NEW)
What: Routes inference requests to the appropriate model endpoint based on request type, user context, or load.
Does: Performs intelligent routing between model versions, A/B test groups, and regional model deployments.
Why: Multiple model versions and deployment targets require dynamic routing to support canary releases and traffic splitting without application code changes.
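Canary traffic splitting is commonly done by hashing a stable key (such as the user ID) into weighted buckets, so each user consistently lands on the same model version. The endpoint names and 90/10 split below are hypothetical.

```python
import hashlib

# Hypothetical traffic split: 90% of users to the stable model, 10% to the canary.
ROUTES = [("model-v1", 90), ("model-v2-canary", 10)]

def route(user_id: str) -> str:
    """Deterministic per-user routing: hash the user into one of 100 buckets,
    then walk the cumulative weights. Same user, same model version, every call."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for endpoint, weight in ROUTES:
        cumulative += weight
        if bucket < cumulative:
            return endpoint
    return ROUTES[0][0]  # unreachable when weights sum to 100
```

Shifting traffic is then a config change to the weights, with no application code touched, which is the point of doing routing in this tier.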
Recordkeeping Tier

The authoritative tier. Owns all business logic, all systems of record, fine-grained data authorization, and transaction audit logging. Nothing in this tier is duplicated in any other tier.

Transaction Processing
What: Executes business transactions: the core operations that change the state of authoritative data.
Does: Applies business rules, validates inputs against domain logic, and commits or rolls back state changes atomically.
Why: Business logic must be centralized here to ensure consistent rule enforcement regardless of which product or channel initiated the transaction.
Data Integrity Enforcement
What: Ensures all data written to the system of record meets defined business and referential integrity rules.
Does: Validates constraints, enforces referential integrity, and rejects any data that violates the canonical data model.
Why: Data integrity must be enforced at the point of record, not in calling systems, to guarantee the SoR is always consistent.
Fine-Grained Authorization
What: Enforces data-level access control: which specific rows, fields, or records the authenticated user may read or write.
Does: Evaluates the user's identity token against data-level entitlement rules before returning or modifying any record.
Why: Coarse-grained feature access in the Product Tier is insufficient for data privacy; record-level control must be enforced at the data source.
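The contrast with the Product Tier's coarse check is that the filter here operates on individual rows. The sketch below is illustrative; entitlements, record shapes, and user names are hypothetical, and a real implementation pushes this filter into the query layer rather than post-filtering in code.

```python
# Illustrative record-level entitlements: which accounts each user may read.
ENTITLEMENTS: dict[str, set[str]] = {"alice": {"acct-1"}, "bob": {"acct-1", "acct-2"}}

RECORDS = [
    {"account": "acct-1", "balance": 100},
    {"account": "acct-2", "balance": 250},
]

def read_records(user: str) -> list[dict]:
    """Row-level filter at the data layer: only records the caller is
    entitled to ever leave this tier. Unknown users get nothing."""
    allowed = ENTITLEMENTS.get(user, set())
    return [r for r in RECORDS if r["account"] in allowed]
```

Because the filter runs at the source, every caller (including the AI Tier's RAG retrieval) is subject to it, which is exactly the guarantee the coarse-grained check cannot give.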
Transaction Audit Logging
What: Creates an immutable record of every data change: what changed, when, by whom, and from which system.
Does: Writes audit entries for every create, update, and delete operation against the system of record.
Why: Required for regulatory compliance, fraud investigation, and operational incident resolution; must be at the data layer to be authoritative.
Reporting & Analytics
What: Provides structured access to authoritative data for reporting and analytical consumption.
Does: Exposes read-optimized views of SoR data via data warehouses, data marts, and reporting APIs.
Why: Reports must be derived from the authoritative data source to be trustworthy; ad-hoc reporting against consumer-tier data produces unreliable results.
Feature Store (NEW)
What: A centralized repository of pre-computed ML features: the system of record for machine learning input data.
Does: Stores, versions, and serves feature vectors for both model training and real-time inference; ensures training-serving consistency.
Why: Without a feature store, teams independently recompute the same features inconsistently, causing training/serving skew and duplicated effort.
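The training-serving consistency guarantee comes from a single lookup path. The class and method names below are hypothetical; the point of the sketch is that both training jobs and online inference call the same `get_vector`, so a feature is computed once and read identically everywhere.

```python
class FeatureStore:
    """Minimal sketch of a feature store (hypothetical API): one write path,
    one read path, shared by training and real-time inference."""

    def __init__(self):
        self._features: dict[tuple[str, str], float] = {}

    def put(self, entity_id: str, name: str, value: float) -> None:
        """Store a pre-computed feature value for an entity (e.g., a customer)."""
        self._features[(entity_id, name)] = value

    def get_vector(self, entity_id: str, names: list[str]) -> list[float]:
        """Assemble a feature vector in a fixed column order; missing features
        default to 0.0. Training and serving both go through this method,
        which is what prevents training/serving skew."""
        return [self._features.get((entity_id, n), 0.0) for n in names]
```

A production store adds point-in-time correctness and versioning on top of this shape, but the shared read path is the core idea.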
Vector Database (NEW)
What: A specialized database that stores and retrieves high-dimensional vector embeddings by semantic similarity.
Does: Indexes document and data embeddings; performs approximate nearest-neighbor search for RAG and semantic retrieval use cases.
Why: Traditional databases cannot perform semantic search; a vector DB is the authoritative store for the AI tier's retrieval knowledge base.
Data Lakehouse (NEW)
What: A modern data architecture combining the scale of a data lake with the governance of a data warehouse.
Does: Stores structured and unstructured data at scale with ACID transaction support, schema enforcement, and direct ML training access.
Why: AI workloads require access to large, diverse datasets that traditional data warehouses cannot efficiently serve; the Lakehouse is the AI-era SoR for analytical and training data.
AI-Driven Transaction Audit (NEW)
What: Extends transaction audit logging to capture AI-influenced data changes and the model decisions that drove them.
Does: Records which model, which prompt version, and which inference result contributed to a data change for full explainability.
Why: Regulatory explainability requirements (e.g., GDPR right-to-explanation) demand that AI-driven decisions be traceable back to their model and data inputs.
AI Model Output Data AuthZ (NEW)
What: Enforces data-level authorization on records that AI models retrieve or reference during inference.
Does: Validates that AI-generated responses do not surface data the requesting user is not entitled to see, even if retrieved by the RAG pipeline.
Why: RAG pipelines can inadvertently retrieve and expose sensitive records; data authorization must be enforced at the Recordkeeping level for all access, including AI-driven access.
Infrastructure Layer

Domain-agnostic shared libraries and platform services consumed uniformly by every tier's applications via the JIL (Java I-Layer) pattern. No business logic belongs here. These are technical utilities — the plumbing every tier relies on.

Common Logging Libraries
What: Shared logging framework used by every tier to emit structured log events.
Does: Provides a consistent logging API, log format, and routing to the central logging infrastructure regardless of which tier emits the event.
Why: Inconsistent logging formats across tiers make log aggregation, searching, and alerting unreliable; a shared library enforces consistency.
Shared Security / Encryption Libs
What: Common cryptographic utilities used by all tiers for data encryption, token signing, and secure communication.
Does: Provides vetted, up-to-date implementations of encryption algorithms, key management interfaces, and TLS configuration.
Why: Security vulnerabilities in cryptographic code are catastrophic; all tiers must use the same reviewed library rather than independent implementations.
Configuration Service Client
What: Shared client library for reading application configuration from a central configuration service.
Does: Abstracts configuration retrieval (feature flags, connection strings, environment settings) behind a uniform API used by all tiers.
Why: Centralizing configuration management prevents environment-specific values from being hardcoded; enables runtime configuration changes without redeployment.
Service Client Stubs
What: Pre-built client libraries that each tier uses to call shared back-end services consistently.
Does: Encapsulates service endpoints, retry logic, circuit breakers, and serialization so consuming tiers don't implement these independently.
Why: Avoids duplicated integration code across tiers; when a service changes, only the stub needs updating.
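Of the behaviors a stub encapsulates, the circuit breaker is the least obvious, so here is a minimal sketch of one. The threshold and the absence of a half-open recovery state are simplifications; production stubs use a full breaker implementation.

```python
class CircuitBreaker:
    """Minimal circuit breaker as a service stub might embed it.
    Simplified: no half-open/recovery state, illustrative threshold."""

    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.failure_threshold

    def call(self, fn):
        """Invoke the downstream call; fail fast once too many calls have failed,
        sparing a struggling service a pile-on of doomed requests."""
        if self.open:
            raise RuntimeError("circuit open: failing fast instead of calling downstream")
        try:
            result = fn()
            self.failures = 0  # any success resets the count
            return result
        except Exception:
            self.failures += 1
            raise
```

Because this lives in the shared stub, every consuming tier gets identical failure behavior without writing any of it.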
Container Orchestration (Kubernetes)
What: The platform that schedules, scales, and manages containerized application workloads across all tiers.
Does: Automates deployment, scaling, self-healing, and service discovery for all tier applications running as containers.
Why: Modern distributed applications require automated lifecycle management; Kubernetes provides the operational foundation for all tier deployments.
Service Mesh (Istio)
What: Infrastructure layer that manages secure service-to-service communication, observability, and traffic control.
Does: Provides mutual TLS between services, traffic shaping, distributed tracing, and load balancing without application code changes.
Why: Security and observability concerns for inter-service communication should not be embedded in application code; the mesh handles them uniformly.
MLOps / CI-CD Shared Pipeline (NEW)
What: A shared continuous integration and delivery pipeline adapted for ML model development and deployment.
Does: Automates model training, testing, evaluation, approval gates, and deployment to the model registry and serving infrastructure.
Why: Without a shared MLOps pipeline, every team builds its own, creating inconsistent quality gates and governance gaps for AI models entering production.
AI Observability Libraries (NEW)
What: Shared instrumentation libraries that emit AI-specific telemetry: token usage, latency, model version, and quality signals.
Does: Instruments model calls across all tiers to produce consistent AI performance metrics, cost tracking, and anomaly signals.
Why: Standard observability tools don't capture AI-specific signals; a shared library ensures all teams emit comparable, aggregatable AI telemetry.
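A shared instrumentation library often takes the shape of a decorator that wraps every model call and emits the same fields everywhere. The field names, the in-memory `TELEMETRY` list, and the whitespace token count are all stand-ins; a real library ships events to a metrics backend and reads true token counts from the model response.

```python
import time
from functools import wraps

TELEMETRY: list[dict] = []  # stand-in; a real library ships events to a metrics backend

def observe_model_call(model_version: str):
    """Decorator emitting AI-specific telemetry for each model invocation.
    Field names are illustrative; the point is that every team that uses
    the decorator emits the same, aggregatable schema."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt: str) -> str:
            start = time.perf_counter()
            response = fn(prompt)
            TELEMETRY.append({
                "model_version": model_version,
                "latency_s": time.perf_counter() - start,
                "prompt_tokens": len(prompt.split()),       # crude whitespace proxy
                "completion_tokens": len(response.split()),
            })
            return response
        return wrapper
    return decorator
```

Teams instrument a call site by decorating it, so the telemetry schema is enforced by the library rather than by convention.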
AI Ethics & Bias Evaluation Framework (NEW)
What: A shared toolkit for evaluating AI models and outputs for bias, fairness, and ethical compliance.
Does: Provides standard bias metrics, fairness tests, and demographic parity checks that any team can run against their models as part of the MLOps pipeline.
Why: Ethics and bias evaluation must be systematic and consistent across all AI initiatives; a shared framework prevents each team from defining its own (or skipping it).
Model Governance Policy Engine (NEW)
What: Enforces enterprise policies governing which models can be deployed, to which environments, under what conditions.
Does: Evaluates models against a policy ruleset (risk classification, explainability requirements, performance thresholds) before permitting promotion to production.
Why: AI models carry unique operational and regulatory risks; governance enforcement at the infrastructure layer prevents non-compliant models from reaching production.