How We Got Here: Application Architecture¶
Arc: Platform · Eras covered: 5 · Timeline: ~2000-2025 · Read time: ~12 min
The Original Problem¶
In 2000, most web applications were monoliths — and that was fine. Your entire application was a single process: one codebase, one deployment, one database. The problem wasn't the monolith itself — it was what happened when the team grew from 5 to 50 developers and the codebase grew from 50,000 to 5,000,000 lines. Deployments took hours and broke constantly. A bug in the search feature took down the checkout flow. One team's database migration blocked every other team's release. The monolith wasn't the problem; the coupling was.
The history of application architecture is a search for boundaries — how to decompose a system so that teams can work independently without stepping on each other, while still delivering a coherent product.
Era 1: The Monolith (~2000-2008)¶
The Solution¶
The monolith was the default because it was the simplest thing that worked. One application server (Tomcat, IIS, Apache + mod_php), one database (Oracle, MySQL, PostgreSQL), one deployment. Frameworks like J2EE, Rails (2004), and Django (2005) made it easy to build feature-rich monolithic applications quickly.
What It Looked Like¶
# A typical Rails monolith (~2006)
myapp/
├── app/
│   ├── controllers/
│   │   ├── orders_controller.rb
│   │   ├── users_controller.rb
│   │   ├── products_controller.rb
│   │   ├── search_controller.rb
│   │   └── admin_controller.rb
│   ├── models/
│   │   ├── order.rb
│   │   ├── user.rb
│   │   └── product.rb
│   └── views/
├── db/
│   └── schema.rb       # one database, 200 tables
├── config/
│   └── database.yml    # postgres://prod-db:5432/myapp
└── Gemfile

# Deploy: cap deploy (Capistrano)
# All features, all routes, one process, one deploy
Why It Was Better¶
- Simple to develop, test, and deploy
- One codebase to understand, one debugger session to step through
- ACID transactions across the entire domain
- Rails, Django, and similar frameworks were incredibly productive
- IDE support was excellent (everything in one project)
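The ACID bullet above is worth making concrete: in a monolith with one database, a single transaction can span the whole domain. A minimal Python sketch using SQLite, with hypothetical orders and inventory tables:

```python
# Sketch: in a monolith, one ACID transaction spans the whole domain.
# The tables (orders, inventory) are illustrative, not from the article.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE inventory (sku TEXT PRIMARY KEY, stock INTEGER);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER);
    INSERT INTO inventory VALUES ('WIDGET', 10);
""")

def place_order(sku: str, qty: int) -> None:
    # Both writes commit or roll back together -- no saga needed.
    with conn:  # opens a transaction; commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE inventory SET stock = stock - ? WHERE sku = ? AND stock >= ?",
            (qty, sku, qty))
        if cur.rowcount == 0:
            raise ValueError("insufficient stock")
        conn.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", (sku, qty))

place_order("WIDGET", 3)
stock = conn.execute("SELECT stock FROM inventory").fetchone()[0]
print(stock)  # 7
```

Once the orders and inventory live in different services with different databases, this one-line guarantee disappears; that loss is the price paid in Era 3.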
Why It Wasn't Enough¶
- Scaling: the only options were vertical (a bigger server) or horizontal (copy everything)
- Coupling: changing the order model could break the search feature
- Deployment: the entire application was deployed for every change
- Team coordination: merge conflicts, deployment queues, "don't deploy, I'm testing"
- Technology lock-in: the whole app was in one language/framework
Legacy You'll Still See¶
Most existing applications are monoliths. Many should stay monoliths — the complexity of distributed systems is not justified for every team. The "monolith first" approach (Martin Fowler) is widely recommended. If you join a company, the odds are high that the core product is a monolith.
Era 2: Service-Oriented Architecture (SOA) (~2005-2012)¶
The Solution¶
SOA decomposed the monolith into services that communicated via standardized protocols (SOAP, WS-*, ESB). Each service owned a business capability. An Enterprise Service Bus (ESB) mediated communication, handled routing, and translated between protocols. IBM WebSphere, Oracle SOA Suite, and MuleSoft were the enterprise platforms.
What It Looked Like¶
<!-- ESB routing configuration (~2008) -->
<service name="OrderProcessing">
  <endpoint uri="http://order-service.internal:8080/soap"/>
  <routing>
    <route when="/Order/Priority = 'HIGH'" to="express-fulfillment"/>
    <route when="/Order/Priority = 'STANDARD'" to="standard-fulfillment"/>
  </routing>
  <transform>
    <xslt source="order-to-fulfillment.xsl"/>
  </transform>
</service>

<!-- WSDL contract — formal API definition -->
<wsdl:definitions name="OrderService">
  <wsdl:portType name="OrderPortType">
    <wsdl:operation name="CreateOrder">
      <wsdl:input message="CreateOrderRequest"/>
      <wsdl:output message="CreateOrderResponse"/>
    </wsdl:operation>
  </wsdl:portType>
</wsdl:definitions>
Why It Was Better¶
- Services had clear boundaries and contracts (WSDL)
- Reusability: the "order service" could be used by the web app, mobile app, and partner API
- Technology diversity: services could use different languages and databases
- The ESB provided routing, transformation, and protocol mediation
- Enterprise tooling for governance, monitoring, and management
Why It Wasn't Enough¶
- The ESB became a centralized bottleneck ("Enterprise Service Bus" → "Everything Stops at the Bus")
- SOAP/WS-* was complex, verbose, and slow
- Services were often too large — "order service" contained 100,000 lines
- Organizational politics: the "ESB team" became a gatekeeper
- Testing was difficult — ESB routing logic was opaque
- Vendor lock-in to expensive enterprise platforms
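The verbosity complaint is easy to demonstrate. This Python sketch builds the envelope a SOAP client had to send for a single CreateOrder call; the service namespace and element names are illustrative, not taken from a real WSDL:

```python
# Sketch: hand-building a SOAP envelope for one hypothetical CreateOrder call.
# The SOAP 1.1 envelope namespace is standard; the "ord" namespace is made up.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/orders"  # hypothetical service namespace

ET.register_namespace("soap", SOAP_NS)
ET.register_namespace("ord", SVC_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
req = ET.SubElement(body, f"{{{SVC_NS}}}CreateOrder")
ET.SubElement(req, f"{{{SVC_NS}}}Sku").text = "WIDGET"
ET.SubElement(req, f"{{{SVC_NS}}}Quantity").text = "3"

wire = ET.tostring(envelope, encoding="unicode")
print(wire)
```

Hundreds of namespace-qualified bytes to express what REST+JSON would write as `{"sku": "WIDGET", "qty": 3}` -- one reason the lightweight protocols of Era 3 displaced SOAP.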
Legacy You'll Still See¶
SOA persists in large enterprises, especially in banking, insurance, and government. ESBs (MuleSoft, IBM Integration Bus) are still in production. The WSDL/SOAP contracts are still consumed by legacy systems. If you work at a large financial institution, you will encounter SOA.
Era 3: Microservices (~2012-2018)¶
The Solution¶
Microservices (the term popularized by Martin Fowler and James Lewis, 2014) were SOA done right — or at least, done differently. Small, independently deployable services, each owning its own data, communicating via lightweight protocols (REST, messaging). No ESB. No shared database. Each service could be built, tested, deployed, and scaled independently by a small team.
What It Looked Like¶
# Microservices architecture (~2015)
# Each service: own repo, own database, own team, own deploy pipeline

user-service/          → PostgreSQL (users table only)
    POST /users
    GET  /users/{id}

order-service/         → PostgreSQL (orders, order_items)
    POST /orders
    GET  /orders/{id}
    # Calls user-service to validate user
    # Publishes OrderCreated event to message queue

inventory-service/     → MongoDB (inventory)
    GET /inventory/{sku}
    # Listens for OrderCreated events, reserves stock

payment-service/       → PostgreSQL (payments)
    POST /payments
    # Listens for OrderCreated events, charges payment

notification-service/  → Redis (templates, queues)
    # Listens for OrderCreated, PaymentProcessed events
    # Sends emails and push notifications

# Docker Compose for local development
services:
  user-service:
    build: ./user-service
    ports: ["8001:8000"]
    environment:
      DATABASE_URL: postgres://user-db:5432/users
  order-service:
    build: ./order-service
    ports: ["8002:8000"]
    environment:
      DATABASE_URL: postgres://order-db:5432/orders
      USER_SERVICE_URL: http://user-service:8000
      RABBITMQ_URL: amqp://rabbitmq:5672
Why It Was Better¶
- Independent deployment: change one service without touching others
- Team autonomy: each team owns their service end-to-end
- Technology freedom: use the best language/database for each problem
- Scalability: scale each service independently based on its load
- Fault isolation: one service failure doesn't cascade (with proper resilience)
Why It Wasn't Enough¶
- Distributed systems are fundamentally harder (network failures, partial failures, consistency)
- Data consistency across services required sagas, eventual consistency, and new patterns
- Operational overhead: 50 services = 50 CI pipelines, 50 monitoring configs, 50 deployments
- Debugging distributed requests required distributed tracing
- "Microservices tax" — the infrastructure investment before you can benefit
- Many organizations created "distributed monoliths" — coupled services that had to deploy together
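The eventual-consistency point can be made concrete with a toy version of the OrderCreated flow sketched earlier. This Python sketch uses a synchronous in-process pub/sub as a stand-in for RabbitMQ; a real broker delivers asynchronously, which is exactly where the consistency gaps come from:

```python
# Toy pub/sub stand-in for a message broker. Service and event names follow
# the architecture sketch above; the bus itself is illustrative.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)  # a real broker would deliver asynchronously

# inventory-service: owns its own store, reacts to OrderCreated
inventory = {"WIDGET": 10}
def reserve_stock(event):
    inventory[event["sku"]] -= event["qty"]
subscribe("OrderCreated", reserve_stock)

# payment-service: also reacts to OrderCreated, independently
payments = []
def charge(event):
    payments.append(event["order_id"])
subscribe("OrderCreated", charge)

# order-service: persists the order, then publishes the event.
# Note the gap: if the process dies between save and publish, the services
# disagree -- the failure mode that sagas and outbox tables exist to handle.
orders = {}
def create_order(order_id, sku, qty):
    orders[order_id] = {"sku": sku, "qty": qty}
    publish("OrderCreated", {"order_id": order_id, "sku": sku, "qty": qty})

create_order("o-1", "WIDGET", 3)
print(inventory["WIDGET"], payments)  # 7 ['o-1']
```

In the monolith of Era 1, the same flow was one ACID transaction; here it is three independent writes that are only eventually consistent.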
Legacy You'll Still See¶
Microservices are the dominant architecture for cloud-native applications. The pattern is well-understood but often poorly implemented. Many organizations are living with the consequences of premature decomposition — too many services, too small, with too much coupling between them.
Era 4: Micro-Frontends and Domain-Driven Design (~2018-2023)¶
The Solution¶
While backend microservices matured, the frontend remained a monolith. Micro-frontends (ThoughtWorks Technology Radar, 2016; widespread ~2018) applied the same decomposition to the UI. Each team owned a vertical slice: backend service + frontend component. Domain-Driven Design (Eric Evans, 2003, but mainstream adoption in microservices era ~2018) provided the intellectual framework for drawing service boundaries using bounded contexts.
What It Looked Like¶
// Module Federation (Webpack 5) — micro-frontend composition
// shell/webpack.config.js
new ModuleFederationPlugin({
  name: 'shell',
  remotes: {
    productCatalog: 'productCatalog@http://cdn.example.com/product/remoteEntry.js',
    cart: 'cart@http://cdn.example.com/cart/remoteEntry.js',
    userProfile: 'userProfile@http://cdn.example.com/user/remoteEntry.js',
  },
});

// Product team deploys their frontend independently
// Cart team deploys their frontend independently
// Shell app composes them at runtime
# DDD-aligned service boundaries

# Bounded Context: Order Management
order-service/
    # Aggregates: Order, OrderItem
    # Domain Events: OrderPlaced, OrderShipped, OrderCancelled
    # Anti-corruption Layer: translates User from user context

# Bounded Context: Inventory
inventory-service/
    # Aggregates: StockItem, Warehouse
    # Domain Events: StockReserved, StockDepleted
    # Consumes: OrderPlaced events from Order context
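The anti-corruption layer noted above is, at its core, a single translation boundary. A Python sketch (field names are illustrative) of the Order context converting the user context's payload into its own Customer model:

```python
# Sketch of an anti-corruption layer: the Order context translates the user
# context's representation into its own model instead of letting foreign
# fields leak in. All field names here are hypothetical.
from dataclasses import dataclass

# What the (hypothetical) user-service returns -- its model, its naming.
user_context_payload = {
    "id": "u-42",
    "display_name": "Ada Lovelace",
    "marketing_opt_in": True,   # irrelevant to ordering
    "password_hash": "...",     # must never cross the boundary
}

@dataclass(frozen=True)
class Customer:
    """The Order context's own view of a user: only what ordering needs."""
    customer_id: str
    name: str

def to_customer(payload: dict) -> Customer:
    """One translation point -- the only place to change when the
    user context's schema drifts."""
    return Customer(customer_id=payload["id"], name=payload["display_name"])

customer = to_customer(user_context_payload)
print(customer)  # Customer(customer_id='u-42', name='Ada Lovelace')
```

Without this layer, the user context's naming and churn would propagate straight into order code, which is precisely the coupling bounded contexts exist to prevent.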
Why It Was Better¶
- True vertical ownership: team owns frontend + backend + data
- DDD provided principled service boundaries (bounded contexts, not arbitrary splits)
- Micro-frontends eliminated frontend monolith bottleneck
- Independent deployment of UI components
- Organizational alignment: Conway's law worked for you, not against you
Why It Wasn't Enough¶
- Micro-frontend complexity: shared state, routing, CSS isolation, bundle sizes
- DDD requires deep domain expertise that many teams lack
- Boundary discovery is iterative — getting it wrong creates expensive coupling
- Runtime composition of micro-frontends has UX challenges (loading states, inconsistency)
- Organizational change (team topology) was harder than technical change
Legacy You'll Still See¶
DDD-aligned microservices are the current best practice for large systems. Micro-frontends are adopted at scale (IKEA, Spotify, Zalando) but not mainstream for smaller teams. Module Federation and single-spa are the leading technical approaches. Most teams are still struggling with the "right size" for services.
Era 5: Platform Engineering and Modular Monoliths (~2022-2025)¶
The Solution¶
The pendulum swung back. The industry recognized that many teams had over-decomposed into too many microservices too early. The "modular monolith" (a single deployable with well-defined internal module boundaries) emerged as a pragmatic alternative. Platform engineering teams built golden paths that handled the infrastructure complexity, letting developers focus on domain logic regardless of whether it was a monolith or microservices.
What It Looked Like¶
// Modular monolith — Spring Modulith (2023)
// Single deployment, but with enforced module boundaries

// order/package-info.java — the annotation lives on the package
@ApplicationModule(
    allowedDependencies = {"user", "shared"}
)
package com.example.order;

// Module boundaries verified by a test:
//   ApplicationModules.of(Application.class).verify()
// Internal types are not accessible from other modules
// Communication between modules via events (not direct method calls)
@Service
class OrderService {

    private final OrderRepository orderRepository;
    private final ApplicationEventPublisher events;

    OrderService(OrderRepository orderRepository, ApplicationEventPublisher events) {
        this.orderRepository = orderRepository;
        this.events = events;
    }

    public Order createOrder(CreateOrderCommand cmd) {
        Order order = new Order(cmd);
        orderRepository.save(order);
        events.publishEvent(new OrderCreated(order.id()));
        return order;
    }
}

// Can be extracted to a microservice later if needed
// Module boundary is already clean
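The verification idea translates outside the JVM too. This Python sketch does a toy version of a module-dependency check over inline module sources using `ast`; the module names and the allowed-dependency map are illustrative:

```python
# Toy module-boundary checker: parse each module's imports and flag any
# dependency not in the allowed map. Everything here is illustrative.
import ast

ALLOWED = {
    "order": {"user", "shared"},
    "user": {"shared"},
    "shared": set(),
}

# Inline stand-ins for app/order/service.py etc.
MODULE_SOURCES = {
    "order": "from app.user import UserId\nfrom app.shared import Event\n",
    "user": "from app.shared import Event\n",
    "shared": "",
}

def violations(sources: dict) -> list:
    """Return 'module -> dependency' strings for disallowed imports."""
    found = []
    for module, source in sources.items():
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.ImportFrom) and node.module:
                parts = node.module.split(".")
                if parts[0] == "app" and len(parts) > 1:
                    dep = parts[1]
                    if dep != module and dep not in ALLOWED[module]:
                        found.append(f"{module} -> {dep}")
    return found

print(violations(MODULE_SOURCES))  # [] -- boundaries hold
MODULE_SOURCES["user"] += "from app.order import Order\n"
print(violations(MODULE_SOURCES))  # ['user -> order']
```

Running such a check in CI gives a monolith the boundary discipline the bullet list below says it otherwise loses.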
# Platform engineering abstraction
# Developer doesn't care about infrastructure topology
# score.yaml — describe what you need
apiVersion: score.dev/v1b1
metadata:
  name: order-service
containers:
  main:
    image: .
    variables:
      DB_HOST: ${resources.db.host}
resources:
  db:
    type: postgres
  queue:
    type: rabbitmq
# Platform decides: monolith module? microservice?
# container? serverless function?
# Developer doesn't need to know.
Why It Was Better¶
- Pragmatic: get monolith simplicity with module boundary discipline
- Extractable: modular monolith modules can become microservices when justified
- Lower operational cost: one deployment, one database (initially)
- Platform engineering handles infrastructure decisions
- Focus on domain boundaries, not infrastructure topology
Why It Wasn't Enough¶
- Modular monolith requires discipline (module boundaries degrade without enforcement)
- Platform engineering is expensive (dedicated team, multi-year investment)
- Tooling for modular monoliths is immature compared to microservices tooling
- "When to extract" decisions are still judgment calls
- Organizational scaling still eventually requires service decomposition
Legacy You'll Still See¶
This is the current frontier. The "monolith first, extract when proven necessary" approach is the emerging consensus. Spring Modulith, NestJS modules, and similar frameworks support the modular monolith pattern. Platform engineering is the fastest-growing discipline in DevOps.
Where We Are Now¶
The industry has moved past the "microservices for everything" peak. The consensus is emerging: start with a modular monolith, decompose into services only when organizational scaling or technical requirements demand it. DDD provides the framework for drawing boundaries. Platform engineering provides the infrastructure abstraction. The architecture should match the team topology, not the other way around.
Where It's Going¶
The distinction between "monolith" and "microservices" will blur further. Platforms will allow teams to write domain logic without choosing a deployment topology upfront. WebAssembly components may enable service-like isolation within a single process. AI-assisted architecture analysis may help teams identify the right decomposition points. The pendulum will keep swinging, but the midpoint is moving toward pragmatism.
The Pattern¶
The history of application architecture is a pendulum between centralization (simple, coupled) and decentralization (complex, independent). Every swing teaches the industry where the boundaries should be. The answer is never "everything in one place" or "everything separate" — it's "separate where the organizational and technical boundaries align."
Key Takeaway for Practitioners¶
Don't choose your architecture based on what FAANG companies use. Choose it based on your team size, your domain complexity, and your operational capacity. A well-structured monolith outperforms a poorly-structured microservices architecture. The right architecture is the one your team can build, deploy, and debug at 2 AM.
Cross-References¶
- Topic Packs: Microservices, Domain-Driven Design
- Tool Comparisons: Monolith vs Microservices Decision Matrix
- Evolution Guides: Service Communication, CI/CD Evolution