
Event Modelling: A Complete Guide for Solution Architects

Santosh Pradhan · March 20, 2026

One of the most persistent problems in complex system design is not technical debt — it is information asymmetry. Teams build what they understood, not what was needed. Requirements drift. Edge cases surface in production. Event Modelling is a methodology that addresses this directly, by forcing a shared, visual specification of how information moves through a system over time. This guide covers Event Modelling (also written as Event Modeling in American English) from first principles — what it is, how to apply it, and why it belongs in every solution architect's toolkit.

Event Modelling is a method of describing systems using a concrete example of how information changes within them over time. Rather than describing a system through abstract data models or process flows, Event Modelling places the full lifecycle of information — from user intent, through system state change, to derived read models — on a single, readable timeline called the blueprint. Every stakeholder, from product owner to backend engineer, reads the same artefact. There is no translation layer, no interpretive gap.

The methodology was created by Adam Dymitruk, founder of Adaptech Group, and introduced publicly in a landmark post published on 23 June 2019. Dymitruk drew on years of experience in the CQRS (Command Query Responsibility Segregation) and Event Sourcing communities, synthesising patterns that had proved effective in practice into a coherent, teachable design process. The timing was deliberate: by 2019, storage costs had fallen so dramatically that preserving complete event histories was no longer a luxury. Systems could afford to remember everything. Event Modelling gives teams a structured way to design systems that take full advantage of that capability.

The methodology has since gained adoption across software engineering and enterprise architecture communities worldwide, with workshops, a dedicated book (Event Modeling and Event Sourcing on Leanpub), a Discord community, and a growing library of real-world case studies. For architects designing event-driven systems in domains as varied as fintech, healthcare, and marketing technology, Event Modelling provides a rigour that whiteboard sketches and user story backlogs simply cannot match.

Why Event Modelling Matters: The Problems It Solves

Traditional system design approaches — UML diagrams, entity-relationship models, user story maps — each capture a partial view of a system. A data model shows structure but not behaviour. A user story shows intent but not information flow. A process flow shows sequence but not what data enters and exits at each step. The result is what Dymitruk calls the forgotten information problem: critical details about where data comes from, how it is transformed, and where it ends up are never made explicit. They live in people's heads, in meeting notes, in Slack threads — everywhere except the specification.

This produces predictable failure modes: developers implement features that cannot be wired together because nobody mapped the data dependencies; QA raises bugs that were actually design ambiguities; and product teams discover, only after delivery, that a workflow they assumed was straightforward requires a third-party API call that was never modelled. Each of these failure modes is expensive. Event Modelling eliminates them by requiring information completeness: every field in every view must be traceable to an event, and every event must be reachable through a command. The blueprint accounts for all information. Nothing is assumed.

The secondary problem Event Modelling solves is team disconnection. In large organisations, business analysts, architects, developers, and QA engineers rarely work from the same artefact. Event Modelling produces a single, persistent blueprint that all roles can read and contribute to simultaneously. Business stakeholders see the user-facing wireframes at the top of the diagram. Engineers see the commands and event structures below. Everyone sees how they connect. The shared visual language removes the translation overhead that causes so much rework.

Finally, Event Modelling addresses estimation unreliability. Because each workflow step — called a slice — is the smallest independently implementable unit, teams can measure velocity against slices directly. The cost curve of adding features remains flat: completing one slice does not require reworking another. This predictability is rare in software delivery and represents one of Event Modelling's most commercially significant benefits.

The Core Concepts of Event Modelling

Event Modelling uses a deliberately small vocabulary. There are four building blocks, four structural patterns, and one overarching format — the blueprint — tied together by the Given/When/Then specification pattern. Mastering the building blocks takes an afternoon. Applying them fluently takes practice.

Events

An event is a fact that has already occurred. It is named in the past tense, it is immutable, and it represents a state change that has been persisted to the system's store. "ConsentCaptured", "EmailActivated", "OrderPlaced", "CustomerSegmentUpdated" — these are events. Events describe what the business cares about, not what the database does. Only state-changing occurrences qualify; a user viewing a page is not an event. An event always carries realistic, concrete data: not just the fact that consent was captured, but the customer ID, the timestamp, the consent category, and the channel through which it was given. Events are the connective tissue of the blueprint. Everything else — commands and views — either produces or consumes them.
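As a minimal sketch, an event can be represented as an immutable record. The Python example below is illustrative only — the field names and the `ConsentCaptured` shape are assumptions for this article's consent example, not a prescribed Event Modelling schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative event record. frozen=True enforces immutability:
# once written, the fact cannot be changed.
@dataclass(frozen=True)
class ConsentCaptured:
    customer_id: str
    category: str          # e.g. "marketing", "transactional", "third-party"
    channel: str           # e.g. "web", "mobile", "point-of-sale"
    captured_at: datetime

# Events carry realistic, concrete data, not just the bare fact.
event = ConsentCaptured(
    customer_id="CUS-00123",
    category="marketing",
    channel="web",
    captured_at=datetime(2026, 3, 20, 10, 15, tzinfo=timezone.utc),
)
```

Any attempt to assign to a field of a frozen dataclass raises `dataclasses.FrozenInstanceError`, mirroring the rule that events are never updated in place.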

Commands

A command represents a user's or system's intention to change state. It is written in the imperative: "CaptureConsent", "ActivateEmail", "PlaceOrder". A command carries all the information needed to produce one or more events. It is the mechanism through which the outside world — a human clicking a button, an automated process firing, an external API calling in — interacts with the system. Commands are not guaranteed to succeed; a command may be rejected if business rules are violated. The blueprint shows both the happy path (command succeeds, event is written) and the exception paths (command rejected, error event or no event written).
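A command handler can be sketched as a function that either returns new events or rejects the command. The duplicate-consent rule below is a hypothetical business rule, assumed purely for illustration:

```python
# Hypothetical command handler: returns the events a successful command
# produces, or raises to signal rejection by a business rule.
def handle_capture_consent(command: dict, existing_events: list[dict]) -> list[dict]:
    already_active = any(
        e["type"] == "ConsentCaptured"
        and e["customerId"] == command["customerId"]
        and e["category"] == command["category"]
        for e in existing_events
    )
    if already_active:
        # Exception path: command rejected, no event written.
        raise ValueError("consent already captured for this category")
    # Happy path: the command's intent becomes a recorded fact.
    return [{
        "type": "ConsentCaptured",
        "customerId": command["customerId"],
        "category": command["category"],
        "channel": command["channel"],
    }]
```

Note that the command ("CaptureConsent", imperative) and the event ("ConsentCaptured", past tense) are distinct: the former is an intention that may fail, the latter a fact that has happened.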

Views (Read Models)

A view — sometimes called a read model — is a projection of events into a queryable structure. It exists to answer a specific question from a specific actor at a specific point in the workflow. Views derive entirely from events; they are never updated directly. A "CustomerConsentProfile" view might aggregate multiple ConsentCaptured and ConsentWithdrawn events into a single current-state representation that a campaign engine can query before sending an email. Views make the information stored in events accessible in the shape the UI or downstream process actually needs. A system may have many views over the same events, each tailored to a different consumer. This is the core insight of CQRS: reads and writes are separate concerns, optimised independently.
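A view can be sketched as a fold over the event stream. The event and field names below are assumptions carried over from the consent example, not a fixed API:

```python
# Illustrative read model: project ConsentCaptured / ConsentWithdrawn
# events into the current consent state for one customer.
def customer_consent_profile(events: list[dict], customer_id: str) -> dict:
    active: set[str] = set()
    for e in events:
        if e.get("customerId") != customer_id:
            continue
        if e["type"] == "ConsentCaptured":
            active.add(e["category"])
        elif e["type"] == "ConsentWithdrawn":
            active.discard(e["category"])
    # The view is derived entirely from events; it is never written directly.
    return {"customerId": customer_id, "activeConsents": sorted(active)}
```

Because the view is pure derivation, it can be rebuilt from scratch at any time, and several differently shaped views can project the same events for different consumers.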

Triggers

A trigger is what initiates a command. It can be a human interacting with a UI, an automated process, or an external system calling a public API. In the blueprint, triggers are represented as wireframes or mockups at the top of the diagram. They provide the user experience context that anchors the technical specification to actual human workflow. The trigger is the "why" that precedes every command.

The Blueprint (Timeline)

The blueprint is the full Event Modelling artefact: a horizontal timeline, read left to right, showing the complete lifecycle of the system as a narrative. Triggers and wireframes occupy the top swim lane. Commands and events occupy the middle. Views and their connections back to the UI occupy the lower area. The blueprint is read like a storyboard — each column represents a moment in time, and the flow of information from left to right tells the story of what the system does and how data moves through it.

Given/When/Then

Every workflow slice in Event Modelling is specified using the Given/When/Then pattern, borrowed from Behaviour-Driven Development (BDD). Given a set of pre-existing events (the system's current state), When a command is issued (the action), Then a specific new event or set of events is produced (the outcome). This triple serves as both the specification and the acceptance test. A view is specified similarly: Given a set of events, When the view is queried, Then a specific data structure is returned. Given/When/Then makes the specification executable. There is no ambiguity in what "done" means for any slice.
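The triple translates directly into an executable test. In the sketch below, `decide` is a hypothetical stand-in for the slice's command handler, with an assumed rule that consent capture requires an existing customer profile:

```python
# Given/When/Then expressed as an executable test. decide() is a
# hypothetical stand-in for the slice's command handler.
def decide(given_events: list[dict], command: dict) -> list[dict]:
    # Illustrative rule: capturing consent requires an existing profile.
    if not any(e["type"] == "CustomerProfileCreated" for e in given_events):
        return []  # command rejected: no events written
    return [{"type": "ConsentCaptured", **command}]

# Given: the system's prior events (its current state)
given = [{"type": "CustomerProfileCreated", "customerId": "CUS-00123"}]
# When: the command is issued
when = {"customerId": "CUS-00123", "category": "marketing"}
# Then: exactly these new events are produced
then = decide(given, when)
assert then == [{"type": "ConsentCaptured",
                 "customerId": "CUS-00123", "category": "marketing"}]
```

The same assertion doubles as the acceptance test: the slice is "done" precisely when this check passes.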

The Four Workflow Steps

Event Modelling workshops follow a structured four-step process. The steps are sequential but iterative — teams will loop back as understanding deepens.

Step 1: Brainstorm

All participants — business, product, engineering, QA — collectively list every state-changing event they can imagine occurring in the system. No filtering, no prioritisation. The goal is breadth. Events are written on cards (physical or digital) and placed on a surface without order. This step surfaces hidden assumptions quickly: when a business analyst writes "CustomerApproved" and an engineer writes "AccountActivated" for what they think is the same event, the conversation that follows reveals a genuine ambiguity in the domain model.

Step 2: Plot the Line

Events are arranged chronologically into a plausible narrative — a storyline that a real user or system could follow. This is not a happy-path-only exercise; edge cases and alternative flows appear in parallel lanes. The plot step forces the team to ask: in what order do things actually happen? Can this event occur before that one? What state does the system need to be in for this event to be valid? Gaps in the storyline become visible, prompting discovery of missing events or overlooked scenarios.

Step 3: Identify Workflows (Commands and Views)

With the event timeline established, the team identifies the commands that produce each event and the views that consume each event. Wireframes or mockups are placed above the timeline to represent the user interface context. For each UI action, a command is identified. For each screen or report that displays data, a view is identified. This step wires the full information loop: UI → Command → Event → View → UI. The blueprint begins to take shape as a complete, connected specification.

Step 4: Elaborate (Given/When/Then)

Each slice — each command-event pair and each event-view pair — is elaborated with concrete Given/When/Then scenarios. Realistic data is used throughout: actual customer IDs, real email addresses, meaningful timestamps. This step transforms the visual blueprint into a set of executable specifications. It also reveals the edge cases that the plot step left implicit: what happens if the command is issued twice? What if the pre-condition events are missing? What is the view's behaviour when no events have yet been written? Elaboration is where the specification becomes precise enough to implement directly.

The Blueprint Pattern in Depth

The blueprint is the centrepiece of Event Modelling and the primary reason the methodology works as well as it does. It functions as a storyboard for a system — the same technique used in film production to sequence narrative before committing to expensive shooting. Like a film storyboard, the Event Modelling blueprint makes the full story visible before any code is written.

The blueprint is organised into horizontal swim lanes, each representing a different concern:

  • UX lane (top) — Wireframes and mockups representing the user interface at each moment in the workflow. These are intentionally lo-fi; the goal is to show what information is present on screen, not to design the final UI.
  • Logic lane (middle) — Commands (blue) and Events (yellow/orange) representing the write side of the system. This lane shows what actions are taken and what facts are recorded.
  • Data/Storage lane (lower) — Views/Read Models (green) showing how stored events are projected into queryable structures and fed back to the UX layer.

The colour coding is intentional and consistent across all Event Modelling practitioners: white or grey for triggers, blue for commands, yellow (or orange) for events, green for views, and a robot icon for automated triggers. This shared visual grammar means any practitioner can pick up any blueprint and immediately understand its structure.

The blueprint is read left to right as a narrative. At any column in the diagram, you can ask: what does the user see? What action did they just take? What event was recorded? What can they see next? This narrative quality is what makes the blueprint useful to non-technical stakeholders. A marketing operations manager reviewing a consent capture flow can follow the blueprint and identify, for example, that the view feeding the email activation screen does not yet include the consent category field — a gap that would not surface until QA testing if the specification had been written in prose.

The blueprint also supports four structural patterns that cover all system behaviours:

  • Command pattern — Trigger → Command → Event(s). The standard write flow: a user action produces a command that produces one or more events.
  • View pattern — Event(s) → View. The standard read flow: stored events are projected into a view consumed by the UI.
  • Automation pattern — Event(s) → View → Automated Trigger → Command → Event(s). A processor monitors a "to-do list" view and fires commands automatically when conditions are met. Used for workflows that proceed without human intervention.
  • Translation pattern — Event(s) from a source system → View → Automated Trigger → Command → Event(s) in the target system. Used for integration: consuming events from an external system and converting them into the target system's domain events.

Together, these four patterns cover human-initiated workflows, automated processes, and cross-system integrations — the full surface area of modern distributed systems.
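The automation pattern, for instance, can be sketched as a processor that reads a "to-do list" view and issues one command per pending item. All event and command names below are illustrative:

```python
# Sketch of the automation pattern: Event(s) -> View -> Automated Trigger
# -> Command. Names are illustrative, not a prescribed schema.
def pending_activations(events: list[dict]) -> list[str]:
    """View: customers with consent captured but email not yet activated."""
    captured = {e["customerId"] for e in events if e["type"] == "ConsentCaptured"}
    activated = {e["customerId"] for e in events if e["type"] == "EmailActivated"}
    return sorted(captured - activated)

def run_activation_processor(events: list[dict]) -> list[dict]:
    """Automated trigger: emit an ActivateEmail command per to-do item."""
    return [{"type": "ActivateEmail", "customerId": cid}
            for cid in pending_activations(events)]
```

Each emitted command then flows through the ordinary command pattern, producing the events that, in turn, empty the to-do list view.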

Event Modelling vs Event Storming

A common question: What is the difference between Event Modelling and Event Storming? They share vocabulary — both use events, both involve cross-functional groups — but they serve different purposes and should not be conflated.

Event Storming, created by Alberto Brandolini, is a discovery workshop. It is designed to be run in a room (or virtual equivalent) over a few hours, with sticky notes, a long wall, and a heterogeneous group of participants. Its goal is exploration: surface domain events, identify bounded contexts, discover hotspots and ambiguities. Event Storming is deliberately open-ended and fast. It produces a snapshot of collective understanding at a moment in time, and that snapshot is typically discarded or summarised once the discovery phase is complete.

Event Modelling is a specification tool. It produces a persistent artefact — the blueprint — that serves as the living specification of the system throughout its development and evolution. Where Event Storming is broad and exploratory, Event Modelling is precise and complete. Where Event Storming is facilitated as a time-boxed exercise, Event Modelling is an ongoing discipline: the blueprint is updated as the system changes, kept under version control alongside the code, and referenced throughout the delivery lifecycle.

The two are complementary, not competing. A typical engagement might begin with an Event Storming session to rapidly discover the domain, then transition to Event Modelling to produce the precise specification required for implementation. Event Storming gives you the vocabulary and the rough map; Event Modelling gives you the detailed blueprint. Using one without the other is possible, but using both in sequence produces the best outcomes: the discovery speed of Event Storming combined with the specification rigour of Event Modelling.

The key practical difference for architects is durability. An Event Storming wall photograph is a historical artefact. An Event Modelling blueprint is a living document. For programmes that span months or years — the norm in enterprise MarTech — Event Modelling's persistence is not a nice-to-have; it is essential.

Event Modelling in Practice: A MarTech Example

To make the methodology concrete, consider a common MarTech domain challenge: modelling a customer consent capture and email activation flow. This is a workflow every enterprise marketing platform must handle correctly — both for regulatory compliance (GDPR, CCPA) and for campaign effectiveness.

The complete Event Modelling blueprint for this flow reads left to right across eight phases. Its five horizontal layers follow the canonical blueprint structure: UI triggers at the top, commands and events in the middle, read models below, and automation processors at the bottom.

[Diagram: Event Modelling blueprint for the consent capture and email activation flow, showing Command, Event, Read Model, and Processor elements]

Step 1: Brainstorm

The team — product owner, MarTech architect, email platform engineer, data privacy officer — writes every event they can think of:

  • ConsentCaptured
  • ConsentWithdrawn
  • ConsentExpired
  • CustomerProfileCreated
  • EmailActivated
  • EmailDeactivated
  • CampaignEnrolmentRequested
  • CampaignEnrolmentApproved
  • CampaignEnrolmentRejected
  • EmailSent
  • EmailBounced
  • UnsubscribeRequested

Immediately, two conversations surface: Does "ConsentWithdrawn" automatically trigger "EmailDeactivated", or are they independent events? And is "CampaignEnrolmentRejected" a business event or a system error? These questions, answered at this stage, prevent production defects later.

Step 2: Plot the Line

The team arranges events into a chronological narrative. A primary storyline emerges:

  • CustomerProfileCreated → ConsentCaptured → EmailActivated → CampaignEnrolmentRequested → CampaignEnrolmentApproved → EmailSent

Parallel lanes handle edge cases: ConsentWithdrawn before EmailSent (campaign must be suppressed); EmailBounced after EmailSent (profile update required); UnsubscribeRequested (ConsentWithdrawn event written, EmailDeactivated follows via automation). Plotting the line reveals that "ConsentExpired" was missing from the initial story — a significant gap given GDPR's time-bound consent requirements.

Step 3: Identify Workflows

For each event, the team identifies the command that produces it and the view that consumes it:

  • CaptureConsent command (issued from a consent capture form UI) → ConsentCaptured event → CustomerConsentProfile view (consumed by the campaign eligibility check screen)
  • ActivateEmail command (issued automatically by an automation processor checking the CustomerConsentProfile view) → EmailActivated event → ActiveEmailRecipients view (consumed by the campaign delivery engine)
  • EnrolInCampaign command (issued by the campaign manager UI) → CampaignEnrolmentRequested event → PendingEnrolments view (consumed by the approval automation processor)

The wireframes placed above the timeline show, for example, the consent capture form with its specific fields: email address, consent category (marketing, transactional, third-party), channel (web, mobile, point-of-sale), and timestamp. Every field on the wireframe must appear in the ConsentCaptured event. If it does not, the information completeness principle is violated, and the view that needs it will have no source.
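That traceability rule is mechanically checkable: the set of fields on a wireframe (or view) must be a subset of the fields on its source event. A minimal sketch, with hypothetical field names:

```python
# Information-completeness check: every field shown on a wireframe must be
# traceable to a field on the event that feeds it. Field names are illustrative.
def completeness_gaps(wireframe_fields: set[str], event_fields: set[str]) -> set[str]:
    """Return wireframe fields with no source in the event (should be empty)."""
    return wireframe_fields - event_fields

gaps = completeness_gaps(
    {"emailAddress", "category", "channel", "capturedAt"},
    {"customerId", "emailAddress", "category", "channel", "capturedAt"},
)
# An empty result means the slice satisfies information completeness.
```

A non-empty result is exactly the kind of gap — a view field with no sourcing event — that the blueprint review is designed to catch before implementation.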

Step 4: Elaborate (Given/When/Then)

Each slice is specified precisely. For the CaptureConsent → ConsentCaptured slice:

  • Given: CustomerProfileCreated event exists for customer ID CUS-00123 (email: anna.mueller@example.com)
  • When: CaptureConsent command is issued with { customerId: "CUS-00123", category: "marketing", channel: "web", capturedAt: "2026-03-20T10:15:00Z" }
  • Then: ConsentCaptured event is written with { customerId: "CUS-00123", category: "marketing", channel: "web", capturedAt: "2026-03-20T10:15:00Z", consentId: "CNS-00456" }

For the CustomerConsentProfile view:

  • Given: ConsentCaptured event exists for CUS-00123, category: "marketing"
  • When: CustomerConsentProfile view is queried for CUS-00123
  • Then: View returns { customerId: "CUS-00123", activeConsents: ["marketing"], emailStatus: "pending_activation" }

This level of precision eliminates ambiguity entirely. The engineer implementing the view knows exactly what it must return. The QA engineer knows exactly what to test. The data privacy officer can audit the specification to confirm that consent data is handled correctly at every step.

Best Practices for Event Modelling

The following practices reflect lessons from applying Event Modelling across enterprise programmes. Each one addresses a specific failure mode.

  • Use past tense for all event names. "ConsentCaptured", not "CaptureConsent" or "ConsentCapture". Past tense signals immutability — the fact has occurred and cannot be undone. This naming discipline prevents a common error where events are named like commands or database operations.

  • Keep events atomic and business-meaningful. An event should represent a single, coherent business fact. "CustomerConsentAndProfileUpdated" is not an event; it is two events collapsed into one, obscuring the business significance of each. Separate them.

  • One command maps to a clear, defined set of events. A command should not silently produce events that are not visible in the blueprint. If CaptureConsent also produces a CustomerProfileUpdated event, that event must appear in the blueprint. Hidden side effects are the enemy of information completeness.

  • Write Given/When/Then for every slice before any code is written. The specification is the contract. If the Given/When/Then is not written, the slice is not specified, and "done" has no agreed definition. This is the single highest-leverage practice in Event Modelling.

  • Involve business and technical stakeholders from the first workshop. Event Modelling's vocabulary is accessible to non-technical participants. A product owner who cannot follow a UML sequence diagram can follow an Event Modelling blueprint. Use this. The most valuable discoveries happen when the person who knows the business rule is in the room with the person implementing it.

  • Start with the happy path, then add edge cases as parallel lanes. Trying to model all exception paths simultaneously overwhelms the workshop. Establish the primary narrative first — the flow that works — then layer in the alternatives. Edge cases become visible naturally as participants ask "but what if...?" during the plot step.

  • Events are immutable facts. Never update or delete them. If a business fact changes — a customer updates their address — model it as a new event ("AddressUpdated"), not a mutation of the original ("AddressRegistered"). This is the foundational principle of Event Sourcing, and Event Modelling inherits it. An audit trail that can be rewritten is not an audit trail.

  • Name events after their business significance, not their technical implementation. "OrderFulfilmentRecordInserted" is a database operation. "OrderFulfilled" is a business event. The difference matters enormously when the database implementation changes but the business fact remains the same.

  • Validate the blueprint by reading it aloud as a story. If you cannot narrate the blueprint coherently from left to right — "The customer fills in the consent form. The CaptureConsent command is issued. The system records ConsentCaptured. The CustomerConsentProfile view is updated. The email activation processor reads the view and issues ActivateEmail. The system records EmailActivated." — something is wrong. Gaps in the narrative reveal gaps in the specification.

  • Treat the blueprint as the living specification. Version-control it alongside the code. When a requirement changes, update the blueprint before updating the code. The blueprint should always reflect the current intended behaviour of the system, not its historical design. This discipline is what distinguishes Event Modelling from a workshop exercise — it is a long-lived artefact, not a one-time deliverable.

Common Mistakes to Avoid

Teams new to Event Modelling reliably make the same set of errors. Recognising them early prevents wasted effort.

  • CRUD thinking in event names. "UserCreated", "UserUpdated", "UserDeleted" — these are database operations masquerading as business events. They tell you what the system did technically but not what happened in the business. Replace them with "CustomerRegistered", "PreferencesUpdated", "AccountClosed". The business vocabulary is richer and more meaningful.

  • Events that are too granular. "FirstNameUpdated" and "LastNameUpdated" as separate events is almost certainly wrong. A user updates their full name in a single action; model it as a single "NameUpdated" event carrying both fields. Overly granular events create noise in the blueprint and make Given/When/Then specifications unnecessarily complex.

  • Events that are too coarse. "CustomerDataChanged" covers everything and specifies nothing. A consumer of this event cannot determine what actually changed, which means every consumer must re-derive all state every time. Appropriately scoped events — specific enough to carry business meaning, broad enough to represent a coherent business fact — are the goal.

  • Ignoring read models. A common shortcut is to specify commands and events in detail but treat views as implementation details to be figured out later. This is a mistake. Views determine what information reaches the user and what information is available to automation processors. An under-specified view is an ambiguous requirement. Every view must be elaborated with Given/When/Then just as thoroughly as every command.

  • Running the workshop without domain experts. Event Modelling is not a technique for architects to use in isolation and then hand to developers. Its value derives specifically from the shared understanding created when business domain experts and technical implementers specify the system together. A blueprint produced without domain expert input will contain incorrect business logic that no amount of technical rigour can compensate for.

  • Treating the blueprint as a one-time artefact. The blueprint produced in a discovery workshop is a starting point, not a finished product. Systems evolve. Requirements change. If the blueprint is not maintained as the system grows, it becomes misleading — worse than no specification at all, because it implies a false certainty about how the system works.

Tooling for Event Modelling

Event Modelling can be practised with any visual collaboration tool — Miro, FigJam, even physical stickies on a wall. However, for teams working in code-centric environments, dedicated tooling brings the blueprint closer to the implementation and reduces the maintenance burden of keeping it updated.

EventModeller is a VS Code extension that brings a visual Event Storming and Domain-Driven Design canvas directly into the development environment. Diagrams are stored as .eventmodel.json files alongside the code — meaning the specification lives in the same repository as the implementation, versioned together, and always in sync. This eliminates the common failure mode where the architecture diagram on Confluence describes a system from 18 months ago. EventModeller is available on the VS Code Marketplace.

For learning and community resources, eventmodeling.org is the primary reference. Adam Dymitruk's foundational post — "Event Modeling: What is it?" — remains the best single introduction to the methodology. The site also hosts the Event Modeling and Event Sourcing book on Leanpub, monthly workshops, and a Discord community where practitioners share blueprints and answer questions. For teams new to the methodology, attending a live workshop is the fastest path to genuine proficiency; reading about Event Modelling is useful, but experiencing the workshop dynamic is where the methodology's power becomes viscerally clear.

Conclusion

Event Modelling is not a silver bullet, and it is not a replacement for good engineering judgement. What it is — and this is its genuine, hard-won value — is a rigorous, visual, collaborative method for ensuring that everyone building a system understands exactly what it should do, how information flows through it, and what "done" means for every slice of work. Event Modelling closes the gap between business intent and technical implementation more reliably than any other specification technique available to solution architects today.

For architects working in event-driven domains — which, in 2026, is most serious enterprise architecture — Event Modelling is not optional. It is the blueprint. Start with the brainstorm, plot the line, wire the commands and views, elaborate with Given/When/Then, and keep the blueprint alive as your system evolves. The information completeness it enforces, the shared understanding it creates, and the flat cost curve it enables are compounding advantages that pay dividends across the full delivery lifecycle. A system that cannot be described as an Event Modelling blueprint is a system that has not yet been fully understood.

Santosh Pradhan

MarTech Solutions Architect · Munich