Event Driven Architecture — In a Nutshell

E.Y.
Nov 14, 2021



Event-driven architecture (EDA) has been on the market for years, and has been closely linked with the cloud, microservices, and asynchronous workflows from its birth. In a nutshell, an event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events (Source).

An event is a change of state; it is immutable, and is notable for its:

  • Fanout: An event can be sent to multiple consumers down the pipeline, vs. traditional one-to-one communication between servers.
  • Asynchronicity: The producer is not blocked after sending an event, regardless of whether the consumer processes it or not.
  • Streaming: Events can, and often do, occur in real time as a stream. This is the opposite of batch processing, where collections of data are grouped together within a specific time interval.

Advantages of EDA:

  • Decoupling of the producers and consumers.
  • Resiliency of the eventing system: This is due to the decoupled architecture as well as its async nature, as the failure of one producer/consumer will not impact the state of the other parts. This is contrary to REST architectures, which are synchronous, so immediate measures have to be taken to handle network failures.
  • Push-based messaging: On-demand processing vs. pull-based processing, which requires more resources.
  • Single source of truth with a log of immutable events: This is achieved through event sourcing, i.e. an approach for maintaining the state of business entities by recording each change of state as an event. These events are stored in an event store, which guarantees ordering and immutability of the events. The store serves as the source of truth and allows events to be replayed to fully rebuild all other state.
  • Real-time event streams for data science.

(Source: IBM)

Challenges of EDA:

The challenges in an EDA are similar to those in all other distributed systems, mainly around unreliable networks and consistency. The loose coupling of the applications also makes it hard to achieve atomic transactions for business processes. To be clear, we are talking about fair-loss links, crash-recovery processes, and partial synchrony.

For EDA, these can be materialised as:

  • Strict ordering
  • No duplication
  • Consistency among services

Different message brokers tackle the problem differently, and just as databases offer eventual consistency and strong consistency, in event-driven architecture we have deliveries that are:

  • At most once: The producer sends a message, but the consumer may not receive it.
  • At least once: The producer sends a message, but the consumer may process duplicate messages.
  • Exactly once: The producer sends a message exactly once, and the consumer processes it exactly once.

However, note that “exactly once” delivery often comes with a penalty cost. For example, you might need to implement it with “at-least-once event delivery plus message deduplication” or “distributed snapshot/state checkpointing” (Source), both of which introduce complexity and external dependencies into the system. This is also illustrated in our intro to AWS FIFO and standard queues later on. What we can do is make the consumers idempotent, but this too needs extra logic wired into the system, as sketched below.
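
To make this concrete, here is a minimal sketch of an idempotent consumer in Python. The event shape and the handle_order handler are made up for illustration, and the in-memory set stands in for what should be a persistent store in practice:

# Minimal sketch: at-least-once delivery made safe by consumer-side
# deduplication. `processed_ids` stands in for a persistent store
# (e.g. a database table); an in-memory set would not survive a restart.

processed_ids = set()

def handle_order(payload: dict) -> None:
    # Hypothetical business logic; replace with your own handler.
    print(f"processing order {payload['order_id']}")

def consume(event: dict) -> None:
    event_id = event["id"]
    if event_id in processed_ids:
        return  # duplicate delivery: safe to drop, the effect already happened
    handle_order(event["payload"])
    processed_ids.add(event_id)

event = {"id": "evt-1", "payload": {"order_id": 42}}
consume(event)
consume(event)  # redelivered duplicate, ignored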

For ordering, this can normally be done through event metadata such as a timestamp or an auto-increasing ID, which lets a consumer detect gaps or reorder a small buffer of events, as in the sketch below.
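
A minimal sketch of that idea, assuming each event carries a unique, auto-increasing seq field (an assumption for illustration, not from any particular broker):

# Minimal sketch: releasing events in strict order using an auto-increasing ID.
import heapq

buffer: list[tuple[int, dict]] = []  # min-heap keyed on seq (assumed unique)
next_seq = 1

def on_event(event: dict) -> None:
    global next_seq
    heapq.heappush(buffer, (event["seq"], event))
    # Release events only once the next expected sequence number has arrived.
    while buffer and buffer[0][0] == next_seq:
        _, ready = heapq.heappop(buffer)
        print("in-order:", ready["seq"])
        next_seq += 1

for e in [{"seq": 2}, {"seq": 1}, {"seq": 3}]:
    on_event(e)  # prints seq 1, 2, 3 despite out-of-order arrival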

Consistency is hard not just in EDA, as it is difficult to reach consensus in a distributed world. To keep state in sync, for example, one option is to use event sourcing, which records a sequence of events in an append-only log; the current state can then be derived by replaying the log from top to bottom.
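
A minimal event-sourcing sketch, using a made-up bank-account domain for illustration: state is never stored directly, only derived by replaying the log from top to bottom.

# Minimal sketch of event sourcing with an append-only, in-memory log.
log: list[dict] = []  # the event store; events are immutable once appended

def append(event: dict) -> None:
    log.append(event)  # append-only: no updates, no deletes

def current_balance() -> int:
    # Replay the full log from top to bottom to rebuild the current state.
    balance = 0
    for event in log:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

append({"type": "deposited", "amount": 100})
append({"type": "withdrawn", "amount": 30})
append({"type": "deposited", "amount": 5})
print(current_balance())  # 75, derived purely from the log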

Components in EDA:

Typical components consist of: Event Producers → Event Channels → Event Consumers

EDA Topologies:

  • Mediator topology:

The mediator topology pattern is used with systems that need orchestration to process events (e.g. steps for different stages of a relatively complex data flow). Let’s say you order something from an e-commerce website: a series of events will be fired after you confirm payment, e.g. subtract the stock number, send the order to fulfilment, calculate VAT, send the order to finance, wait for approval, send a confirmation email, etc. A mediator is much needed in this case to decide the sequence of the steps and whether some can be grouped together to be processed in parallel.

The flow starts with producers sending events (called “initial events”) to the mediator; once they are received, the mediator coordinates the work by asynchronously sending “processing events” through event channels (message queues or topics) for the corresponding processors to pick up.

Note that the mediator does not perform business logic on the initial event; it only dispatches it to the required steps.
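
A minimal mediator sketch, based on the e-commerce flow above (the step names and channel layout are illustrative, not a real framework; plain queues stand in for real event channels):

# Minimal mediator sketch: the mediator only orchestrates, no business logic.
from queue import Queue

channels = {"stock": Queue(), "fulfilment": Queue(), "finance": Queue()}

def mediate(initial_event: dict) -> None:
    # Decide which processing events to dispatch for this initial event.
    # The stock and finance steps are independent here, so they could be
    # processed in parallel by their respective processors.
    if initial_event["type"] == "order_confirmed":
        channels["stock"].put({"step": "subtract_stock", **initial_event})
        channels["finance"].put({"step": "calculate_vat", **initial_event})
        channels["fulfilment"].put({"step": "ship_order", **initial_event})

mediate({"type": "order_confirmed", "order_id": 42})
print(channels["stock"].get())  # {'step': 'subtract_stock', ...}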

  • Broker topology:

The broker topology pattern is used with event flows that are relatively simple and do not require extra coordination. There is no mediator selectively sending messages to different processors in this pattern; instead, it relies on the processors themselves to respond to events.

The broker serves as a pipe among different processors and can form a chain of events (like a relay race): Processor/Producer A emits event Aa → Broker → Processor B processes it and emits event Bb → Broker → Processor C processes it and emits event Cc → Broker …

This can continue until there are no more events published for that business process.
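
A minimal broker-topology sketch (a plain in-process pub/sub pipe, standing in for a real broker such as RabbitMQ or Kafka): each processor reacts to an event and emits the next, forming the relay-race chain with no central coordinator.

# Minimal sketch: the broker is a dumb pipe, processors drive the chain.
subscribers: dict[str, list] = {}

def subscribe(event_type: str, handler) -> None:
    subscribers.setdefault(event_type, []).append(handler)

def publish(event_type: str, payload: dict) -> None:
    for handler in subscribers.get(event_type, []):
        handler(payload)

# Processor B reacts to Aa and emits Bb; C reacts to Bb and emits Cc.
subscribe("Aa", lambda p: (print("B got Aa"), publish("Bb", p)))
subscribe("Bb", lambda p: (print("C got Bb"), publish("Cc", p)))
subscribe("Cc", lambda p: print("chain ends"))

publish("Aa", {"order_id": 42})  # kicks off the relay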

Event Processing:

  • Simple event processing patterns (SEP): An event from a publisher immediately triggers an action from the subscriber.
  • Event stream processing patterns (ESP): Streams of events are filtered, aggregated, or enriched as they flow through the system.
  • Complex event processing patterns (CEP): Patterns are detected across multiple events, often within a time window (see the sketch below).
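
To make the distinction concrete, here is a minimal CEP-style sketch. The rule (three failed logins within 60 seconds) is a made-up example: SEP would react to each failure alone, while CEP correlates several events over a window.

# Minimal sketch: detecting a pattern across multiple events over a window,
# rather than reacting to each event in isolation.
from collections import deque

WINDOW_SECONDS = 60
failures: deque[float] = deque()  # timestamps of recent failed logins

def on_login_failed(timestamp: float) -> None:
    failures.append(timestamp)
    # Evict events that have fallen out of the time window.
    while failures and timestamp - failures[0] > WINDOW_SECONDS:
        failures.popleft()
    if len(failures) >= 3:
        print("complex event: possible brute-force attempt")

for t in [0.0, 20.0, 40.0]:
    on_login_failed(t)  # fires on the third failure within the window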

Event Modelling:

  • Event types: Use these to classify events under different business domains, and apply a hierarchical structure to break down the dependencies between them. This is mainly used for routing.
  • Event schema: An event schema contains event metadata (such as name, timestamp, source) and a payload (data regarding the change of state). Metadata is very useful for correlating and ordering, as well as filtering.
  • Versioning: Just like API versioning, you should version your event model.
  • Serialisation: Different formats have their pros and cons. While JSON is human-readable, it has performance costs; Protobuf, as a binary format, is very efficient but difficult to debug.
  • Partitioning: Partitioning is important for the concurrent processing of events. See Kafka as an example.
  • Ordering: Some events require strict ordering while others do not. This very much depends on your business requirements.
  • Storage: Events can be transient, e.g. with streaming data, but you might want to store them persistently somewhere else, or have a customised policy on their removal and retention.

This is an example event model from AWS EventBridge:

{
  "version": "0",
  "id": "6a7e8feb-b491-4cf7-a9f1-bf3703467718",
  "detail-type": "EC2 Instance State-change Notification",
  "source": "aws.ec2",
  "account": "111122223333",
  "time": "2017-12-22T18:43:48Z",
  "region": "us-west-1",
  "resources": [
    "arn:aws:ec2:us-west-1:123456789012:instance/i-1234567890abcdef0"
  ],
  "detail": {
    "instance-id": "i-1234567890abcdef0",
    "state": "terminated"
  }
}

Lastly, Message Queue vs. Message Broker

One might wonder about the differences between a message queue and a message broker, which we will illustrate more in the later examples with SQS and EventBridge. In brief, an MQ is like a tunnel or a medium that can move data between applications (stateless). It often serves as a buffer to decouple the message sending/handling logic between applications.

Message brokers are built on top of message queues and have wider capabilities, especially around understanding the context of the messages sent to them (see the sketch after this list):

  • Filter: Routes messages based on predefined matching policies.
  • Transform: Tweaks and modifies message formats/contents before sending them to the consumer.
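
A minimal sketch of those two capabilities: a plain in-process stand-in, not any real broker's API, with a policy format that loosely mimics the EventBridge example above.

# Minimal sketch of broker-side filtering and transformation; queues and
# delivery are omitted, only the routing decision and reshaping are shown.
def matches(policy: dict, event: dict) -> bool:
    # Filter: route only events whose fields match the predefined policy.
    return all(event.get(key) == value for key, value in policy.items())

def transform(event: dict) -> dict:
    # Transform: reshape the message before it reaches the consumer.
    return {"type": event["detail-type"], "body": event["detail"]}

policy = {"source": "aws.ec2"}
event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"state": "terminated"},
}

if matches(policy, event):
    print(transform(event))  # the consumer sees the reshaped message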

That’s pretty much it!

Happy Reading!
