Message Queue — In a Nutshell


What is a message queue?

A message queue is a buffer that decouples the sending and receiving of messages. Its key properties:

  • Asynchronous by nature: the producer does not need to wait for the consumer to retrieve and process the message.
  • Decoupling: separating the posting and receipt of messages allows multiple producers and consumers to communicate through one or more queues.

Workflow:

  • Producer pushes a message to the queue.
  • Consumer polls a message from the queue (or peeks at the next available message without removing it from the queue).
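The push/poll workflow above can be sketched with Python's standard-library `queue.Queue`, an in-process stand-in for a real message broker:

```python
import queue
import threading

q = queue.Queue()

def producer():
    # Producer pushes messages and moves on -- it never waits for the consumer.
    for i in range(3):
        q.put(f"order-{i}")

def consumer():
    # Consumer polls messages off the queue at its own pace.
    for _ in range(3):
        msg = q.get()
        print("processed", msg)
        q.task_done()

t = threading.Thread(target=consumer)
t.start()
producer()
q.join()  # block until every message has been processed
t.join()
```

The producer returns as soon as `put` enqueues the message; only the consumer thread pays the processing cost.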

When to use:

  • Decoupling workloads: separate message processing from sending so the producer’s thread is not blocked. This is very common in asynchronous, event-driven systems.
  • Load balancing: when processing a message is expensive, a queue can distribute the load across additional consumers (workers).
  • Load levelling: during peak request windows, a sudden increase in message volume can overwhelm the current workers (this is called back pressure). The queue acts as a buffer, amortizing the load on workers over time. It can also implement a throttling mechanism: when the queue grows beyond a certain size, it can reject incoming requests with status code 503 so the workers aren’t worked to death.
  • Reliability: a dead-letter queue can be introduced to capture messages that the consumer fails to process, so they can be inspected or retried later.
  • Resilient message handling: a message queue can add resiliency to the consumers in your system. For example, a consumer can “peek” and lock the next available message in a queue. This retrieves a copy of the message but locks the original in the queue to prevent it from being processed by another consumer. If processing fails, the message is released and becomes available again.
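The peek-lock behavior in the last bullet can be illustrated with a minimal in-memory sketch. The class name `LockingQueue` and its methods (`peek_lock`, `complete`, `abandon`) are hypothetical, loosely modeled on broker semantics, not a real broker API:

```python
import time
import threading

class LockingQueue:
    """Hypothetical in-memory sketch of peek-lock semantics.
    Locked messages stay in the queue; if the consumer never completes
    them, the lock expires and the message becomes visible again."""

    def __init__(self, lock_timeout=5.0):
        self._messages = {}   # id -> [payload, locked_until]
        self._next_id = 0
        self._timeout = lock_timeout
        self._mutex = threading.Lock()

    def push(self, payload):
        with self._mutex:
            self._messages[self._next_id] = [payload, 0.0]
            self._next_id += 1

    def peek_lock(self):
        # Return a copy of the first unlocked message and lock the original.
        now = time.time()
        with self._mutex:
            for msg_id, entry in self._messages.items():
                if entry[1] <= now:          # not locked, or lock expired
                    entry[1] = now + self._timeout
                    return msg_id, entry[0]
        return None

    def complete(self, msg_id):
        # Processing succeeded: remove the original message for good.
        with self._mutex:
            self._messages.pop(msg_id, None)

    def abandon(self, msg_id):
        # Processing failed: release the lock so another worker can retry.
        with self._mutex:
            if msg_id in self._messages:
                self._messages[msg_id][1] = 0.0
```

While one consumer holds the lock, `peek_lock` skips that message for everyone else; `abandon` (or a lock timeout) makes it visible again instead of losing it.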

Patterns

  • One-way messaging:

The sender simply posts a message to the queue and leaves it to the receiver to process it at some point.

  • Request/response:

The sender posts a message to a queue and waits for an acknowledgment from the receiver. This is more reliable than one-way messaging, as the sender can implement custom retry or error-handling logic when no response arrives within a given timespan.

However, this usually requires a separate communications channel in the form of a dedicated message queue to which the receiver can post its response messages.

E.g. the ReplyTo property in Azure Service Bus Queues.
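A rough in-process sketch of the request/response pattern, using two `queue.Queue` instances (the second playing the role of the dedicated reply channel) and a correlation id to match responses to requests. The message fields here are assumptions for illustration, not any broker's schema:

```python
import queue
import threading
import uuid

requests = queue.Queue()
replies = queue.Queue()   # dedicated channel for responses (cf. ReplyTo)

def worker():
    while True:
        msg = requests.get()
        if msg is None:
            break
        # Echo back a result tagged with the request's correlation id.
        replies.put({"correlation_id": msg["correlation_id"],
                     "result": msg["body"].upper()})

threading.Thread(target=worker, daemon=True).start()

corr_id = str(uuid.uuid4())
requests.put({"correlation_id": corr_id, "body": "hello"})

# The sender waits for the response; the timeout is where custom
# retry or error-handling logic would hook in.
try:
    reply = replies.get(timeout=2.0)
    assert reply["correlation_id"] == corr_id
    print(reply["result"])
except queue.Empty:
    print("no response -- retry or escalate")
```

The correlation id is what lets a single reply queue serve many outstanding requests: each sender picks out only the responses tagged with its own id.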

  • Broadcast/Fanout:

The sender posts a message to a queue, and multiple receivers can each read a copy of the message. This often works together with the publisher/subscriber model.

Some filtering is possible via message metadata, e.g. messages labeled “red” are sent to receiver A and “blue” to receiver B.
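A minimal sketch of fanout with label-based filtering: each subscriber gets its own queue, and the broker delivers a copy to every subscriber whose filter matches. `FanoutBroker` and its methods are hypothetical names for illustration:

```python
import queue

class FanoutBroker:
    """Hypothetical sketch: every subscriber gets its own queue, and an
    optional label filter decides which message copies it receives."""

    def __init__(self):
        self._subscribers = []  # list of (label_filter, queue) pairs

    def subscribe(self, label=None):
        q = queue.Queue()
        self._subscribers.append((label, q))
        return q

    def publish(self, message, label=None):
        # Each matching subscriber receives its own copy of the message.
        for wanted, q in self._subscribers:
            if wanted is None or wanted == label:
                q.put(message)

broker = FanoutBroker()
red = broker.subscribe(label="red")    # receiver A
blue = broker.subscribe(label="blue")  # receiver B
audit = broker.subscribe()             # no filter: receives everything

broker.publish("stop", label="red")
broker.publish("go", label="blue")

print(red.get())    # stop
print(blue.get())   # go
```

Unlike the load-balancing case, where workers compete for the same messages, here every matching subscriber gets its own copy.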


That’s it!

Happy Reading!
