6 Comments

Great content! It would be even better if I could zoom in on the images to see the diagrams clearly.


Please review this newsletter; the images are not opening.


I don't see any body under the "EP106: How Does JavaScript Work?" heading. This is all I see:

This week’s system design refresher:

Roadmap for Learning SQL (YouTube video)

Can Kafka lose messages?

9 Best Practices for building microservices

Roadmap for Learning Cyber Security

How Does JavaScript Work?

SPONSOR US

New Relic IAST tops the OWASP Benchmark with a 100% accuracy score (Sponsored)

New Relic Interactive Application Security Testing (IAST) allows security and engineering teams to save time by focusing on real application security problems with zero false positives, as validated by the OWASP benchmark result of 100% accuracy.

Get started for free

Roadmap for Learning SQL

Can Kafka lose messages?

Error handling is one of the most important aspects of building reliable systems.

Today, we will discuss an important topic: Can Kafka lose messages?

A common belief among many developers is that Kafka, by its very design, guarantees no message loss. However, understanding the nuances of Kafka's architecture and configuration is essential to truly grasp how and when it might lose messages, and more importantly, how to prevent such scenarios.

The diagram below shows how a message can be lost during its lifecycle in Kafka.

[Diagram: points in the producer → broker → consumer lifecycle where a message can be lost]

Producer

When we call producer.send() to send a message, it doesn't go to the broker directly. Two threads and a queue are involved in the message-sending process:

1. Application thread

2. Record accumulator (the buffering queue)

3. Sender thread (I/O thread)

We need to configure proper ‘acks’ and ‘retries’ for the producer to make sure messages are sent to the broker.
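As a rough sketch, assuming the Java client, a broker at localhost:9092, and a hypothetical "orders" topic, a loss-averse producer configuration might look like this:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SafeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                // leader waits for all in-sync replicas
        props.put("retries", "2147483647");      // retry transient failures instead of dropping
        props.put("enable.idempotence", "true"); // retries cannot introduce duplicates

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "payload"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Handle the failure explicitly; never assume the send succeeded.
                            exception.printStackTrace();
                        }
                    });
        } // close() flushes any buffered records before returning
    }
}
```

With acks=all, the broker only acknowledges a send once every in-sync replica has the record, and idempotence keeps the retries from writing duplicates.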

Broker

A broker cluster should not lose messages when it is functioning normally. However, we need to understand which extreme situations might lead to message loss:

1. Messages are usually flushed to disk asynchronously for higher I/O throughput, so if the instance goes down before the flush happens, the messages are lost.

2. The replicas in the Kafka cluster need to be properly configured to hold a valid copy of the data. Deterministic data synchronization matters here: the replication factor, min.insync.replicas, and the unclean leader election setting together decide whether an up-to-date replica is guaranteed to survive a broker failure.
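As an illustrative sketch (the topic name, partition count, and broker address here are assumptions), these replication guarantees can be pinned down when creating a topic with Kafka's Java AdminClient:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateSafeTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: every partition lives on 3 brokers.
            NewTopic topic = new NewTopic("orders", 3, (short) 3).configs(Map.of(
                    "min.insync.replicas", "2",               // a write needs 2 live in-sync replicas
                    "unclean.leader.election.enable", "false" // never promote an out-of-sync replica
            ));
            admin.createTopics(List.of(topic)).all().get();   // block until the broker confirms
        }
    }
}
```

With replication factor 3 and min.insync.replicas=2, an acks=all write must land on at least two replicas before it is acknowledged, so one broker can fail without losing acknowledged data.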

Consumer

Kafka offers different ways to commit messages. Auto-committing might acknowledge the processing of records before they are actually processed. When the consumer is down in the middle of processing, some records may never be processed.

A good practice is to combine synchronous and asynchronous commits: use asynchronous commits in the processing loop for higher throughput, and a synchronous commit in the shutdown or exception path to make sure the last offset is always committed.
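Assuming the Java client with auto-commit disabled (the topic, group id, and process() method are hypothetical placeholders), the combined-commit pattern looks roughly like this:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "order-processors");        // hypothetical consumer group
        props.put("enable.auto.commit", "false");         // take manual control of offsets
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("orders"));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // process first, commit after
                }
                consumer.commitAsync(); // non-blocking commit keeps the loop fast
            }
        } finally {
            try {
                consumer.commitSync(); // blocking commit so the final offsets are not lost
            } finally {
                consumer.close();
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value()); // placeholder for real business logic
    }
}
```

commitAsync() keeps the hot loop fast and tolerates an occasional failed commit, while the commitSync() in the finally block retries until it succeeds, so the consumer never shuts down with uncommitted progress.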

Latest articles

If you’re not a paid subscriber, here’s what you missed.

A Crash Course in CI/CD

A Crash Course in IPv4 Addressing

A Brief History of Scaling Netflix

15 Open-Source Projects That Changed the World

The Top 3 Resume Mistakes Costing You the Job

To receive all the full articles and support ByteByteGo, consider subscribing:


9 Best Practices for building microservices

Creating a system using microservices is extremely difficult unless you follow some strong principles.


9 best practices that you must know before building microservices:

Design For Failure

A distributed system with microservices is going to fail.

You must design the system to tolerate failure at multiple levels such as infrastructure, database, and individual services. Use circuit breakers, bulkheads, or graceful degradation to deal with failures.
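As a minimal illustration of the circuit breaker idea (not a production implementation; the class and method names are made up), the sketch below opens after a run of consecutive failures, serves a fallback while open, and retries after a cool-down:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class CircuitBreaker {
    private final int failureThreshold;
    private final Duration coolDown;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public CircuitBreaker(int failureThreshold, Duration coolDown) {
        this.failureThreshold = failureThreshold;
        this.coolDown = coolDown;
    }

    public synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (openedAt != null) {
            if (Instant.now().isBefore(openedAt.plus(coolDown))) {
                return fallback.get(); // circuit open: degrade gracefully
            }
            openedAt = null; // cool-down elapsed: half-open, allow one real attempt
        }
        try {
            T result = action.get();
            consecutiveFailures = 0; // success closes the circuit again
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // trip: reject calls for the cool-down period
            }
            return fallback.get();
        }
    }
}
```

A call site might wrap a flaky downstream call, e.g. breaker.call(() -> inventoryClient.getStock(sku), () -> cachedStock(sku)), where both names are hypothetical; libraries like Resilience4j provide hardened versions of the same pattern.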

Build Small Services

...snip...


Oh, the JavaScript section is all the way at the end of the page. I guess I wasn't expecting to see 5 intermediate articles between the title and the body.


I can't understand this.


Short but crisp content, gold.
