This week’s system design refresher:
Top Kafka Use Cases You Should Know (YouTube video)
How Uber Served 40 Million Reads with an Integrated Redis Cache
What makes AWS Lambda so fast?
Why do we need to use a distributed lock?
SPONSOR US
The Enterprise Ready Conference for engineering leaders (Sponsored)
The Enterprise Ready Conference is a one-day event in SF, bringing together product and engineering leaders shaping the future of enterprise SaaS.
The event features a curated list of speakers with direct experience building for the enterprise, including OpenAI, Vanta, Checkr, Dropbox, and Canva.
Topics include advanced identity management, compliance, encryption, and logging — essential yet complex features that most enterprise customers require.
If you are a founder, exec, PM, or engineer tasked with the enterprise roadmap, this conference is for you. You’ll get detailed insights from industry leaders who have years of experience navigating the same challenges you face today. And best of all, it’s completely free since it’s hosted by WorkOS.
Top Kafka Use Cases You Should Know
How Uber Served 40 Million Reads with an Integrated Redis Cache
There are 3 main parts of the implementation:
CacheFront Read and Writes with CDC
Uber built CacheFront, a caching solution that integrates Redis with Docstore, its MySQL-backed database.
Rather than having each microservice talk to Redis directly, Docstore’s query engine communicates with Redis for read requests.
For cache hits, the query engine fetches data from Redis. For cache misses, the request goes to the storage engine and the database.
In the case of writes, Docstore’s CDC service (Flux) invalidates the records in Redis. It tails MySQL binlog events to trigger the invalidation.
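The read path and the CDC-driven invalidation can be sketched roughly as follows. This is a toy model, not Uber’s actual code: plain dicts stand in for Redis and the MySQL-backed storage engine, and the function names are illustrative.

```python
# Toy sketch of CacheFront's read path plus CDC invalidation.
# Plain dicts stand in for Redis and MySQL; names are illustrative.

cache = {}                          # stands in for Redis
database = {"user:1": "alice"}      # stands in for the MySQL-backed storage

def read(key):
    """Query engine: try the cache first, fall back to the database."""
    if key in cache:                # cache hit
        return cache[key]
    value = database.get(key)       # cache miss: go to the storage engine
    if value is not None:
        cache[key] = value          # populate the cache for future reads
    return value

def on_binlog_event(key, new_value):
    """CDC service (Flux): react to a binlog event by invalidating."""
    database[key] = new_value
    cache.pop(key, None)            # invalidate; the next read repopulates

read("user:1")                       # miss: loads from DB and warms cache
on_binlog_event("user:1", "alice_v2")
read("user:1")                       # miss again after invalidation
```

The key design point is that invalidation is driven by the binlog rather than by application writes, so the cache stays consistent even for writes that bypass the caching layer.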
Multi-Region Cache Warming with Redis Streaming
A region failover can result in cache misses that overload the database.
To handle this, Uber’s engineering team uses cross-region Redis replication. This is done by tailing the Redis write stream to replicate keys to the remote region.
In the remote region, the stream consumer issues read requests to the query engine that reads the database and updates the cache.
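The warming flow can be sketched with in-memory stand-ins for the Redis write stream, the remote replica, and the remote cache (the structure follows the description above; all names are illustrative):

```python
# Toy sketch of cross-region cache warming: the home region publishes
# written keys to a stream; a consumer in the remote region replays
# them as reads so the remote cache is warm before any failover.

write_stream = []                     # stands in for the Redis write stream
remote_db = {"order:7": "shipped"}    # remote region's DB replica
remote_cache = {}                     # remote region's Redis

def home_region_write(key):
    # After writing locally, publish the key (not the value) to the stream.
    write_stream.append(key)

def remote_stream_consumer():
    # Tail the stream; for each key, issue a read through the remote
    # query engine, which reads the replica and populates the cache.
    while write_stream:
        key = write_stream.pop(0)
        value = remote_db.get(key)
        if value is not None:
            remote_cache[key] = value

home_region_write("order:7")
remote_stream_consumer()
# remote_cache now holds "order:7" before failover traffic arrives
```

Replicating keys rather than values and re-reading them through the remote query engine avoids shipping possibly stale values across regions.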
Redis and Docstore Sharding
Teams across Uber use Docstore, and some generate a huge number of requests.
Both Redis and Docstore instances are sharded or partitioned to handle the load. But a single Redis cluster going down may create a hot DB shard.
To prevent this, they partitioned the Redis cluster using a scheme that was different from the DB sharding. This ensures that the load is evenly distributed.
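The effect of independent partitioning schemes can be illustrated with a toy hash-sharding model (the salt-based scheme below is for illustration only, not Uber’s actual implementation):

```python
# Toy illustration of why Redis and the DB use different partitioning
# schemes: when one Redis shard fails, its keys' cache misses should
# fan out across many DB shards instead of hammering a single one.
import hashlib

NUM_SHARDS = 4

def shard(key: str, salt: str) -> int:
    """Deterministic hash partitioning; the salt selects the scheme."""
    digest = hashlib.sha256((salt + key).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

keys = [f"row:{i}" for i in range(1000)]

# Same scheme on both layers: every key on failed Redis shard 2 also
# lives on DB shard 2, so all the misses land on one hot DB shard.
failed_same = [k for k in keys if shard(k, "shared") == 2]
db_hit_same = {shard(k, "shared") for k in failed_same}    # just {2}

# Independent schemes: keys from failed Redis shard 2 scatter across
# DB shards, so the miss load is spread out.
failed_keys = [k for k in keys if shard(k, "redis") == 2]
db_shards_hit = {shard(k, "db") for k in failed_keys}      # multiple shards
```

With matching schemes the dead cache shard maps one-to-one onto a single DB shard; with independent schemes the same failure spreads roughly evenly across all DB shards.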
Over to you: Would you have done something differently?
Latest articles
If you’re not a paid subscriber, here’s what you missed.
To receive all the full articles and support ByteByteGo, consider subscribing:
What makes AWS Lambda so fast?
There are 4 main pillars:
Function Invocation
AWS Lambda supports synchronous and asynchronous invocation.
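The difference between the two modes can be shown with a toy dispatcher (plain Python, not the AWS SDK; the queue and poller loosely mimic the internal flow described below):

```python
# Toy simulation of Lambda's two invocation modes: synchronous calls
# run inline and return the result; asynchronous calls enqueue an
# event that a poller processes later, returning only an ack.
from collections import deque

internal_queue = deque()   # stands in for Lambda's internal SQS queue
results = []

def my_function(event):
    return f"processed {event}"

def invoke(event, invocation_type="RequestResponse"):
    if invocation_type == "RequestResponse":   # synchronous: caller waits
        return my_function(event)
    internal_queue.append(event)               # asynchronous ("Event")
    return "202 Accepted"                      # caller gets an ack only

def poller():
    # Pollers drain the queue and hand events to the function.
    while internal_queue:
        results.append(my_function(internal_queue.popleft()))

print(invoke("a"))                            # prints: processed a
print(invoke("b", invocation_type="Event"))   # prints: 202 Accepted
poller()                                      # "b" is processed here
```

The invocation-type names mirror the real `InvocationType` parameter of the Lambda Invoke API (`RequestResponse` vs `Event`).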
In synchronous invocation, the caller directly calls the Lambda function using the AWS CLI, an SDK, or other AWS services.
In asynchronous invocation, the caller doesn’t wait for the function’s response. The request is authorized and an event is placed in an internal SQS queue. Pollers read messages from the queue and send them for processing.
Assignment Service
The Assignment Service manages the execution environments.
The service is written in Rust for high performance and is divided into multiple partitions with a leader-follower approach for high availability.
The state of execution environments is written to an external journal log.
Firecracker MicroVM
Firecracker is a lightweight virtual machine manager designed for running serverless workloads such as AWS Lambda and AWS Fargate.
It uses Linux’s Kernel-based Virtual Machine (KVM) to create and manage secure, fast-booting microVMs.
Component Storage
AWS Lambda also has to manage the state consisting of input data and function code.
To make it efficient, it uses multiple techniques:
Chunking to store the container images more efficiently.
Convergent encryption to secure the shared data. This involves appending additional data to each chunk before computing its hash, making the derived key more robust.
SnapStart feature to reduce cold start latency by pre-initializing the execution environment.
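As a rough illustration of how convergent encryption enables chunk deduplication (a toy XOR cipher for demonstration only, not real cryptography and not Lambda’s actual scheme):

```python
# Toy sketch of convergent encryption: the key is derived from the
# chunk's own content, so identical chunks encrypt to identical
# ciphertexts and can be deduplicated in shared storage, while the
# content stays unreadable without the plaintext-derived key.
import hashlib

def convergent_encrypt(chunk: bytes) -> bytes:
    key = hashlib.sha256(chunk).digest()             # key from content
    keystream = hashlib.sha256(key + b"stream").digest()
    # Toy XOR "cipher" to show determinism; NOT real cryptography.
    return bytes(c ^ keystream[i % len(keystream)]
                 for i, c in enumerate(chunk))

ct1 = convergent_encrypt(b"shared library chunk")
ct2 = convergent_encrypt(b"shared library chunk")
ct3 = convergent_encrypt(b"tenant-specific chunk data")

print(ct1 == ct2)  # True: identical chunks -> one stored copy
print(ct1 == ct3)  # False: different content, different ciphertext
```

Because two tenants uploading the same library chunk produce byte-identical ciphertext, the storage layer can keep a single copy without ever seeing the plaintext key material of either tenant.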
Over to you: Which other features do you think make AWS Lambda fast?
Why do we need to use a distributed lock?
A distributed lock is a mechanism that ensures mutual exclusion across a distributed system.
Top 6 Use Cases for Distributed Locks
Leader Election
Distributed locks can be used to ensure that only one node becomes the leader at any given time.
Task Scheduling
In a distributed task scheduler, distributed locks ensure that a scheduled task is executed by only one worker node, preventing duplicate execution.
Resource Allocation
When managing shared resources like file systems, network sockets, or hardware devices, distributed locks ensure that only one process can access the resource at a time.
Microservices Coordination
When multiple microservices need to perform coordinated operations, such as updating related data in different databases, distributed locks ensure that these operations are performed in a controlled and orderly manner.
Inventory Management
In e-commerce platforms, distributed locks can manage inventory updates to ensure that stock levels are accurately maintained when multiple users attempt to purchase the same item simultaneously.
Session Management
When handling user sessions in a distributed environment, distributed locks can ensure that a user session is only modified by one server at a time, preventing inconsistencies.
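The core acquire/release pattern behind these use cases can be sketched as follows. An in-memory dict stands in for a real coordination store such as Redis or ZooKeeper; a production lock would also need TTLs and fencing tokens to survive crashed or paused clients.

```python
# Minimal sketch of a token-based distributed lock (in-memory stand-in
# for a coordination store; illustrative only).
import uuid

locks = {}  # lock_name -> owner token

def acquire(name):
    """Take the lock if free (like Redis SET name token NX)."""
    if name in locks:
        return None                  # someone else holds it
    token = uuid.uuid4().hex
    locks[name] = token
    return token

def release(name, token):
    """Release only if we still own it, so a stale client can't
    free a lock that was reacquired by another worker."""
    if locks.get(name) == token:
        del locks[name]
        return True
    return False

t1 = acquire("inventory:item42")    # first worker wins the lock
t2 = acquire("inventory:item42")    # second worker is refused (None)
release("inventory:item42", t1)     # owner releases; lock is free again
```

Checking the token on release is what makes the pattern safe: without it, a worker that stalled past its lease could release a lock that a different worker now legitimately holds.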
SPONSOR US
Get your product in front of more than 1,000,000 tech professionals.
Our newsletter puts your products and services directly in front of an audience that matters - hundreds of thousands of engineering leaders and senior engineers - who have influence over significant tech decisions and big purchases.
Space Fills Up Fast - Reserve Today
Ad spots typically sell out about 4 weeks in advance. To ensure your ad reaches this influential audience, reserve your space now by emailing sponsorship@bytebytego.com