Thanks for sharing. For a microservice architecture, though, this may not be accurate: the load balancer usually sits behind the gateway, or a coarse-grained LB fleet sits in front of the gateway, or each service has its own LB behind the gateway.
Cool
The content is great. Thank you for sharing.
Great content. Though I'd like to know the answer to your question in part 3 (handling a hotspot account): what are the limitations of using a message queue to handle a hotspot account? I would be really grateful if you could share any resources related to it. Looking forward to hearing from you.
An MQ makes the process asynchronous, so we have to implement a way to inform the client when the payment is actually processed. If we make the client wait until their request is completed, the experience can be even worse than rate limiting during prime time (pun not intended). If we tell the client that their request is "being processed", then we need a different way to inform them whether it succeeded or not.
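A minimal sketch of that "being processed" flow (all names here are hypothetical, not from the article): the handler enqueues the payment and immediately returns an ID, a worker drains the queue, and the client polls the recorded status (or receives a webhook/push notification) to learn the outcome.

```python
import uuid
from queue import Queue

payments = {}      # payment_id -> "processing" | "succeeded" | "failed"
pending = Queue()  # stand-in for the real message queue

def submit_payment(amount):
    """Enqueue the payment and return an ID the client can poll."""
    payment_id = str(uuid.uuid4())
    payments[payment_id] = "processing"   # what the client sees right away
    pending.put((payment_id, amount))
    return payment_id

def worker():
    """Consumer side: process queued payments and record the outcome."""
    while not pending.empty():
        payment_id, amount = pending.get()
        # Toy success rule; a real worker would call the payment provider.
        payments[payment_id] = "succeeded" if amount > 0 else "failed"

pid = submit_payment(42)   # client gets "processing" plus an ID to poll
worker()                   # later, polling the ID returns "succeeded"
```

The key trade-off the comment points at: the synchronous wait disappears, but you now own a status store and a notification path that didn't exist before.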
Another one: how do we keep track of stock to mark a product "sold out"? Do we decrement stock when a request is made or when it is processed? Both options have consequences.
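One common middle ground between those two options is a reservation: stock is held when the request is made (so you never oversell) and released if the payment fails (so abandoned requests don't show "sold out" forever). A hypothetical sketch, with all names my own:

```python
stock = {"sku-1": 2}     # units actually on hand
reserved = {"sku-1": 0}  # units held by in-flight requests

def reserve(sku):
    """Hold a unit at request time; False means show 'sold out'."""
    if stock[sku] - reserved[sku] <= 0:
        return False
    reserved[sku] += 1
    return True

def finalize(sku, payment_ok):
    """Release the hold; only a successful payment consumes stock."""
    reserved[sku] -= 1
    if payment_ok:
        stock[sku] -= 1
```

Decrement-on-request alone over-reports "sold out" when payments fail; decrement-on-process alone can accept more requests than there is stock. The reservation pays for avoiding both with extra state that must itself be kept consistent (and usually given a timeout).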
Sequential write speed to SSD should be mentioned, as it is what systems like Kafka or a WAL are based on, and it is about 30x slower than writing to RAM.
Doing sequential writes on an SSD doesn't make much sense; the advantage SSDs offer is greater read speed for random data access.
If an application relies on sequential writes (Kafka is an example), you should use HDDs, since they cost less and sequential throughput isn't affected.
Kafka also recommends using HDD.
Yep, thanks for the correction. I should have said "sequential write speed to HDD should be mentioned," as it is the rationale behind Kafka and many other database storage engine designs as well (hopefully I got it right this time).
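To make the pattern in this thread concrete, here is a minimal append-only log sketch (my own illustration, not code from Kafka): every record goes to the end of the file, so the disk never has to seek between writes, which is why sequential HDD throughput is enough for this design.

```python
import os
import tempfile

class AppendLog:
    """Toy write-ahead-log-style store: append-only, sequential writes."""

    def __init__(self, path):
        self.f = open(path, "ab")  # always positioned at end of file

    def append(self, record: bytes):
        self.f.write(record + b"\n")   # sequential write, no seeks
        self.f.flush()
        os.fsync(self.f.fileno())      # force to disk, like a WAL commit

    def close(self):
        self.f.close()

path = os.path.join(tempfile.mkdtemp(), "wal.log")
log = AppendLog(path)
log.append(b"set x=1")
log.append(b"set x=2")
log.close()
```

Recovery then replays the file front to back, which is itself a sequential read; random access only appears if you add an index on top.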
Great article! Really love the L1 and L2 cache call-out, as it's rarely discussed.
Very cool! What about an architecture for a metaverse? It's not a joke; it's a very useful thought experiment!
I love this content. The overview of the systems and the little details are easy to digest.