8 Comments
Jia Long Loh

Original author here, thanks for analysing my blog post and sharing it here! Always been a fan of Alex and bytebytego, it's an honour to be featured.

Naina Chaturvedi

++ Good post. Also, start here: a compilation of 100+ of the most asked System Design, ML System Design, and LLM System Design case studies:

https://open.substack.com/pub/naina0405/p/important-compilation-of-most-asked?r=14q3sp&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Jay Vercellone

> In open source, the number of GitHub stars can reflect how widely a project is used and how much community support it may receive.

Why would GitHub stars reflect this? There's real usage data that can be used instead. GitHub stars can be gamed, and they don't necessarily mean that someone is using the library. For instance, I use GitHub stars to "follow" or "bookmark" a project; others use them as the equivalent of a "like" button on social media.

On the other hand, crates.io provides download stats for each crate in its "Stats Overview" section. Repo forks are also a metric that can indicate how big the maintainer/contributor community is.
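As a minimal sketch of using those download stats programmatically: the crates.io API exposes a per-crate endpoint (`https://crates.io/api/v1/crates/<name>`) whose JSON includes total and recent download counts. The payload below is a trimmed, illustrative sample rather than live data, and the exact field set is an assumption about the response shape.

```python
import json

# Trimmed sample of what a crates.io crate response looks like.
# A real client would GET https://crates.io/api/v1/crates/redis instead.
sample_payload = json.dumps({
    "crate": {
        "id": "redis",
        "downloads": 12345678,        # all-time downloads
        "recent_downloads": 2345678,  # downloads in the last 90 days
    }
})

def extract_downloads(payload: str) -> tuple[int, int]:
    """Pull total and recent download counts out of a crates.io crate response."""
    data = json.loads(payload)["crate"]
    return data["downloads"], data["recent_downloads"]

total, recent = extract_downloads(sample_payload)
print(f"total={total} recent={recent}")
```

Recent downloads are arguably the better adoption signal here, since an old, abandoned crate can still carry a large all-time count.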

Neural Foundry

Fantastic deep dive! The 70% cost reduction is remarkable, especially when latency remained similar. Your point about Rust not being 'faster' but more 'efficient' is crucial: Go's GC pauses are usually negligible for most services, but the lower resource utilization really matters at scale.

The library evaluation approach was smart, balancing GitHub stars with official backing (like the Scylla driver). Interesting that you had to switch Redis clients mid-project when redis-rs didn't support async properly. The Datadog StatsD client choice (Cadence, with <500 stars) shows that sometimes intuitive APIs matter more than popularity.

Question: did you benchmark memory usage differences? Rust's zero-copy patterns and lack of GC should show dramatic improvements there too, which could explain part of that 70% savings.
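For the kind of measurement this question asks about, one cheap starting point is peak resident set size as reported by the OS. A minimal sketch, assuming a Unix host (Python's `resource` module; note that `ru_maxrss` units differ by platform):

```python
import resource

def peak_rss() -> int:
    # Peak resident set size of this process so far.
    # Reported in kibibytes on Linux, bytes on macOS.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
buf = bytearray(50 * 1024 * 1024)  # allocate ~50 MiB so the delta is visible
after = peak_rss()
print(f"peak RSS grew from {before} to {after}")
```

For a real Go-vs-Rust comparison you would watch the service's RSS under steady load (e.g. via container metrics) rather than a one-shot snapshot, since GC'd runtimes tend to hold freed memory for reuse.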

Arun Manivannan

Great article! One minor observation: as the article claims, the real win here is resource efficiency rather than latency improvements, which makes perfect sense for a heavily I/O-bound Counter Service, where the bottleneck is I/O wait time rather than CPU processing.
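The I/O-bound point can be illustrated with a toy sketch: when handlers spend their time waiting on I/O, wall-clock latency tracks the slowest round trip rather than CPU speed, so a faster language barely moves it. Here `asyncio.sleep` stands in for a hypothetical Redis or ScyllaDB call:

```python
import asyncio
import time

async def fake_db_call(i: int) -> int:
    # Stand-in for an I/O round trip (e.g. a Redis or ScyllaDB query).
    await asyncio.sleep(0.1)
    return i

async def run_batch() -> float:
    start = time.perf_counter()
    # 100 concurrent "calls" overlap their waits instead of stacking them.
    await asyncio.gather(*(fake_db_call(i) for i in range(100)))
    return time.perf_counter() - start

elapsed = asyncio.run(run_batch())
print(f"100 concurrent calls took {elapsed:.2f}s")  # close to 0.1s, not 10s
```

Language choice mostly shows up in the CPU and memory spent per request while waiting, which is exactly the efficiency win the article reports.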

Alex Pliutau

As with any rewrite, part of the cost reduction also came from the refactoring that was done along the way.

Tommy
Oct 1 (edited)

Since it was a rewrite, it's not really clear that the CPU reduction was entirely due to switching to Rust. It is extremely suspicious to see 20 cores vs 4.5 cores, around a 4x difference, attributed to a difference in languages, when the gap between them is elsewhere reported as less than 2x in CPU requirements. There's likely some optimisation that would also make the Go code much more performant on 4.5 cores.

At any rate, it is an interesting write-up. Rewriting services can lead to significant improvements: a recent post on LinkedIn described a 14x improvement from rewriting Go code in Go: https://www.linkedin.com/posts/sergei-skoredin_golang-performance-backend-activity-7378744624548438016-4E-6
