Hi, I have a question about SOA. I keep hearing about how it's difficult to support, e.g., 100k writes per second, and that one needs a super-tuned, replicated, and sharded database. But somehow it's a "solved problem" to support 100k writes per second to the network cards of each of these services and to the API gateway? I understand that the services in SOA have multiple instances of themselves. But let's say a client request in JSON format goes through web layer -> API gateway -> presentation service -> middle-tier service -> data service -> DB.

How is this comparably efficient to web layer -> DB? Or rather, is the network latency so negligible that it's OK to add multiple hops?

Some thoughts I have that may explain it:

1) The data is written in the network card buffers but never makes it to disk. It is read from memory in all the services, except for the data tier.

1b) The data is written sequentially into the first available free memory when appended to the receiving server's receive queue. Whereas when writing to a DB, we may need to update several places and also rewrite an entire block on disk if only part of it changes.

2) Most of the network hops are within the same DC, so they can be quite fast.

I would be very happy to receive some more explanation of how the network can be so fast.
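To make point 2 concrete, here is a back-of-envelope sketch of why the extra hops tend to be tolerable. All the numbers below are assumptions for illustration (typical orders of magnitude, not measurements); real values depend heavily on hardware, network, and workload.

```python
# Back-of-envelope latency comparison (illustrative numbers only).

INTRA_DC_RTT_MS = 0.5   # assumed round trip between two services in one DC
HOPS = 5                # web -> gateway -> presentation -> middle-tier -> data -> DB
DISK_FSYNC_MS = 5.0     # assumed cost of one durable write on the DB's disk

# Extra latency contributed purely by the additional network hops.
network_overhead_ms = HOPS * INTRA_DC_RTT_MS

# Direct path: web layer -> DB (one hop plus the durable write).
direct_path_ms = 1 * INTRA_DC_RTT_MS + DISK_FSYNC_MS

# Layered SOA path: all hops plus the same durable write at the end.
layered_path_ms = network_overhead_ms + DISK_FSYNC_MS

print(f"direct path : ~{direct_path_ms:.1f} ms")
print(f"layered path: ~{layered_path_ms:.1f} ms")
```

Under these assumed numbers, the layered path adds only a couple of milliseconds on top of the durable write, and the disk write at the end dominates either way, which is why the DB, not the hops, is usually the bottleneck.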


I did not understand 1b, so I won't comment on that. But the other two assumptions are correct to my knowledge. Especially for 2: inter-service communication is really fast since the services are in the same cluster, and (not sure about Thrift, but) gRPC uses HTTP/2 at the transport layer, which is even more performant.
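One reason HTTP/2 helps with many small inter-service calls is stream multiplexing: multiple requests share one connection concurrently instead of each waiting for its own round trip. A very simplified model (the round-trip time and call count are assumptions, and this ignores TLS setup, server processing, and flow control):

```python
# Simplified model: N small RPCs over one connection (assumed numbers).

RTT_MS = 0.5   # assumed intra-DC round-trip time
CALLS = 10     # number of small requests to the same service

# HTTP/1.1 without pipelining: one request at a time per connection,
# so each call pays a full round trip back to back.
serial_ms = CALLS * RTT_MS

# HTTP/2: the calls are multiplexed as concurrent streams over one
# connection, so (in this idealized model) they overlap in one round trip.
multiplexed_ms = 1 * RTT_MS

print(f"serial      : ~{serial_ms:.1f} ms")
print(f"multiplexed : ~{multiplexed_ms:.1f} ms")
```

In practice HTTP/1.1 clients mitigate this with connection pools, and real HTTP/2 gains are smaller than this idealized gap, but the direction of the effect is the same.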


So I’m paying for this newsletter and I also have to see ads in it? Time to cancel.


love this one!
