In this newsletter, we’ll cover the following topics:

- How live streaming works
- Visa vs. American Express
- Why is the credit card called “the most profitable product in banks”?
- Why is single-threaded Redis fast (video)
- Debugging Tactics

Live streaming explained
Hi. Thanks for these newsletters. I subscribed just a few weeks ago and I’m enjoying them a lot. The short YT videos too!
One question regarding the livestream article: why is the video sent for storage (for replay) after segmentation and via the streaming server? Why can’t it be sent directly after compression and encoding?
Thanks for the write-up. One thing I was curious about is whether “Video compression & encoding” should typically happen on the client side, mostly due to network bottlenecks for raw video data? I guess for audio it could happen on either side, depending on the use case.
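A rough calculation supports the intuition behind this question: raw video is far too heavy for a typical uplink, which is why compression and encoding usually happen on the capture device. The resolution, frame rate, and target bitrate below are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope check: raw vs. encoded video bitrate.
# Assumes 1080p at 30 fps, 24 bits per pixel, and a ~5 Mbps
# H.264 live-stream target -- all illustrative numbers.

WIDTH, HEIGHT, FPS, BITS_PER_PIXEL = 1920, 1080, 30, 24

raw_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL  # uncompressed bitrate
encoded_bps = 5_000_000                          # assumed encoded target

print(f"raw:     {raw_bps / 1e9:.2f} Gbps")      # ~1.49 Gbps
print(f"encoded: {encoded_bps / 1e6:.0f} Mbps")
print(f"ratio:   {raw_bps / encoded_bps:.0f}x")  # ~299x
```

Under these assumptions the raw stream is roughly 300 times larger than the encoded one, so pushing raw frames over the network is rarely practical.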
So "The viewers’ devices decode and decompress the video data and play the video in a video player." What exactly performs the decoding? Let’s say I’m watching the stream in the browser on my PC — does the browser do that?
My favorite one is definitely "take a walk", although it's not always immediately possible. I've also used "switch to someone else and return to the problem later" as a second approach when "take a walk" was not an option.
Would it make sense to segment videos before encoding? That way we could encode all the segments in parallel and speed up the process. Or is that not an option because the encoding is done by the capturing device?
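The split-then-encode idea in this question can be sketched in a few lines. Note the assumptions: `zlib` stands in for a real video codec, and fixed-size byte chunks stand in for time-based segments — neither is how an actual encoder works.

```python
# Sketch: split the source first, then encode each segment in parallel.
# zlib is a stand-in for a real codec; fixed-size chunks are a stand-in
# for time-based segments. Both are simplifying assumptions.
import zlib
from concurrent.futures import ThreadPoolExecutor

SEGMENT_SIZE = 1024  # stand-in for a ~2-second chunk of raw frames

def encode_segment(segment: bytes) -> bytes:
    """Pretend codec: compress one segment independently of the others."""
    return zlib.compress(segment)

raw_video = bytes(range(256)) * 64  # 16 KiB of fake raw footage
segments = [raw_video[i:i + SEGMENT_SIZE]
            for i in range(0, len(raw_video), SEGMENT_SIZE)]

# Each segment is independent, so they can all be encoded concurrently.
with ThreadPoolExecutor() as pool:
    encoded = list(pool.map(encode_segment, segments))

print(len(segments), "segments encoded")
```

For what it’s worth, on-demand transcoding pipelines commonly do exactly this split-then-encode-in-parallel step; for a live source, frames only arrive in real time, so there is little material “ahead” of the encoder to parallelize over.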