Hi. Thanks for these newsletters. Subscribed just a few weeks ago and I am liking them a lot. The short YT videos too!
One question regarding the livestream article: why is the video sent to storage (for replay) after segmentation, and via the streaming server? Why can't it be sent directly after compression and encoding?
Because the video should be replayable at different bitrates and qualities. Remember that we do segmentation for various bitrates? That's why!
Ahhh, so several different files are then sent to storage, I assume?
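Yes, exactly: one set of segment files per rendition (plus the playlists that reference them), and all of them go to storage so replays can use the same adaptive ladder as the live stream. Just to make it concrete, here's a rough TypeScript sketch of that fan-out; the ladder values, stream IDs, and file naming are all made up for illustration, not from the article:

```typescript
// Illustrative sketch only: a hypothetical bitrate ladder and the storage
// keys produced for one segment. Names and values are made up.
interface Rendition {
  name: string;
  width: number;
  height: number;
  bitrateKbps: number;
}

const ladder: Rendition[] = [
  { name: "1080p", width: 1920, height: 1080, bitrateKbps: 5000 },
  { name: "720p", width: 1280, height: 720, bitrateKbps: 2800 },
  { name: "480p", width: 854, height: 480, bitrateKbps: 1200 },
];

// For every segment coming out of the transcoder, one file per rendition
// goes to storage, plus a playlist/manifest that references them all.
function storageKeysForSegment(streamId: string, segmentIndex: number): string[] {
  const index = segmentIndex.toString().padStart(5, "0");
  return ladder.map((r) => `${streamId}/${r.name}/segment_${index}.ts`);
}

console.log(storageKeysForSegment("live-123", 42));
// -> ["live-123/1080p/segment_00042.ts", "live-123/720p/segment_00042.ts", "live-123/480p/segment_00042.ts"]
```

So a replay viewer on a slow connection can still pull the low-bitrate files, same as during the live stream.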
Thanks for the help!
Thanks for the write-up. One thing I was curious about is whether "Video compression & encoding" should typically happen on the client side, mostly due to network bottlenecks for raw video data? I guess for audio it could happen on either side depending on the use case.
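From what I understand, yes: the capture device normally compresses and encodes before anything is uploaded, precisely because raw video would swamp the uplink. For a browser-based broadcaster, a minimal sketch using the standard getUserMedia/MediaRecorder APIs might look like this (the ingest URL and chunk interval are placeholders, and real ingest would more likely use RTMP, SRT, or WebRTC rather than a bare WebSocket):

```typescript
// Sketch: browser-side capture and compression before anything hits the network.
async function startBroadcast(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  // MediaRecorder encodes on the client (e.g. VP8/VP9 video + Opus audio in WebM),
  // so only compressed chunks leave the device, never raw frames.
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8,opus" });
  const socket = new WebSocket("wss://ingest.example.com/live-123"); // placeholder URL

  recorder.ondataavailable = (event: BlobEvent) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data); // already-encoded chunk
    }
  };

  recorder.start(1000); // emit an encoded chunk roughly every second
}
```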
So "The viewers’ devices decode and decompress the video data and play the video in a video player." What exactly does the decoding? Let's say I'm watching the stream from the browser of my PC, does the browser does that?
My favorite one is definitely "take a walk", although it's not always immediately possible. I've also used "switch to someone else and return to the problem later" as a second approach when "take a walk" was not an option.
Would it make sense to segment videos before encoding? That way we could encode all the segments in parallel and speed up the process. Or is it not an option because the encoding is done by the capturing device?
If I'm not wrong, segmenting is done AFTER because it must be much faster to segment a compressed video. Moreover, compressing/encoding individual segments would have its own complexities and/or overheads.
Just taking a guess.
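That matches my guess too, plus a couple of points I'm fairly sure about: for a live stream the raw video arrives in real time, so there's never a backlog of raw segments to encode in parallel anyway; and segment boundaries need to land on keyframes so each segment is independently decodable, and it's the encoder that decides where those keyframes go, which makes segmenting the already-encoded stream the natural order. A toy TypeScript sketch of that idea; the frame shape and target length are invented for illustration:

```typescript
// Toy model: cut an already-encoded stream into segments that each start on a
// keyframe, so every segment can be decoded on its own.
interface EncodedFrame {
  timestampMs: number;
  isKeyframe: boolean;
  data: Uint8Array;
}

function segmentAtKeyframes(frames: EncodedFrame[], targetMs: number): EncodedFrame[][] {
  const segments: EncodedFrame[][] = [];
  let current: EncodedFrame[] = [];

  for (const frame of frames) {
    const elapsed = current.length > 0 ? frame.timestampMs - current[0].timestampMs : 0;

    // Only start a new segment on a keyframe, once the target length is reached.
    if (frame.isKeyframe && current.length > 0 && elapsed >= targetMs) {
      segments.push(current);
      current = [];
    }
    current.push(frame);
  }
  if (current.length > 0) segments.push(current);
  return segments;
}
```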