In this article, we look at why Google built custom silicon and how it works, revealing the physical constraints and engineering trade-offs behind its design.
I had no interest in this topic, but once I started reading I couldn't stop. What a fantastic way to explain a complex concept! Truly one of the best newsletters I subscribe to.
If the evolution of custom AI hardware like Google’s TPU accelerates both the scale and ubiquity of large language models, how might that reshape human behavior? I think Americans can already be very demanding, yet this might normalize expectations of instantaneous, machine-mediated reasoning and decision-making in everyday life. Will people eventually find themselves regularly selecting from several machine-generated conclusions?
JAX seems to be the only native option for running on TPUs, but it’s still less mature than PyTorch. I wish PyTorch would merge XLA in as one of its backends.
Ty for writing this
A hot topic. Thanks for tackling it!