Data scientists should strive to know every last detail about their compute stack
I believe in this philosophy: to do your best work, you need end-to-end control over every last detail.
The following examples illustrate this.
Firstly, the story of Jeff Dean and Sanjay Ghemawat: to optimize Google's software in the company's early days, they went all the way down to the hardware, working at the level of individual bits. I believe we should aspire to do the same if we want to be maximally effective.
Secondly, another tech giant constantly on my radar: Apple makes some of the world's most beloved (and, to some, most hated) computers. One thing cannot be denied: there is a general pattern of excellence in their execution, and one of the causal factors is their relentless control over every last detail of how they build computers. I believe the same holds for our work as data scientists.
Thirdly, I remember digging deep into 64-bit vs. 32-bit computation on GPU vs. CPU to solve a multinomial sampler problem in NumPy, which was blocking the Bayesian neural network implementation I was writing. If I hadn't dug deep, I would have been stuck, unable to complete that implementation, and my PyData NYC 2017 talk on Bayesian deep learning would never have materialized.
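To give a flavor of that failure mode, here is a minimal sketch, with contrived numbers standing in for the original float32 GPU output (the specifics here are assumptions, not my original code): NumPy's multinomial sampler rejects probability vectors whose leading entries sum to more than 1.0, which float32 rounding can easily produce.

```python
import numpy as np

# Contrived stand-in for class probabilities coming out of a float32 (GPU)
# computation: because of rounding, they need not sum to exactly 1.0.
probs = np.array([0.7, 0.3000001, 1e-8], dtype=np.float32)

# np.random.multinomial upcasts pvals to float64 and checks that
# sum(pvals[:-1]) <= 1.0; float32 rounding error can violate that check
# and raise ValueError("sum(pvals[:-1]) > 1.0").
try:
    np.random.multinomial(100, probs)
except ValueError as err:
    print(f"sampler rejected the probabilities: {err}")

# The kind of fix that unblocks the sampler: upcast to 64-bit
# and renormalize before sampling.
probs64 = probs.astype(np.float64)
probs64 /= probs64.sum()
print(np.random.multinomial(100, probs64))
```

Knowing that the 32-bit/64-bit boundary was the culprit is exactly the kind of stack knowledge this section is arguing for.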
The compute stack that we use, from something as low-level as the processor architecture (ARM/x86/x64/RISC) to the web technologies that we touch, affects what we do on a day-to-day basis, and thus what we can deliver. Having the breadth of knowledge to navigate every layer of the stack effectively gives us superpowers to deliver the best work that we can.
The philosophies that ground the bootstrap
Here are the philosophies that ground the bootstrap. Internalizing them will help you understand where we're coming from.