Archive for May 2020

Running large deep learning models on limited hardware or edge devices can be prohibitive. However, there are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.

In this episode I explain one of the first such methods: knowledge distillation.
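The core idea of knowledge distillation is to train a small "student" model to match the softened output distribution of a large "teacher" model, in addition to the usual hard labels. A minimal sketch of the distillation loss (temperature, mixing weight `alpha`, and function names are illustrative choices, not from the episode):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Softened probabilities: higher temperature flattens the distribution
    z = logits / temperature
    z = z - z.max()  # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.7):
    # Soft targets: teacher's temperature-softened distribution
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL divergence between softened distributions, scaled by T^2
    # so its gradient magnitude stays comparable to the hard loss
    soft_loss = np.sum(
        p_teacher * (np.log(p_teacher) - np.log(p_student))
    ) * temperature ** 2
    # Hard loss: ordinary cross-entropy on the ground-truth label
    hard_loss = -np.log(softmax(student_logits)[true_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label cross-entropy remains; during training, the soft targets carry extra information about inter-class similarities that the one-hot labels do not.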

Come join us on Slack



Covid-19 is an emergency. True. Let's just not set up another emergency, this time about privacy violations, for when this one is over.


Join our new Slack channel


This episode is supported by Proton. You can check them out at or

