Data Science at Home
Episodes

Wednesday Aug 12, 2020
Why you care about homomorphic encryption (Ep. 116)
After deep learning, a new entry is about ready to go on stage. The usual journalists are warming up their keyboards for blogs, news feeds and tweets; in one word, hype. This time it's all about privacy and data confidentiality. The new buzzwords: homomorphic encryption.
Join and chat with us on the official Discord channel.
Sponsors
This episode is supported by Amethix Technologies.
Amethix works to create and maximize the impact of the world’s leading corporations, startups, and nonprofits, so they can create a better future for everyone they serve. They are a consulting firm focused on data science, machine learning, and artificial intelligence.
References
Towards a Homomorphic Machine Learning Big Data Pipeline for the Financial Services Sector
IBM Fully Homomorphic Encryption Toolkit for Linux
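For readers who have never seen the homomorphic property in action, below is a toy, intentionally insecure Python sketch: textbook RSA (without padding) happens to be multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. Fully homomorphic schemes such as the one in IBM's toolkit go far beyond this, but the core idea of computing on encrypted data without ever decrypting it is the same.

```python
# Toy, insecure illustration of the homomorphic property.
# Textbook RSA is multiplicatively homomorphic:
#   Enc(m1) * Enc(m2) mod n  decrypts to  (m1 * m2) mod n.
# Do NOT use this for anything real; it only shows the idea of
# computing on ciphertexts without decrypting them.

p, q = 61, 53               # tiny primes, illustration only
n = p * q
phi = (p - 1) * (q - 1)
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 6, 7
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts: the result decrypts to the product of the plaintexts.
c_prod = (c1 * c2) % n
assert decrypt(c_prod) == (m1 * m2) % n
print(decrypt(c_prod))      # 42
```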

Sunday Jul 26, 2020
GPT-3 cannot code (and never will) (Ep. 114)
The hype around GPT-3 is alarming and paints an awful picture of how people misunderstand artificial intelligence. In response to comments claiming that GPT-3 will take developers' jobs, in this episode I express some personal opinions about the state of AI in generating source code (and GPT-3 in particular).
If you have comments about this episode or just want to chat, come join us on the official Discord channel.
This episode is supported by Amethix Technologies.
Amethix works to create and maximize the impact of the world’s leading corporations, startups, and nonprofits, so they can create a better future for everyone they serve. They are a consulting firm focused on data science, machine learning, and artificial intelligence.

Wednesday Jul 22, 2020
Make Stochastic Gradient Descent Fast Again (Ep. 113)
There is definitely room for improvement in the family of stochastic gradient descent algorithms. In this episode I explain a relatively simple method that has been shown to improve on the Adam optimizer. But watch out! This approach does not generalize well.
Join our Discord channel and chat with us.
References
More descent, less gradient
Taylor Series
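To give a flavor of the Taylor-series idea referenced above, here is a minimal numpy sketch that uses a second-order Taylor model along the gradient direction to choose the step size on a toy quadratic loss. This is only an illustration of the underlying idea, not the algorithm from the paper; the loss, its gradient and the Hessian-vector product are hypothetical.

```python
import numpy as np

# Second-order Taylor model along the descent direction d = -grad:
#   f(w + t*d) ≈ f(w) + t * (grad·d) + 0.5 * t^2 * (d·H·d)
# which is minimized at  t* = -(grad·d) / (d·H·d).

A = np.array([[3.0, 0.2],
              [0.2, 1.0]])        # toy positive-definite quadratic

def f(w):
    return 0.5 * w @ A @ w

def grad(w):
    return A @ w

def hvp(w, v, eps=1e-5):
    # Hessian-vector product via finite differences of the gradient.
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

w = np.array([5.0, -3.0])
for _ in range(20):
    g = grad(w)
    d = -g                                    # steepest-descent direction
    t = -(g @ d) / (d @ hvp(w, d) + 1e-12)    # step size from the Taylor model
    w = w + t * d

print(w, f(w))   # converges towards the minimum at the origin
```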
Friday Jul 03, 2020
[RB] It’s cold outside. Let’s speak about AI winter (Ep. 111)
In this episode I speak with Filip Piekniewski about some of the most noteworthy findings in AI and machine learning in 2019. As a matter of fact, the entire field of AI has been inflated by hype and claims that are hard to believe. A lot of the promises made a few years ago have proven quite hard to achieve, if not impossible. Let's stay grounded and realistic about the potential of this amazing field of research, so as not to bring disillusionment in the near future.
Join us on our Discord channel to discuss your favorite episodes and propose new ones.
This episode is brought to you by Protonmail
Click on the link in the description or go to protonmail.com/datascience and get 20% off their annual subscription.

Monday Jun 29, 2020
Rust and machine learning #4: practical tools (Ep. 110)
In this episode I give a non-exhaustive list of machine learning tools and frameworks written in Rust. Not all of them are mature enough for production environments, but I believe community effort can change that very quickly.
To make a comparison with the Python ecosystem, I cover frameworks for linear algebra (numpy), dataframes (pandas), off-the-shelf machine learning (scikit-learn), deep learning (tensorflow) and reinforcement learning (openAI).
Rust is the language of the future. Happy coding!
References
BLAS linear algebra https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms
Rust dataframe https://github.com/nevi-me/rust-dataframe
Rustlearn https://github.com/maciejkula/rustlearn
Rusty machine https://github.com/AtheMathmo/rusty-machine
Tensorflow bindings https://lib.rs/crates/tensorflow
Juice (machine learning for hackers) https://lib.rs/crates/juice
Rust reinforcement learning https://lib.rs/crates/rsrl

Monday Jun 01, 2020
Compressing deep learning models: rewinding (Ep.105)
Continuing from the previous episode, in this one I cover compressing deep learning models and explain another simple yet effective approach that can lead to much smaller models that still perform as well as the original.
Don't forget to join our Slack channel and discuss previous episodes or propose new ones.
This episode is supported by Pryml.io. Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment.
References
Comparing Rewinding and Fine-tuning in Neural Network Pruning https://arxiv.org/abs/2003.02389
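For the curious, here is a minimal PyTorch-style sketch of global magnitude pruning followed by weight rewinding, in the spirit of the paper above. The model, training loop and rewind epoch are placeholders; only the prune-and-rewind logic is shown.

```python
import torch

# Sketch of magnitude pruning + weight rewinding (assumptions: a PyTorch
# model, a checkpoint saved early in training, and an external training loop).

def magnitude_mask(model, sparsity=0.8):
    """Global magnitude pruning: zero out the smallest `sparsity` fraction of weights."""
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_w, sparsity)
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters()}

def rewind_and_mask(model, early_checkpoint, mask):
    """Rewind surviving weights to their early-training values, zero the rest."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.copy_(early_checkpoint[name] * mask[name])

# Usage sketch (build_model, train_one_epoch, rewind_epoch are placeholders):
# model = build_model()
# for epoch in range(total_epochs):
#     train_one_epoch(model, epoch)
#     if epoch == rewind_epoch:
#         early_ckpt = {n: p.detach().clone() for n, p in model.named_parameters()}
# mask = magnitude_mask(model, sparsity=0.8)
# rewind_and_mask(model, early_ckpt, mask)
# # Retrain from rewind_epoch with the ORIGINAL learning-rate schedule,
# # re-applying the mask after every update so pruned weights stay at zero.
```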

Wednesday May 20, 2020
Compressing deep learning models: distillation (Ep.104)
Running large deep learning models on limited hardware or edge devices is often prohibitive. There are, however, methods to compress large models by orders of magnitude while maintaining similar accuracy during inference.
In this episode I explain one of the first such methods: knowledge distillation.
Come join us on Slack
References
Distilling the Knowledge in a Neural Network https://arxiv.org/abs/1503.02531
Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks https://arxiv.org/abs/2004.05937
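As a quick illustration, here is a minimal PyTorch-style sketch of the distillation loss described by Hinton et al. in the first reference; the teacher and student models, the data, and the hyperparameters (temperature, alpha) are placeholders, not recommendations.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    # Soft targets: match the teacher's softened output distribution.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean")
    # Hard targets: standard cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return alpha * (temperature ** 2) * soft_loss + (1 - alpha) * hard_loss

# Usage sketch (teacher, student, x, y are placeholders):
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
# loss.backward()
```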

Wednesday Apr 01, 2020
Activate deep learning neurons faster with Dynamic RELU (ep. 101)
In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU). While there are several flavors of ReLU in the literature, in this episode I speak about a very interesting approach that keeps computational complexity low while improving performance quite consistently.
This episode is supported by pryml.io. At pryml we let companies share confidential data. Visit our website.
Don't forget to join us on the Discord channel to propose new episodes or discuss previous ones.
References
Dynamic ReLU https://arxiv.org/abs/2003.10027
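For intuition, here is a simplified PyTorch-style sketch of the Dynamic ReLU idea, roughly the spatially-shared, per-channel variant: a small hyper-network looks at a globally pooled summary of the input and produces the slopes and intercepts of K linear pieces, and the activation takes their maximum. The scaling constants and initialization below are simplifications, not the paper's tuned values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicReLU(nn.Module):
    """Simplified Dynamic ReLU: y_c(x) = max_k ( a_{k,c}(x) * x_c + b_{k,c}(x) )."""

    def __init__(self, channels, k=2, reduction=4):
        super().__init__()
        self.channels, self.k = channels, k
        # Hyper-network: global context -> 2*k coefficients per channel.
        self.hyper = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * k * channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                   # x: (N, C, H, W)
        n, c, _, _ = x.shape
        context = F.adaptive_avg_pool2d(x, 1).view(n, c)    # squeeze
        coeffs = 2.0 * self.hyper(context) - 1.0            # residuals in [-1, 1]
        coeffs = coeffs.view(n, c, 2 * self.k, 1, 1)
        # Center around a_1 = 1 and everything else 0, so an untrained
        # layer behaves roughly like max(x, 0), i.e. a standard ReLU.
        base = torch.zeros(self.k, device=x.device)
        base[0] = 1.0
        a = coeffs[:, :, : self.k] + base.view(1, 1, self.k, 1, 1)
        b = 0.5 * coeffs[:, :, self.k :]
        out = a * x.unsqueeze(2) + b                        # (N, C, K, H, W)
        return out.max(dim=2).values

# Usage sketch:
# act = DynamicReLU(64)
# y = act(torch.randn(8, 64, 32, 32))
```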

Monday Mar 23, 2020
WARNING!! Neural networks can memorize secrets (ep. 100)
One of the best features of neural networks and machine learning models is their ability to learn patterns from training data and apply them to unseen observations. That's where the magic is. However, there are scenarios in which the same models learn patterns so well that they can disclose some of the data they have been trained on. This phenomenon goes under the name of unintended memorization, and it is extremely dangerous.
Think about a language generator that discloses passwords, credit card numbers and social security numbers from the records it has been trained on. Or, more generally, think about a synthetic data generator that discloses the very training data it is trying to protect.
In this episode I explain why unintended memorization is a real problem in machine learning. Apart from differentially private training, there is no other way to mitigate such a problem under realistic conditions. At Pryml we are very aware of this, which is why we have been developing a synthetic data generation technology that is not affected by this issue.
This episode is supported by Harmonizely. Harmonizely lets you build your own unique scheduling page based on your availability, so you can start scheduling meetings in just a couple of minutes. Get started by connecting your online calendar and configuring your meeting preferences. Then, start sharing your scheduling page with your invitees!
References
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks https://www.usenix.org/conference/usenixsecurity19/presentation/carlini
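To make the measurement concrete, here is a minimal Python sketch of the exposure metric used in the paper above: insert a randomly generated canary (e.g. a fake string like "my SSN is 123-45-6789") into the training data, train the model, then rank the true canary's log-perplexity against many alternative canaries of the same format. The canary format and the scoring function are placeholders.

```python
import math

def exposure(true_canary, candidate_canaries, model_log_perplexity):
    """Estimate the exposure of a canary (higher = more memorization).

    model_log_perplexity is a placeholder for your own language-model
    scoring function (lower log-perplexity = more likely under the model).
    """
    true_score = model_log_perplexity(true_canary)
    # Rank of the true canary among all candidates (1 = most likely).
    rank = 1 + sum(1 for c in candidate_canaries
                   if model_log_perplexity(c) < true_score)
    # exposure ≈ log2(#candidates) - log2(rank)
    return math.log2(len(candidate_canaries)) - math.log2(rank)
```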

Friday Feb 07, 2020
Why so much silence? Building a company! That's why :) I am building pryml, a platform that allows data scientists to build their applications on data they cannot access directly. This is the first of a series of episodes in which I speak about the technology and the challenges we are facing while we build it.
Happy listening and stay tuned!