About this Show
Data Science at Home is a podcast about machine learning, artificial intelligence and algorithms.
The show is hosted by Dr. Francesco Gadaleta, featuring solo episodes and interviews with some of the most influential figures in the field.
Technology, AI, machine learning and algorithms. Come join the discussion on Discord! https://discord.gg/4UNKGf3
Monday Jun 15, 2020
In this episode I have a chat with Sandeep Pandya, CEO at Everguard.ai, a company that uses sensor fusion, computer vision and more to provide safer working environments for workers in heavy industry. Sandeep is a senior executive who can hide the complexity of the topic with great talent.
This episode is supported by Pryml.io. Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment. Test ideas. Launch new products. Fast. Secure.
Monday Jun 01, 2020
As a continuation of the previous episode, in this one I cover the topic of compressing deep learning models and explain another simple yet fantastic approach that can lead to much smaller models that still perform as well as the original.
Don't forget to join our Slack channel and discuss previous episodes or propose new ones.
This episode is supported by Pryml.io. Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment.
References
Comparing Rewinding and Fine-tuning in Neural Network Pruning https://arxiv.org/abs/2003.02389
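For the curious, below is a minimal PyTorch sketch of magnitude pruning with weight rewinding, in the spirit of the paper above. The toy model, the 80% sparsity level and the rewind point are illustrative choices of mine, not values from the episode.

```python
import torch
import torch.nn as nn

# A tiny model for illustration; the technique applies to any network.
model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))

# 1. Snapshot the weights early in training (the "rewind point").
rewind_state = {k: v.clone() for k, v in model.state_dict().items()}

# ... train the model to convergence here ...

# 2. Global magnitude pruning: keep only the largest 20% of weights.
weights = {n: p for n, p in model.named_parameters() if "weight" in n}
threshold = torch.quantile(
    torch.cat([w.abs().flatten() for w in weights.values()]), 0.8
)
masks = {n: (w.abs() > threshold).float() for n, w in weights.items()}

# 3. Rewind the surviving weights to their early-training values instead
#    of fine-tuning the converged ones, then retrain with masks applied.
with torch.no_grad():
    for n, p in model.named_parameters():
        p.copy_(rewind_state[n])
        if n in masks:
            p.mul_(masks[n])

# ... retrain; after each optimizer step, re-apply `p.mul_(masks[n])`
#     so pruned connections stay at zero ...
```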
Wednesday May 20, 2020
Using large deep learning models on limited hardware or edge devices is definitely prohibitive. There are methods to compress large models by orders of magnitude and maintain similar accuracy during inference.
In this episode I explain one of the first methods: knowledge distillation.
Come join us on Slack
References
Distilling the Knowledge in a Neural Network https://arxiv.org/abs/1503.02531
Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks https://arxiv.org/abs/2004.05937
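For reference, here is a minimal PyTorch sketch of the distillation loss from the Hinton et al. paper above; the temperature and mixing weight are common illustrative defaults, not values from the episode.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation: match the teacher's softened output
    distribution, plus the usual cross-entropy on the true labels."""
    # Soft targets: KL divergence between temperature-softened distributions.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage inside a training loop (teacher frozen, student being trained):
# with torch.no_grad():
#     teacher_logits = teacher(x)
# loss = distillation_loss(student(x), teacher_logits, y)
```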
Friday May 08, 2020
Covid-19 is an emergency. True. Let's just not pave the way for another emergency, this time about privacy violation, when this one is over.
Join our new Slack channel
This episode is supported by Proton. You can check them out at protonmail.com or protonvpn.com
Sunday Apr 19, 2020
Whenever people reason about the probability of events, they tend to consider average values between two extremes. In this episode I explain, with a numerical example, why such a way of approximating is wrong and dangerous.
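To give a flavour of the problem, here is a toy numerical example of my own (not necessarily the one discussed in the episode). Because outcomes are usually nonlinear in the underlying probability, evaluating at the average probability is very different from averaging the outcomes:

```python
# The per-event risk p is either low (0.001) or high (0.1) with equal chance.
# The survival probability (1 - p)**n is nonlinear in p, so plugging in the
# average p is not the same as averaging the two possible outcomes.
n = 100  # number of independent exposures to the risk

p_low, p_high = 0.001, 0.1
p_avg = (p_low + p_high) / 2  # 0.0505

survival_at_avg = (1 - p_avg) ** n                          # naive
avg_survival = ((1 - p_low) ** n + (1 - p_high) ** n) / 2   # correct

print(f"survival evaluated at the average p: {survival_at_avg:.4f}")  # ~0.0056
print(f"average of the two survivals:        {avg_survival:.4f}")     # ~0.4522
```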
We are moving our community to Slack. See you there!
Wednesday Apr 01, 2020
In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU). While there are several flavors of ReLU in the literature, in this episode I speak about a very interesting approach that keeps computational complexity low while improving performance quite consistently.
This episode is supported by pryml.io. At pryml we let companies share confidential data. Visit our website.
Don't forget to join our Discord channel to propose new episodes or discuss previous ones.
References
Dynamic ReLU https://arxiv.org/abs/2003.10027
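For readers who want to see the idea in code, here is a minimal PyTorch sketch of the spatially- and channel-shared variant (DY-ReLU-A) from the paper above. The tanh parameterization of the coefficient residuals and the hyper-network sizes are simplifications on my part.

```python
import torch
import torch.nn as nn

class DyReLUA(nn.Module):
    """DY-ReLU-A sketch: y = max_k(a_k(x) * x + b_k(x)), where the 2K
    coefficients come from a tiny hyper-network conditioned on the
    globally pooled input, shared across channels and positions."""

    def __init__(self, channels, k=2, reduction=8):
        super().__init__()
        self.k = k
        self.hyper = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * k),
        )
        # Initial coefficients chosen so the module starts close to plain ReLU.
        self.register_buffer("init_a", torch.tensor([1.0] + [0.0] * (k - 1)))
        self.register_buffer("init_b", torch.zeros(k))

    def forward(self, x):                        # x: (batch, channels, H, W)
        context = x.mean(dim=(2, 3))             # global average pooling
        theta = torch.tanh(self.hyper(context))  # coefficient residuals
        a = (self.init_a + theta[:, : self.k]).view(-1, 1, self.k, 1, 1)
        b = (self.init_b + theta[:, self.k :]).view(-1, 1, self.k, 1, 1)
        # Max over the K input-dependent linear pieces.
        return (x.unsqueeze(2) * a + b).max(dim=2).values

# Drop-in replacement for nn.ReLU after a conv layer:
# act = DyReLUA(channels=64)
# y = act(torch.randn(8, 64, 32, 32))
```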
Monday Mar 23, 2020
One of the best features of neural networks and machine learning models is their ability to memorize patterns from training data and apply them to unseen observations. That's where the magic is. However, there are scenarios in which the same machine learning models learn patterns so well that they can disclose some of the data they have been trained on. This phenomenon goes under the name of unintended memorization, and it is extremely dangerous.
Think about a language generator that discloses the passwords, credit card numbers and social security numbers of the records it has been trained on. Or, more generally, think about a synthetic data generator that can disclose the training data it is trying to protect.
In this episode I explain why unintended memorization is a real problem in machine learning. Apart from differentially private training, there is no other way to mitigate such a problem in realistic conditions. At Pryml we are very aware of this, which is why we have been developing a synthetic data generation technology that is not affected by such an issue.
This episode is supported by Harmonizely. Harmonizely lets you build your own unique scheduling page based on your availability, so you can start scheduling meetings in just a couple of minutes. Get started by connecting your online calendar and configuring your meeting preferences. Then, start sharing your scheduling page with your invitees!
References
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks https://www.usenix.org/conference/usenixsecurity19/presentation/carlini
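As a rough illustration, here is a sketch of the exposure metric from the Carlini et al. paper above, which quantifies how much a model prefers an inserted "canary" secret over random alternatives. The canary, losses and numbers below are made up for the example.

```python
import math

def exposure(canary_loss, candidate_losses):
    """Exposure metric (sketch): rank the inserted canary's loss among
    the losses of random candidate secrets. A rank of 1 means the model
    prefers the canary to every alternative, i.e. it memorized it."""
    rank = 1 + sum(1 for loss in candidate_losses if loss < canary_loss)
    return math.log2(len(candidate_losses)) - math.log2(rank)

# Hypothetical numbers: the canary "my PIN is 7412" gets a much lower
# model loss than 9,999 random PINs, so its rank is 1 and exposure is high.
losses_of_random_pins = [5.0 + 0.01 * i for i in range(9_999)]
print(exposure(canary_loss=1.2, candidate_losses=losses_of_random_pins))
# -> log2(9999) - log2(1) ≈ 13.3 bits: strong evidence of memorization
```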
Saturday Mar 14, 2020
In this episode I explain a very effective technique that allows one to infer whether any record at hand belongs to the (private) training dataset used to train the target model. The technique is effective because it works on black-box models, with no access to the training data, the model parameters or the hyperparameters. Such a scenario is very realistic and typical of machine-learning-as-a-service APIs.
This episode is supported by pryml.io, a platform I am personally working on that enables data sharing without giving up confidentiality.
As promised, below is the schema of the attack explained in the episode.
References
Membership Inference Attacks Against Machine Learning Models https://arxiv.org/abs/1610.05820
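To make the schema concrete, here is a heavily simplified sketch of the shadow-model attack from the paper above. The synthetic data, the model choices and the single attack model (the paper trains one per class) are my simplifications.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train shadow models on data we control, then train an "attack model" to
# tell members from non-members using only output probability vectors:
# exactly what a black-box ML-as-a-service API exposes.

rng = np.random.default_rng(0)

def confidence_vectors(model, X):
    """Sorted class probabilities: the attack model's input features."""
    return np.sort(model.predict_proba(X), axis=1)[:, ::-1]

attack_X, attack_y = [], []
for _ in range(5):  # a handful of shadow models
    X = rng.normal(size=(400, 10))
    y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(int)
    X_in, y_in, X_out = X[:200], y[:200], X[200:]
    shadow = RandomForestClassifier(n_estimators=50).fit(X_in, y_in)
    attack_X.append(confidence_vectors(shadow, X_in))   # members -> label 1
    attack_X.append(confidence_vectors(shadow, X_out))  # non-members -> 0
    attack_y += [1] * 200 + [0] * 200

attack_model = RandomForestClassifier(n_estimators=100)
attack_model.fit(np.vstack(attack_X), attack_y)
# attack_model.predict(confidence_vectors(target_api, record)) now guesses
# whether `record` was in the target's private training set.
```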
Sunday Mar 08, 2020
Masking, obfuscating, stripping, shuffling. All of the above techniques try to do one simple thing: keep the data private while sharing it with third parties. Unfortunately, they are not a silver bullet for confidentiality. All the players in the synthetic data space rely on simplistic techniques that are not secure, might not be compliant, and are risky for production. At Pryml we do things differently.
Sunday Mar 01, 2020
There are very good reasons why a financial institution should never share their data. Actually, they should never even move their data. Ever. In this episode I explain why.
Data Science at Home is among the top 10 data science podcasts on Apple Podcasts, Spotify, Stitcher, Podbean and many more aggregators.
We reach our audience on a weekly basis via 30-minute episodes enriched with blog posts and show notes. Our episodes reach a highly targeted, globally distributed audience across a wide range of demographics.
Data Science at Home currently accepts at most two advertising slots per episode. The scheduled episode for your advertising campaign will be defined by our team, depending on the topic and the current advertising queue.
Our team is available to give you recommendations about your application and to discuss rates. Please send a direct email to media@amethix.com to make first contact. After connecting, we will share the best available date for you to proceed with the onboarding.
We promote services and products related to IT, Internet services, Research, Data Science, Machine learning, Fintech and Banking, Healthcare, Energy, etc. Below are some of the most recent statistics of the show.
Contact us and let’s talk about how we can help get your message to the audience of Data Science at Home podcast.