Data Science at Home
Episodes
Monday Oct 30, 2017
Episode 27: Techstars accelerator and the culture of fireflies
In the aftermath of my experience at the Barclays Accelerator, powered by Techstars, one of the most innovative and influential startup accelerators in the world, I'd like to give back to the community the lessons I learned, including the need for confidence, soft skills, and efficiency, as they apply to startups that deal with artificial intelligence and data science. In this episode I also share some thoughts about the culture of fireflies in modern and dynamic organisations.
Monday Oct 23, 2017
Episode 26: Deep Learning and Alzheimer's
In this episode I speak about Deep Learning technology applied to Alzheimer's disease prediction. I had a great chat with Saman Sarraf, machine learning engineer at Konica Minolta, former lab manager at the Rotman Research Institute at Baycrest, University of Toronto, and author of DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI.
I hope you enjoy the show.
Monday Oct 16, 2017
Episode 25: How to become a data scientist [RB]
In this episode I speak about the requirements and the skills needed to become a data scientist and join an amazing community that is changing the world with data analytics.
Monday Oct 09, 2017
Episode 24: How to handle imbalanced datasets
In machine learning, and in data science in general, it is very common to deal at some point with imbalanced datasets and class distributions. This is the typical case in which the number of observations belonging to one class is significantly lower than the number belonging to the other classes. It happens all the time, in several domains, from finance to healthcare to social media, just to name a few I have personally worked with. Think about a bank detecting fraudulent transactions among millions or billions of daily operations, or, equivalently, the identification of rare disorders in healthcare. In genetics, as well as with clinical lab tests, this is a normal scenario: fortunately there are very few patients affected by a disorder, and therefore very few cases with respect to the large pool of healthy (or unaffected) patients. No algorithm takes the class distribution or the number of observations in each class into account unless it is explicitly designed to handle such situations.
In this episode I speak about some effective techniques for handling imbalanced datasets, advising the most appropriate method for each dataset or problem.
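Two of the most common techniques for this problem can be sketched in a few lines, assuming scikit-learn: cost-sensitive class weighting and random oversampling of the minority class. The dataset and model below are illustrative choices, not the episode's exact example.

```python
# Sketch: two common ways to handle class imbalance (assuming scikit-learn).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# A synthetic dataset where only ~5% of samples belong to the positive class.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) Cost-sensitive learning: weight classes inversely to their frequency,
#    so errors on the rare class cost more during training.
weighted = LogisticRegression(class_weight="balanced", max_iter=1000)
weighted.fit(X_tr, y_tr)

# 2) Random oversampling: duplicate minority samples until classes balance.
minority = np.where(y_tr == 1)[0]
n_extra = (y_tr == 0).sum() - len(minority)
extra = np.random.default_rng(0).choice(minority, size=n_extra)
X_over = np.vstack([X_tr, X_tr[extra]])
y_over = np.concatenate([y_tr, y_tr[extra]])
oversampled = LogisticRegression(max_iter=1000).fit(X_over, y_over)

# F1 is a more honest metric than accuracy here: always predicting the
# majority class would already score ~95% accuracy.
print("weighted F1:   ", f1_score(y_te, weighted.predict(X_te)))
print("oversampled F1:", f1_score(y_te, oversampled.predict(X_te)))
```

Note the evaluation choice: on a 95/5 split, plain accuracy is misleading, which is why the sketch reports F1 on the minority class instead.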
Tuesday Oct 03, 2017
Episode 23: Why do ensemble methods work?
Ensemble methods have been designed to improve the performance of a single model when that model is not very accurate. According to the general definition, ensembling consists of building a number of individual classifiers and then combining or aggregating their predictions into one classifier that is usually stronger than any single one.
The key idea behind ensembling is that some models do well at modelling certain aspects of the data, while others do well at modelling other aspects. In this episode I show with a numeric example why and when ensemble methods work.
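The classic numeric argument for majority voting can be sketched like this (a minimal illustration under the assumption of three independent classifiers, not necessarily the episode's exact example):

```python
# Sketch of the majority-vote argument behind ensembling, assuming three
# classifiers that are each independently correct 70% of the time.
p = 0.7

# The majority vote is correct when at least 2 of the 3 classifiers are:
# P(exactly 2 correct) + P(all 3 correct).
majority = 3 * p**2 * (1 - p) + p**3
print(round(majority, 3))  # 0.784: better than any single 0.7 classifier

# A quick Monte Carlo check of the same quantity.
import random

random.seed(0)
trials = 100_000
hits = sum(
    sum(random.random() < p for _ in range(3)) >= 2 for _ in range(trials)
)
print(hits / trials)  # should land close to 0.784
```

The independence assumption is doing the real work here: if the three classifiers make the same mistakes, voting gains nothing, which is why ensembles combine models that capture different aspects of the data.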