About this Show
Data Science at Home is a podcast about machine learning, artificial intelligence and algorithms.
The show is hosted by Dr. Francesco Gadaleta, featuring solo episodes and interviews with some of the most influential figures in the field.
Technology, AI, machine learning and algorithms. Come join the discussion on Discord! https://discord.gg/4UNKGf3
Saturday Nov 04, 2017
The enthusiasm for artificial intelligence is raising some concerns, especially with respect to some ventured conclusions about what AI can really do and what its direct descendant, artificial general intelligence, would be capable of doing in the immediate future. From stealing jobs to exterminating the entire human race, the creativity (of some) seems to have no limits. In this episode I make sure that everyone comes back to reality, which might sound less exciting than Hollywood but is definitely more... real.
Monday Oct 30, 2017
In the aftermath of my experience at the Barclays Accelerator, powered by Techstars, one of the most innovative and influential startup accelerators in the world, I’d like to give back to the community the lessons learned, including the need for confidence, soft skills, and efficiency, as applied to startups that deal with artificial intelligence and data science. In this episode I also share some thoughts about the culture of fireflies in modern and dynamic organisations.
Monday Oct 23, 2017
In this episode I speak about deep learning applied to the prediction of Alzheimer's disease. I had a great chat with Saman Sarraf, machine learning engineer at Konica Minolta, former lab manager at the Rotman Research Institute at Baycrest, University of Toronto, and author of DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI.
I hope you enjoy the show.
Monday Oct 16, 2017
In this episode, I speak about the requirements and the skills needed to become a data scientist and join an amazing community that is changing the world with data analytics.
Monday Oct 09, 2017
In machine learning, and in data science in general, it is very common to deal at some point with imbalanced datasets and class distributions. This is the typical case where the number of observations belonging to one class is significantly lower than the number belonging to the other classes. It happens all the time, in several domains, from finance to healthcare to social media, just to name a few I have personally worked with. Think about a bank detecting fraudulent transactions among millions or billions of daily operations, or, equivalently in healthcare, the identification of rare disorders. In genetics, and also with clinical lab tests, this is the normal scenario: fortunately very few patients are affected by a disorder, so there are very few cases with respect to the large pool of healthy (unaffected) patients. No algorithm takes the class distribution or the number of observations per class into account unless it is explicitly designed to handle such situations.
In this episode I speak about some effective techniques for handling imbalanced datasets, and explain how to match the most appropriate method to the dataset or problem at hand.
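To give a rough flavour of the kind of remedies discussed, here is a minimal Python sketch of two common ones: randomly oversampling the minority class, and reweighting the loss by class frequency. The synthetic data and model choices are my own illustration, not code from the episode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Toy imbalanced dataset: 990 "healthy" vs 10 "disorder" observations.
X_major = rng.normal(0.0, 1.0, size=(990, 5))
X_minor = rng.normal(1.5, 1.0, size=(10, 5))
X = np.vstack([X_major, X_minor])
y = np.array([0] * 990 + [1] * 10)

# Remedy 1: randomly oversample the minority class up to the majority size.
X_minor_up = resample(X_minor, replace=True, n_samples=990, random_state=0)
X_bal = np.vstack([X_major, X_minor_up])
y_bal = np.array([0] * 990 + [1] * 990)

# Remedy 2: keep the data as-is, but weight the loss inversely
# proportionally to the class frequencies.
clf = LogisticRegression(class_weight='balanced').fit(X, y)
```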
Tuesday Oct 03, 2017
Ensemble methods have been designed to improve the performance of a single model when that model is not very accurate. In its general form, ensembling consists of building a number of individual classifiers and then combining, or aggregating, their predictions into one classifier that is usually stronger than any single one.
The key idea behind ensembling is that some models do well at capturing certain aspects of the data, while others do well at capturing other aspects. In this episode I show, with a numeric example, why and when ensemble methods work.
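To make the idea concrete, here is a minimal scikit-learn sketch of the simplest form of ensembling: a majority vote over three different classifiers. The models and dataset are illustrative choices of mine, not the numeric example from the episode.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Three individual classifiers, each likely to capture different
# aspects of the data...
models = [('lr', LogisticRegression(max_iter=1000)),
          ('dt', DecisionTreeClassifier(max_depth=3, random_state=0)),
          ('nb', GaussianNB())]
# ...aggregated by majority vote into a single, usually stronger, classifier.
ensemble = VotingClassifier(estimators=models, voting='hard')

for name, model in models + [('vote', ensemble)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```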
Monday Sep 25, 2017
Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to consider, and have therefore left for a full episode: parallelising and distributing deep learning on relatively large clusters.
As a matter of fact, computing architectures are changing in a way that encourages parallelism more than ever before. Deep learning is no exception, and despite the great improvements brought by commodity GPUs (graphics processing units), when it comes to speed there is still room for improvement.
Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the function optimizer is the horsepower of deep learning and neural networks in general. A slow and inaccurate optimisation method leads to networks that slowly converge to unreliable results.
In another episode, titled “Additional strategies for optimizing deep learning”, I explained some ways to improve function minimisation and model tuning in order to get better parameters in less time. So feel free to listen to these episodes again, share them with your friends, even re-broadcast or download them for your commute.
While the methods that I have explained so far represent a good starting point for prototyping a network, when you need to switch to production environments or take advantage of the most recent and advanced hardware capabilities of your GPU, well... in all those cases, you would like to do something more.
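As a toy illustration of the core idea, and not the episode's actual setup, the following single-machine Python sketch simulates synchronous data parallelism: each "worker" computes the gradient on its own shard of the minibatch, and the gradients are averaged before one shared update, exactly what an all-reduce step does on a real cluster.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=256)

w = np.zeros(10)          # shared model parameters
n_workers, lr = 4, 0.1

for step in range(100):
    local_grads = []
    for X_shard, y_shard in zip(np.array_split(X, n_workers),
                                np.array_split(y, n_workers)):
        # Each worker computes the gradient of the mean squared error
        # on its own shard of the minibatch.
        residual = X_shard @ w - y_shard
        local_grads.append(2 * X_shard.T @ residual / len(y_shard))
    # "All-reduce": average the workers' gradients, apply one shared update.
    w -= lr * np.mean(local_grads, axis=0)
```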
Monday Sep 18, 2017
In the last episode, How to master optimisation in deep learning, I explained some of the most challenging tasks of deep learning, along with methodologies and algorithms to improve the speed of convergence of a minimisation method for deep learning. I explored the family of gradient descent methods, though not exhaustively, giving a list of approaches that deep learning researchers are considering for different scenarios. Every method has its own benefits and drawbacks, depending largely on the type of data and its sparsity. But there is one method that seems to be, at least empirically, the best approach so far.
Feel free to listen to the previous episode, share it, re-broadcast or just download for your commute.
In this episode I would like to continue that conversation with some additional strategies for optimising gradient descent in deep learning, and introduce you to a few tricks that can come in handy when your neural network stops learning from data, or when the learning process becomes so slow that it seems to have reached a plateau even when fed fresh data.
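To give a flavour of one such trick, here is a minimal sketch of mine, not code from the episode: halving the learning rate whenever the loss stops improving for a few consecutive steps (a "reduce on plateau" schedule), demonstrated on a toy quadratic loss standing in for a network's loss.

```python
import numpy as np

def loss(w):                # hypothetical stand-in for a network's loss
    return float(np.sum(w ** 2))

def grad(w):
    return 2 * w

w = np.array([5.0, -3.0])
lr, best, patience, stall = 0.4, float('inf'), 5, 0

for step in range(100):
    w -= lr * grad(w)
    current = loss(w)
    if current < best - 1e-6:   # meaningful improvement: reset the counter
        best, stall = current, 0
    else:
        stall += 1
        if stall >= patience:   # plateau detected: halve the learning rate
            lr *= 0.5
            stall = 0
```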
Monday Aug 28, 2017
The secret behind deep learning is not really a secret: it is function optimisation. What a neural network essentially does is optimise a function. In this episode I illustrate a number of optimisation methods and explain which one is the best and why.
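As a minimal illustration of that idea (a toy sketch of mine, not the episode's material): plain gradient descent repeatedly moves the parameters against the gradient, downhill, to minimise a simple stand-in loss.

```python
import numpy as np

def loss(w):                      # a hypothetical, easily-minimised loss
    return (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2

def grad(w):                      # its gradient, known in closed form here
    return np.array([2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)])

w = np.zeros(2)                   # initial parameters
lr = 0.1                          # learning rate (step size)

for _ in range(100):
    w -= lr * grad(w)             # step against the gradient

print(w)                          # converges towards the minimiser (3, -1)
```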
Wednesday Aug 09, 2017
Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields such as image recognition and speech processing. More recently, neural network models have also started to be applied to textual data in order to deal with natural language, there too with promising results. In this episode I explain why deep learning performs the way it does, and what some of the most tedious causes of failure are.
Data Science at Home is among the top 10 data science podcasts on Apple Podcasts, Spotify, Stitcher, Podbean and many other aggregators.
We reach our audience on a weekly basis via 30-minute episodes enriched with blog posts and show notes. Our episodes reach a highly targeted, globally distributed audience across a wide range of demographics.
Data Science at Home currently accepts at most two advertising slots per episode. The episode scheduled for your advertising campaign will be determined by our team, depending on the topic and the current advertising queue.
Our team is available to give you recommendations about your application and to discuss rates. Please send a direct email to media@amethix.com to make first contact. After connecting, we will share the best available date for you to proceed with the onboarding.
We promote services and products related to IT, Internet services, Research, Data Science, Machine Learning, Fintech and Banking, Healthcare, Energy, and more.
Contact us and let’s talk about how we can help get your message to the audience of Data Science at Home podcast.