Data Science at Home
Episodes

Friday Jun 22, 2018
Episode 34: Get ready for AI winter
Today I am having a conversation with Filip Piękniewski, a researcher working on computer vision and AI at Koh Young Research America. His adventure with AI started in the 90s, and since then a long list of experiences at the intersection of computer science and physics has led him to the conclusion that deep learning might be neither sufficient nor appropriate to solve the problem of intelligence, specifically artificial intelligence. I read some of his publications and got familiar with some of his ideas. Honestly, I have been attracted by the fact that Filip does not buy the hype around AI, and deep learning in particular. He does not seem to share the vision of folks like Elon Musk, who claimed that we are going to see an exponential improvement in self-driving cars, among other things (he actually said that before a Tesla drove over a pedestrian).

Monday Jun 11, 2018
Episode 33: Decentralized Machine Learning and the proof-of-train
In the attempt to democratize machine learning, data scientists should be able to train their models on data they do not necessarily own, nor even see. A model that is privately trained should be verified and uniquely identified across its entire life cycle, from its random initialization to the optimal values of its parameters. How does blockchain allow all this? Fitchain is the decentralized machine learning platform that gives models an identity and a certification of their training procedure, the proof-of-train.
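As a rough illustration of what giving a model a verifiable identity could look like, here is a minimal sketch that simply chains SHA-256 fingerprints of the model parameters from random initialization through training. This is an assumption for illustration only, not the actual fitchain protocol.

```python
# A minimal sketch (not the fitchain protocol itself) of a model identity:
# hash the parameters at each stage and chain the hashes, so the training
# history can later be verified like blocks in a ledger.
import hashlib
import numpy as np

def fingerprint(params, previous_hash=""):
    """Hash model parameters together with the previous fingerprint."""
    h = hashlib.sha256(previous_hash.encode())
    for p in params:
        h.update(p.tobytes())
    return h.hexdigest()

# Random initialization gets the first fingerprint...
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 4)), rng.normal(size=(4,))]
genesis = fingerprint(weights)

# ...and every update extends the chain (here a fake training step).
weights = [w - 0.01 * rng.normal(size=w.shape) for w in weights]
after_training = fingerprint(weights, previous_hash=genesis)
print(genesis[:16], "->", after_training[:16])
```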

Tuesday Nov 21, 2017
Episode 30: Neural networks and genetic evolution: an unfeasible approach
Despite what researchers claim about genetic evolution, in this episode we give a more realistic view of the field.

Saturday Nov 04, 2017
Episode 28: Towards Artificial General Intelligence: preliminary talk
The enthusiasm for artificial intelligence is raising some concerns, especially with respect to some ventured conclusions about what AI can really do and what its direct descendant, artificial general intelligence, would be capable of doing in the immediate future. From stealing jobs to exterminating the entire human race, the creativity (of some) seems to have no limits. In this episode I make sure that everyone comes back to reality - which might sound less exciting than Hollywood, but definitely more... real.

Monday Oct 23, 2017
Episode 26: Deep Learning and Alzheimer
In this episode I speak about Deep Learning technology applied to Alzheimer's disease prediction. I had a great chat with Saman Sarraf, machine learning engineer at Konica Minolta, former lab manager at the Rotman Research Institute at Baycrest, University of Toronto, and author of DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI.
I hope you enjoy the show.

Monday Sep 25, 2017
Episode 22: Parallelising and distributing Deep Learning
Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to cover and have therefore left for a full episode: parallelising and distributing deep learning on relatively large clusters.
As a matter of fact, computing architectures are changing in a way that encourages parallelism more than ever before. Deep learning is no exception: despite the great improvements brought by commodity GPUs (graphics processing units), when it comes to speed there is still room for improvement.
Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the function optimizer is the horsepower of deep learning and neural networks in general. A slow and inaccurate optimisation method leads to networks that slowly converge to unreliable results.
In another episode, titled “Additional strategies for optimizing deep learning”, I explained some ways to improve function minimisation and model tuning in order to get better parameters in less time. So feel free to listen to these episodes again, share them with your friends, and even re-broadcast or download them for your commute.
While the methods that I have explained so far represent a good starting point for prototyping a network, when you need to switch to production environments or take advantage of the most recent and advanced hardware capabilities of your GPU... well, in all those cases, you will want to do something more.
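To give a feel for the idea discussed in this episode, here is a minimal sketch of synchronous data parallelism on a toy linear model: each "worker" computes the gradient on its own shard of the batch, and the gradients are averaged before a single shared update. The model and data are placeholders; real frameworks (e.g. PyTorch DistributedDataParallel) do the same across GPUs or cluster nodes.

```python
# Toy synchronous data parallelism: shard the batch, compute per-worker
# gradients, average them (the "all-reduce"), apply one shared update.
import numpy as np

rng = np.random.default_rng(42)
X, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
w = np.zeros(8)

def shard_gradient(X_s, y_s, w):
    """Mean-squared-error gradient computed on one worker's shard."""
    residual = X_s @ w - y_s
    return 2.0 * X_s.T @ residual / len(y_s)

n_workers, lr = 4, 0.05
for step in range(100):
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [shard_gradient(X_s, y_s, w) for X_s, y_s in shards]  # in parallel
    w -= lr * np.mean(grads, axis=0)  # average gradients, single update
```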

Monday Sep 18, 2017
Episode 21: Additional optimisation strategies for deep learning
In the last episode, How to master optimisation in deep learning, I explained some of the most challenging tasks of deep learning and some methodologies and algorithms to improve the speed of convergence of a minimisation method for deep learning. I explored the family of gradient descent methods - though not exhaustively - giving a list of approaches that deep learning researchers are considering for different scenarios. Every method has its own benefits and drawbacks, depending largely on the type of data and its sparsity. But there is one method that seems to be, at least empirically, the best approach so far.
Feel free to listen to the previous episode, share it, re-broadcast it, or just download it for your commute.
In this episode I would like to continue that conversation about some additional strategies for optimising gradient descent in deep learning, and introduce you to some tricks that might come in handy when your neural network stops learning from data, or when the learning process becomes so slow that it really seems to have reached a plateau even when feeding in fresh data.
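One of the simplest tricks of this kind is to drop the learning rate once the validation metric stops improving. Below is a hedged sketch using PyTorch's built-in ReduceLROnPlateau scheduler; the model and data are placeholders, and the hyperparameters are illustrative rather than recommended values.

```python
# Reduce the learning rate when the loss plateaus (toy model and data).
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5)

x, y = torch.randn(256, 16), torch.randn(256, 1)
for epoch in range(50):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # halve the lr after 5 epochs with no progress
```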

Monday Aug 28, 2017
Episode 20: How to master optimisation in deep learning
The secret behind deep learning is not really a secret: it is function optimisation. What a neural network essentially does is optimise a function. In this episode I illustrate a number of optimisation methods and explain which one is the best, and why.
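As a minimal sketch of what "optimising a function" means in practice, here is plain gradient descent on a toy one-dimensional quadratic loss; a neural network applies the same idea to a much higher-dimensional, non-convex loss surface. The function and learning rate are purely illustrative.

```python
# Plain gradient descent on a toy loss with its minimum at w = 3.
def loss(w):
    return (w - 3.0) ** 2      # squared error around the target value

def grad(w):
    return 2.0 * (w - 3.0)     # analytical derivative of the loss

w, lr = 0.0, 0.1
for step in range(50):
    w -= lr * grad(w)          # move against the gradient
print(round(w, 4), round(loss(w), 6))  # w approaches 3, loss approaches 0
```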

Wednesday Aug 09, 2017
Episode 19: How to completely change your data analytics strategy with deep learning
Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields like image recognition and speech processing. More recently, neural network models have started to be applied also to textual data in order to deal with natural language, there too with promising results. In this episode I explain why deep learning performs the way it does, and what some of the most tedious causes of failure are.

Monday Mar 14, 2016
Episode 10: History and applications of Deep Learning
What is deep learning? If you have no patience, deep learning is the result of training many layers of non-linear processing units for feature extraction and data transformation, e.g. from pixels to edges, to shapes, to object classification, to scene description, captioning, etc.
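To make "many layers of non-linear processing units" concrete, here is a hedged sketch of a tiny convolutional network: its early layers respond to edge-like patterns in the pixels, while later layers combine them into class scores. The architecture and sizes are illustrative, not taken from the episode.

```python
# A tiny stack of non-linear layers: pixels -> edge-like features -> parts -> classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # pixels -> edge-like features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # edges -> shapes / parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # parts -> object class scores
)

scores = model(torch.randn(1, 3, 32, 32))         # one 32x32 RGB image
print(scores.shape)                               # torch.Size([1, 10])
```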