Data Science at Home
Episodes

Thursday May 24, 2018
Founder Interview – Francesco Gadaleta of Fitchain
Cross-posting from Cryptoradio.io
Overview
Francesco Gadaleta introduces Fitchain, a decentralized machine learning platform that combines blockchain technology and AI to solve the data manipulation problem in restrictive environments such as healthcare or financial institutions.

Francesco Gadaleta is the founder of Fitchain.io and a senior advisor to Abe AI. Fitchain, a platform that officially started in October 2017, allows data scientists to write machine learning models on data they cannot see or access due to restrictions imposed in healthcare or financial environments. On the Fitchain platform there are two actors, the data owner and the data scientist. They both run the Fitchain POD, which orchestrates the relationship between the two sides. The idea behind Fitchain is summarized in the thesis “do not move the data, move the model – bring the model where the data is stored.”
The Fitchain team has also coined a new term called “proof of train” – a way to guarantee that the model is truly trained at the organization, and that it becomes traceable on the blockchain. To develop the complex technological aspects of the platform, Fitchain has partnered up with BigChainDB, the project we have recently featured on Crypto Radio.
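To make the “move the model, not the data” idea concrete, here is a purely illustrative Python sketch. Every function and field name below is hypothetical and does not reflect Fitchain’s actual API or protocol; the hash of the trained weights only loosely mimics what a “proof of train” anchored on a blockchain might look like.

```python
# Purely illustrative sketch of "do not move the data, move the model".
# All names here are hypothetical, not Fitchain's actual API.
import hashlib
import json

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def train_on_premises(model_spec: dict) -> dict:
    """Runs inside the data owner's environment; the data never leaves it."""
    # Stand-in for private data the data scientist is never allowed to see.
    X, y = make_classification(n_samples=1000, random_state=0)
    model = LogisticRegression(**model_spec["params"]).fit(X, y)
    weights = model.coef_.tolist()
    # Hash of the trained weights, loosely mimicking a "proof of train"
    # that could be anchored on a blockchain to make the training traceable.
    proof = hashlib.sha256(json.dumps(weights).encode()).hexdigest()
    return {"weights": weights, "proof_of_train": proof}


# The data scientist only submits a model specification and receives the
# trained parameters plus the proof, never the raw records.
result = train_on_premises({"params": {"max_iter": 1000}})
print(result["proof_of_train"])
```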
Roadmap
The Fitchain team is currently validating its assumptions and improving the security of the platform. In the next few months they will extend the portfolio of supported machine learning libraries and plan to move from a B2B product towards a Fitchain for consumers.
By June 2018 they plan to launch the Internet of PODs. They will also design the Fitchain token, FitCoin, a utility token for operating on the Fitchain platform.

Monday Apr 02, 2018
Episode 31: The End of Privacy
Data is a complex topic, related not only to machine learning algorithms but also, and especially, to the privacy and security of individuals: the same individuals who create such data just by using the many mobile apps and services that characterize their digital lives.
In this episode I am together with B.J. Mendelson, author of “Social Media is Bullshit” from St. Martin’s Press and a world-renowned speaker on the myths and realities of today’s Internet platforms. B.J. has a new book about privacy and sent me a free copy of "Privacy, and how to get it back", which I read in just one day. That was enough to realise how much we have in common when it comes to data and data collection.

Tuesday Nov 21, 2017
Episode 30: Neural networks and genetic evolution: an unfeasible approach
Despite what researchers claim about genetic evolution, in this episode we give a realistic view of the field.

Saturday Nov 11, 2017
Episode 29: Fail your AI company in 9 steps
In order to succeed with artificial intelligence, it is better to know how to fail first. It is easier than you think. Here are 9 easy steps to fail your AI startup.
Monday Oct 16, 2017
Episode 25: How to become data scientist [RB]
In this episode, I speak about the requirements and the skills needed to become a data scientist and join an amazing community that is changing the world with data analytics.

Monday Oct 09, 2017
Episode 24: How to handle imbalanced datasets
In machine learning, and in data science in general, it is very common to deal at some point with imbalanced datasets and class distributions. This is the typical case in which the number of observations belonging to one class is significantly lower than the number belonging to the other classes. It happens all the time, in many domains, from finance to healthcare to social media, just to name a few I have personally worked with. Think about a bank detecting fraudulent transactions among millions or billions of daily operations, or, equivalently, the identification of rare disorders in healthcare. In genetics, as well as with clinical lab tests, this is a normal scenario: fortunately there are very few patients affected by a disorder, and therefore very few positive cases with respect to the large pool of healthy (unaffected) patients. No algorithm takes the class distribution or the number of observations per class into account unless it is explicitly designed to handle such situations. In this episode I describe some effective techniques for handling imbalanced datasets and advise on matching the most appropriate method to the dataset or problem at hand.
In this episode I explain how to deal with such common and challenging scenarios.
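As a concrete starting point, here is a minimal scikit-learn sketch of two of the most common remedies: re-weighting the loss and randomly oversampling the minority class. The synthetic dataset and the choice of logistic regression are only for illustration and are not taken from the episode.

```python
# Minimal sketch: two common ways to handle class imbalance with scikit-learn,
# class weighting and random oversampling of the minority class.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Synthetic dataset where only ~5% of the samples belong to the positive class
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Option 1: re-weight the loss so that errors on the minority class cost more
clf_weighted = LogisticRegression(class_weight="balanced", max_iter=1000)
clf_weighted.fit(X_train, y_train)

# Option 2: randomly oversample the minority class before fitting
n_majority = int((y_train == 0).sum())
minority = X_train[y_train == 1]
oversampled = resample(minority, n_samples=n_majority, random_state=0)
X_bal = np.vstack([X_train[y_train == 0], oversampled])
y_bal = np.concatenate([np.zeros(n_majority), np.ones(len(oversampled))])
clf_oversampled = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)

# Compare per-class precision and recall on the held-out set
print(classification_report(y_test, clf_weighted.predict(X_test)))
print(classification_report(y_test, clf_oversampled.predict(X_test)))
```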

Tuesday Oct 03, 2017
Episode 23: Why do ensemble methods work?
Ensemble methods were designed to improve the performance of a single model when that model is not very accurate. According to the general definition, ensembling consists of building a number of individual classifiers and then combining or aggregating their predictions into one classifier that is usually stronger than any single one.
The key idea behind ensembling is that some models will do well when they model certain aspects of the data while others will do well in modelling other aspects. In this episode I show with a numeric example why and when ensemble methods work.
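As a small numeric illustration of the same intuition, under the simplifying assumptions of independent errors and equal accuracy, majority voting over several weak classifiers is more accurate than any single one of them:

```python
# Majority voting over n classifiers, each correct with probability p,
# assuming independent errors (a simplifying assumption).
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that more than half of n independent classifiers are correct."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n // 2 + 1, n + 1))

# Three classifiers at 70% accuracy already beat any single one of them...
print(majority_vote_accuracy(0.7, 3))   # ~0.784
# ...and the gap widens as the ensemble grows.
print(majority_vote_accuracy(0.7, 11))  # ~0.922
```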

Monday Sep 25, 2017
Episode 22: Parallelising and distributing Deep Learning
Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to consider, and that deserves a full episode of its own: parallelising and distributing deep learning on relatively large clusters.
As a matter of fact, computing architectures are changing in a way that is encouraging parallelism more than ever before. Deep learning is no exception, and despite the great improvements brought by commodity GPUs (graphical processing units), when it comes to speed there is still room for improvement.
Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the function optimizer is the horsepower of deep learning and neural networks in general. A slow and inaccurate optimisation method leads to networks that slowly converge to unreliable results.
In another episode, titled “Additional strategies for optimizing deep learning”, I explained some ways to improve function minimisation and model tuning in order to get better parameters in less time. So feel free to listen to these episodes again, share them with your friends, or even re-broadcast or download them for your commute.
While the methods that I have explained so far represent a good starting point for prototyping a network, when you need to switch to production environments or take advantage of the most recent and advanced hardware capabilities of your GPU, well... in all those cases, you would like to do something more.
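As a toy illustration of the data-parallel idea behind synchronous distributed training, the sketch below splits one batch across simulated workers, computes gradients locally and averages them before the parameter update. It is a conceptual example in PyTorch, not production code and not any specific framework’s distributed API.

```python
# Data-parallel sketch: each "worker" computes gradients on its own shard of
# the batch, then the gradients are averaged (what an all-reduce would do).
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()
X, y = torch.randn(64, 10), torch.randn(64, 1)

num_workers = 4
shards = list(zip(X.chunk(num_workers), y.chunk(num_workers)))

# Each worker computes gradients on its shard of the batch
grads = []
for xb, yb in shards:
    model.zero_grad()
    loss_fn(model(xb), yb).backward()
    grads.append([p.grad.clone() for p in model.parameters()])

# Average the gradients across workers and apply a single SGD-style update
with torch.no_grad():
    for i, p in enumerate(model.parameters()):
        p -= 0.1 * torch.stack([g[i] for g in grads]).mean(dim=0)
```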

Monday Aug 28, 2017
Episode 20: How to master optimisation in deep learning
The secret behind deep learning is not really a secret: it is function optimisation. What a neural network essentially does is optimise a function. In this episode I illustrate a number of optimisation methods and explain which one is the best and why.
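As a tiny, self-contained illustration (not taken from the episode), the sketch below minimises the same toy objective with plain SGD and with Adam, showing how the choice of optimiser and learning rate affects convergence:

```python
# Minimise a simple quadratic objective with two different optimisers.
import torch

def run(optimizer_cls, steps=200, **kwargs):
    torch.manual_seed(0)
    w = torch.zeros(2, requires_grad=True)
    target = torch.tensor([3.0, -2.0])
    opt = optimizer_cls([w], **kwargs)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((w - target) ** 2).sum()  # squared distance to the target
        loss.backward()
        opt.step()
    return loss.item()

print("SGD  final loss:", run(torch.optim.SGD, lr=0.01))
print("Adam final loss:", run(torch.optim.Adam, lr=0.1))
```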

Friday Dec 23, 2016
Episode 16: 2017 Predictions in Data Science
We strongly believe 2017 will be a very interesting year for data science and artificial intelligence. Let me tell you what I expect and why.