Data Science at Home
Episodes

Tuesday May 21, 2019
Episode 61: The 4 best use cases of entropy in machine learning
It all starts with physics: the entropy of an isolated system never decreases. Everyone, at some point in their life, learned this in a physics class. What does this have to do with machine learning? To find out, listen to the show.
References
Entropy in machine learning https://amethix.com/entropy-in-machine-learning/
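The post linked above goes into the details; as a quick, hedged refresher of the quantity everything builds on, here is a minimal NumPy sketch of Shannon entropy (function name and example distributions are mine, not taken from the post):

```python
import numpy as np

def shannon_entropy(probabilities):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]                          # 0 * log(0) is taken to be 0
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))        # a fair coin: 1 bit, the maximum for two outcomes
print(shannon_entropy([0.95, 0.05]))      # a heavily skewed class distribution: ~0.29 bits
```

This is the same quantity behind information gain in decision trees and the cross-entropy loss used to train classifiers.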

Thursday May 16, 2019
Episode 60: Predicting your mouse click (and a crash course in deep learning)
Deep learning is the future. Get a crash course on deep learning. Now! In this episode I speak to Oliver Zeigermann, author of Deep Learning Crash Course, published by Manning Publications at https://www.manning.com/livevideo/deep-learning-crash-course
Oliver (Twitter: @DJCordhose) is a veteran of neural networks and machine learning. In addition to the course, which takes you from prototype to production, he's working on a really cool project that predicts something people do every day... clicking their mouse.
If you use promo code poddatascienceathome19 you get a 40% discount on all products on the Manning platform.
Enjoy the show!
References:
Deep Learning Crash Course (Manning Publications)
https://www.manning.com/livevideo/deep-learning-crash-course?a_aid=djcordhose&a_bid=e8e77cbf
Companion notebooks for the code samples of the video course "Deep Learning Crash Course"
https://github.com/DJCordhose/deep-learning-crash-course-notebooks/blob/master/README.md
Next-button-to-click predictor source code
https://github.com/DJCordhose/ux-by-tfjs
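Oliver's predictor is written in TensorFlow.js and lives in the ux-by-tfjs repository above. Purely to illustrate the idea, here is a rough Keras sketch with invented shapes (the last 10 pointer positions, 3 candidate buttons) and random stand-in data; it is not the project's actual model:

```python
import numpy as np
import tensorflow as tf

SEQ_LEN, N_BUTTONS = 10, 3   # assumed: last 10 (x, y) pointer positions, 3 candidate buttons

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, 2)),
    tf.keras.layers.LSTM(16),                                  # summarize the pointer trajectory
    tf.keras.layers.Dense(N_BUTTONS, activation="softmax"),    # probability of each button
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random stand-in data; the real project records pointer traces directly in the browser.
X = np.random.rand(500, SEQ_LEN, 2).astype("float32")
y = np.random.randint(0, N_BUTTONS, size=500)
model.fit(X, y, epochs=3, verbose=0)

print(model.predict(X[:1]))   # predicted probability that each button is clicked next
```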

Tuesday May 07, 2019
Episode 59: How to fool a smart camera with deep learning
In this episode I met three crazy researchers from KULeuven (Belgium) who found a method to fool surveillance cameras and stay hidden just by holding a special t-shirt. We discussed the technique they used and some of the consequences of their findings.
They published their paper on Arxiv and made their source code available at https://gitlab.com/EAVISE/adversarial-yolo
Enjoy the show!
References
Fooling automated surveillance cameras: adversarial patches to attack person detection, Simen Thys, Wiebe Van Ranst, Toon Goedemé
EAVISE Research Group, KULeuven (Belgium): https://iiw.kuleuven.be/onderzoek/eavise
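Their attack optimizes a printed patch so that the detector's confidence in the "person" class collapses. The real code is in the GitLab repository above and targets YOLOv2; the PyTorch loop below is only a minimal sketch of the optimization idea, with a toy stand-in detector and random images:

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a person detector: a tiny frozen CNN emitting a "person" confidence.
detector = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, stride=2), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(8, 1),
)
for p in detector.parameters():
    p.requires_grad_(False)                            # the detector stays fixed; only the patch is trained

images = torch.rand(16, 3, 64, 64)                     # random stand-in for photos of people
patch = torch.rand(3, 20, 20, requires_grad=True)      # the printable patch being optimized
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    # Paste the patch at a fixed location (the paper also varies position and scale
    # and adds printability and smoothness terms).
    canvas = F.pad(patch.clamp(0, 1), (20, 24, 20, 24))            # 3 x 64 x 64
    mask = F.pad(torch.ones(3, 20, 20), (20, 24, 20, 24))
    patched = images * (1 - mask) + canvas * mask
    confidence = torch.sigmoid(detector(patched)).mean()           # average "person" score
    optimizer.zero_grad()
    confidence.backward()            # gradient descent on the patch to suppress detections
    optimizer.step()
```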

Tuesday Apr 30, 2019
Episode 58: There is physics in deep learning!
There is a connection between gradient-descent-based optimizers and the dynamics of damped harmonic oscillators. What does that mean? We now have a better theory for optimization algorithms. In this episode I explain how all this works.
All the formulas I mention in the episode can be found in the post The physics of optimization algorithms
Enjoy the show.
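As a hedged intuition of the connection (the post above has the actual derivation): gradient descent with momentum can be read as a discretization of a damped harmonic oscillator, m·x″ + c·x′ + ∇f(x) = 0, with the loss playing the role of the potential. The toy sketch below, with made-up parameter values, just shows the characteristic overshoot-and-settle behaviour on a one-dimensional quadratic loss:

```python
import numpy as np

# Heavy-ball (momentum) gradient descent on the quadratic loss f(x) = 0.5 * k * x^2.
# v acts as a velocity: the momentum coefficient controls the damping and the
# learning rate plays the role of a time step.
k, lr, momentum = 1.0, 0.1, 0.9
x, v = 5.0, 0.0                      # start away from the minimum at x = 0
trajectory = []
for step in range(100):
    grad = k * x                     # gradient of the quadratic loss (the restoring force)
    v = momentum * v - lr * grad     # friction plus restoring force
    x = x + v
    trajectory.append(x)

print(np.round(trajectory[:10], 3))  # x overshoots past 0 and oscillates before settling
```

With a smaller momentum coefficient the same loop behaves like an overdamped oscillator and creeps towards the minimum without oscillating.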

Tuesday Apr 23, 2019
Episode 57: Neural networks with infinite layers
How are differential equations related to neural networks? What are the benefits of re-thinking neural networks as differential equation engines? In this episode we explain all this and provide some material that is worth studying. Enjoy the show!
[Figure: residual block]
References
[1] K. He, et al., “Deep Residual Learning for Image Recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016
[2] S. Hochreiter, et al., “Long short-term memory”, Neural Computation 9(8), pages 1735-1780, 1997.
[3] Q. Liao, et al., "Bridging the gaps between residual learning, recurrent neural networks and visual cortex", arXiv preprint arXiv:1604.03640, 2016.
[4] Y. Lu, et al., “Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equation”, Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.
[5] T. Q. Chen, et al., "Neural Ordinary Differential Equations", Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.
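To make the bridge between residual blocks and differential equations concrete, here is a small NumPy sketch (toy weights, not taken from any of the papers above): a stack of residual blocks x ← x + f(x) is exactly forward-Euler integration of dx/dt = f(x), and adding layers while shrinking the step size refines the same trajectory, which is the limit Neural ODEs [5] take explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))      # a fixed toy "layer" f(x) = tanh(W x)

def f(x):
    return np.tanh(W @ x)

x0 = rng.normal(size=4)

# A deep residual stack: 10 blocks, i.e. 10 Euler steps of size 1 of dx/dt = f(x).
x = x0.copy()
for _ in range(10):
    x = x + 1.0 * f(x)

# The same trajectory integrated with 100 smaller steps: more "layers", finer solver.
y = x0.copy()
for _ in range(100):
    y = y + 0.1 * f(y)

print(x, y, sep="\n")                       # both approximate the ODE state at t = 10
```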

Tuesday Apr 16, 2019
Episode 56: The graph network
Since the beginning of AI in the 1950s and until the 1980s, symbolic AI approaches dominated the field. These approaches, also known as expert systems, used mathematical symbols to represent objects and the relationships between them, in order to encode the extensive knowledge bases built by humans. The opposite of the symbolic paradigm is connectionism, which is behind the machine learning approaches of today.

Tuesday Apr 09, 2019
Episode 55: Beyond deep learning
The successes that deep learning systems have achieved in the last decade in all kinds of domains are unquestionable. Self-driving cars, skin cancer diagnostics, movie and song recommendations, language translation, automatic video surveillance and digital assistants are just a few examples of the ongoing revolution that is affecting, or will soon disrupt, our everyday life. But all that glitters is not gold… Read the full post on the Amethix Technologies blog.

Wednesday Jan 23, 2019
Episode 53: Estimating uncertainty with neural networks
Have you ever wanted an estimate of the uncertainty of your neural network? Bayesian modelling clearly provides a solid framework to estimate uncertainty by design. However, there are many realistic cases in which Bayesian sampling is not really an option, and ensemble models can play a role instead.
In this episode I describe a simple yet effective way to estimate uncertainty, without changing your neural network's architecture or your machine learning pipeline at all.
The post with mathematical background and sample source code is published here.
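The linked post describes the actual method; as one common way to read "ensembles without touching the architecture or the pipeline", here is a generic deep-ensemble-style sketch (toy data, scikit-learn models standing in for a deep network) where the spread across independently trained copies serves as the uncertainty estimate:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train the same model several times with different random seeds.
rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

members = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed).fit(X, y)
           for seed in range(5)]

# Points outside the training range should come back with larger disagreement.
X_test = np.linspace(-5, 5, 11).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in members])   # shape: (n_members, n_points)
mean, std = preds.mean(axis=0), preds.std(axis=0)        # std is the uncertainty estimate

for x_val, m_val, s_val in zip(X_test.ravel(), mean, std):
    print(f"x = {x_val:+.1f}  prediction = {m_val:+.3f}  uncertainty = {s_val:.3f}")
```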
Thursday Jan 17, 2019
Episode 52: why do machine learning models fail? [RB]
The success of a machine learning model depends on several factors and events. True generalization to data the model has never seen before is more a chimera than a reality. But under specific conditions a well-trained machine learning model can generalize well and reach a testing accuracy close to the one measured during training.
In this episode I explain when and why machine learning models fail to generalize from the training to the testing dataset.
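As a self-contained illustration of that train-to-test gap (not taken from the episode), the sketch below fits an unconstrained decision tree and a shallow one on noisy data: the former memorizes the training set, and its testing accuracy falls well short of its training accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic classification data (flip_y adds label noise worth memorizing badly).
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for depth in (None, 3):                                  # unconstrained tree vs. a shallow one
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={tree.score(X_tr, y_tr):.2f}  test={tree.score(X_te, y_te):.2f}")
```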

Wednesday Dec 19, 2018
Episode 49: The promises of Artificial Intelligence
It's always good to put the findings in AI into perspective, in order to clear up some of the most common misunderstandings and overblown promises. In this episode I list some of the most misleading statements about what artificial intelligence can achieve in the near future.