Data Science at Home
Episodes

Tuesday Jun 04, 2019
Episode 63: Financial time series and machine learning
In this episode I speak to Alexandr Honchar, data scientist and owner of the blog https://medium.com/@alexrachnog. Alexandr has written very interesting posts about time series analysis for financial data, and his blog is on my personal list of best tutorial blogs. We discuss financial time series and machine learning: what makes predicting the price of stocks such a challenging task, and why machine learning might not be enough (see the short sketch after the links below). As usual, I ask Alexandr how he sees machine learning in the next 10 years. His answer - in my opinion quite futuristic - makes perfect sense.
You can contact Alexandr on
Twitter https://twitter.com/AlexRachnog
Facebook https://www.facebook.com/rachnog
Medium https://medium.com/@alexrachnog
Enjoy the show!
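As a taste of why price prediction is so deceptive, here is a minimal sketch on synthetic data (mine, not code from the episode): a naive random-walk baseline that simply repeats yesterday's price scores a flatteringly low error on raw prices, which is why serious work evaluates on returns instead.

```python
import numpy as np

# Synthetic daily log-returns and the price path they generate.
rng = np.random.default_rng(42)
returns = rng.normal(0, 0.01, size=1000)
prices = 100 * np.exp(np.cumsum(returns))

# Naive random-walk baseline: tomorrow's price = today's price.
naive_pred = prices[:-1]
mae_prices = np.mean(np.abs(prices[1:] - naive_pred))

# The same baseline on returns predicts zero, and its error is just the
# volatility itself -- nothing has actually been explained.
mae_returns = np.mean(np.abs(returns[1:]))

print(f"MAE on prices (naive baseline): {mae_prices:.4f}")
print(f"MAE on returns (naive baseline): {mae_returns:.4f}")
```

Any model that only beats the price-level number, and not the returns number, has learned little more than persistence.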

Tuesday May 28, 2019
Episode 62: AI and the future of banking with Chris Skinner
In this episode I have a wonderful conversation with Chris Skinner.
Chris and I recently got in touch at The Banking Scene 2019, a fintech conference held in Brussels. During that conference he talked like a real troublemaker - that’s how he defines himself - saying that “People are not educated with loans, credit, money” and that “Banks are failing at digital”.
After I got my hands on his latest book, Digital Human, I invited him to the show to ask him a few questions about innovation, regulation and technology in finance.

Tuesday May 21, 2019
Episode 61: The 4 best use cases of entropy in machine learning
It all starts from physics. The entropy of an isolated system never decreases… Everyone, at some point in school, learned this in physics class. What does this have to do with machine learning? To find out, listen to the show - and see the small sketch after the references.
References
Entropy in machine learning https://amethix.com/entropy-in-machine-learning/
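For a concrete flavor, here is a minimal sketch (mine, not from the post) of the quantity at the center of it all: Shannon entropy, and the cross-entropy loss that trains most classifiers.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum(p * log2(p)), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # by convention, 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum(p * log2(q)): the usual classification loss."""
    p = np.asarray(p, dtype=float)
    q = np.clip(np.asarray(q, dtype=float), eps, 1.0)
    return -np.sum(p * np.log2(q))

# A fair coin carries 1 bit of uncertainty; a certain outcome carries 0.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([1.0, 0.0]))  # 0.0

# Cross-entropy punishes confident wrong answers far more than hedged ones.
true = [1.0, 0.0]
print(cross_entropy(true, [0.9, 0.1]))  # ~0.15
print(cross_entropy(true, [0.1, 0.9]))  # ~3.32
```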

Thursday May 16, 2019
Episode 60: Predicting your mouse click (and a crash course in deeplearning)
Deep learning is the future. Get a crash course on deep learning. Now! In this episode I speak to Oliver Zeigermann, author of Deep Learning Crash Course, published by Manning Publications at https://www.manning.com/livevideo/deep-learning-crash-course
Oliver (Twitter: @DJCordhose) is a veteran of neural networks and machine learning. In addition to the course - that teaches you concepts from prototype to production - he's working on a really cool project that predicts something people do every day... clicking their mouse.
If you use the promo code poddatascienceathome19 you get a 40% discount on all products on the Manning platform.
Enjoy the show!
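To make the idea of click prediction concrete, here is a hypothetical sketch - the real project, ux-by-tfjs, runs in the browser with TensorFlow.js, so everything below (button layout, features, model choice) is an assumption for illustration: a plain classifier guessing which of three buttons a user will click next from cursor position and velocity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed button layout (normalized screen coordinates) -- illustrative only.
button_xy = np.array([[0.2, 0.8], [0.5, 0.8], [0.8, 0.8]])

# Simulated training data: cursors tend to move toward their target button.
rng = np.random.default_rng(0)
n = 600
targets = rng.integers(0, 3, size=n)
pos = rng.uniform(0, 1, size=(n, 2))
vel = button_xy[targets] - pos + rng.normal(0, 0.05, size=(n, 2))

# Features: current position plus velocity; label: the button clicked.
X = np.hstack([pos, vel])
model = LogisticRegression(max_iter=1000).fit(X, targets)

# A cursor at the bottom center, moving straight up: the middle button wins.
probe = np.array([[0.5, 0.2, 0.0, 0.6]])
print(model.predict_proba(probe).round(2))
```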
References:
Deep Learning Crash Course (Manning Publications)
https://www.manning.com/livevideo/deep-learning-crash-course?a_aid=djcordhose&a_bid=e8e77cbf
Companion notebooks for the code samples of the video course "Deep Learning Crash Course"
https://github.com/DJCordhose/deep-learning-crash-course-notebooks/blob/master/README.md
Next-button-to-click predictor source code
https://github.com/DJCordhose/ux-by-tfjs

Tuesday May 07, 2019
Episode 59: How to fool a smart camera with deep learning
In this episode I met three crazy researchers from KULeuven (Belgium) who found a method to fool surveillance cameras and stay hidden just by holding a special t-shirt. We discussed the technique they used and some consequences of their findings - a toy sketch of the idea follows the references.
They published their paper on Arxiv and made their source code available at https://gitlab.com/EAVISE/adversarial-yolo
Enjoy the show!
References
Fooling automated surveillance cameras: adversarial patches to attack person detection - Simen Thys, Wiebe Van Ranst, Toon Goedemé
EAVISE Research Group, KULeuven (Belgium): https://iiw.kuleuven.be/onderzoek/eavise
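For intuition only, here is a toy sketch of the patch-optimization idea - not the authors' YOLO pipeline (their actual code is in the gitlab repository above). A frozen stand-in “detector” is attacked by optimizing a small patch pasted into the image so that the “person” score drops.

```python
import torch

# Stand-in for a trained person detector: a frozen, untrained convnet
# producing a single "person" logit. The real attack targets YOLOv2.
torch.manual_seed(0)
detector = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 5), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(8, 1),
)
for p in detector.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 64, 64)  # stand-in for a camera frame
patch = torch.rand(1, 3, 16, 16, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(100):
    attacked = image.clone()
    attacked[:, :, 24:40, 24:40] = patch.clamp(0, 1)  # paste the patch
    loss = detector(attacked).mean()  # push the person score down
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final person logit:", detector(attacked).item())
```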

Tuesday Apr 30, 2019
Episode 58: There is physics in deep learning!
There is a connection between gradient descent based optimizers and the dynamics of damped harmonic oscillators. What does that mean? We now have a better theory for optimization algorithms. In this episode I explain how all this works.
All the formulas I mention in the episode can be found in the post “The physics of optimization algorithms”.
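As a quick illustration (my toy numbers, not the formulas from the post): heavy-ball gradient descent on a quadratic loss f(x) = k x²/2 behaves exactly like a discretized damped harmonic oscillator, overshooting and ringing around the minimum before settling.

```python
import numpy as np

# Heavy-ball (momentum) gradient descent on f(x) = k * x**2 / 2.
# The momentum term plays the role of inertia; the update mirrors a
# discretization of m x'' + c x' + k x = 0.
k = 1.0                  # curvature of the loss ("spring constant")
alpha, beta = 0.1, 0.9   # learning rate, momentum

x, x_prev = 5.0, 5.0     # start away from the minimum at x = 0
trajectory = []
for _ in range(60):
    grad = k * x
    x, x_prev = x + beta * (x - x_prev) - alpha * grad, x
    trajectory.append(x)

# The iterates oscillate around 0 with shrinking amplitude -- the ringing
# of an underdamped oscillator -- before converging.
print(np.round(trajectory[:10], 3))
print("final:", round(trajectory[-1], 6))
```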
Enjoy the show.

Tuesday Apr 23, 2019
Episode 57: Neural networks with infinite layers
How are differential equations related to neural networks? What are the benefits of rethinking a neural network as a differential equation engine? In this episode we explain all this, and we provide some material that is worth studying. Enjoy the show!
Figure: the residual block.
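The link in one line, sketched below under toy assumptions (random weights standing in for a trained layer): a residual block computes x_{t+1} = x_t + f(x_t), which is one forward-Euler step of the ODE dx/dt = f(x) with step size 1. Shrinking the step while stacking more layers approaches the “infinite-layer” network that an ODE solver integrates directly.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(4, 4))

def f(x):
    return np.tanh(W @ x)  # toy stand-in for a learned residual function

x0 = rng.normal(size=4)

# 10 residual blocks == 10 forward-Euler steps with step size h = 1.
h, x_res = 1.0, x0.copy()
for _ in range(10):
    x_res = x_res + h * f(x_res)

# 100 "layers" with h = 0.1 integrate the same dynamics over the same span.
h, x_ode = 0.1, x0.copy()
for _ in range(100):
    x_ode = x_ode + h * f(x_ode)

print(np.round(x_res, 3))
print(np.round(x_ode, 3))  # close to x_res: same ODE, finer time steps
```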
References
[1] K. He, et al., “Deep Residual Learning for Image Recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[2] S. Hochreiter, et al., “Long short-term memory”, Neural Computation 9(8), pages 1735-1780, 1997.
[3] Q. Liao, et al., “Bridging the gaps between residual learning, recurrent neural networks and visual cortex”, arXiv preprint arXiv:1604.03640, 2016.
[4] Y. Lu, et al., “Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations”, Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.
[5] T. Q. Chen, et al., “Neural Ordinary Differential Equations”, Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.

Tuesday Apr 16, 2019
Episode 56: The graph network
From the beginning of AI in the 1950s until the 1980s, symbolic AI approaches dominated the field. These approaches, also known as expert systems, used mathematical symbols to represent objects and the relationships between them, in order to depict the extensive knowledge bases built by humans. The opposite of the symbolic AI paradigm is connectionism, which is behind the machine learning approaches of today.

Tuesday Apr 09, 2019
Episode 55: Beyond deep learning
The successes that deep learning systems have achieved in the last decade in all kinds of domains are unquestionable. Self-driving cars, skin cancer diagnostics, movie and song recommendations, language translation, automatic video surveillance, and digital assistants are just a few examples of the ongoing revolution that affects, or will soon disrupt, our everyday life. But all that glitters is not gold… Read the full post on the Amethix Technologies blog.

Saturday Mar 09, 2019
Episode 54: Reproducible machine learning
In this episode I speak about how important reproducible machine learning pipelines are. When you collaborate with diverse teams, tasks are distributed among different individuals, and everyone has good reasons to change parts of your pipeline - leading to confusion and a number of variants that quickly explodes. In all those cases, tracking data and code is extremely helpful for building models that are reproducible anytime, anywhere. Listen to the podcast and learn how.
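As a minimal sketch of what “tracking data and code” can mean in practice (illustrative file names and settings, not a prescription from the episode): pin every source of randomness and fingerprint both the configuration and the data, so any model can be traced back to exactly what produced it.

```python
import hashlib
import json
import random

import numpy as np

# Pin every source of randomness so reruns are bit-for-bit comparable.
SEED = 42
random.seed(SEED)
np.random.seed(SEED)

# Record the full configuration alongside the run.
config = {"model": "gbm", "learning_rate": 0.1, "n_estimators": 200}

def fingerprint(path: str) -> str:
    """SHA-256 of a data file, so the exact training set is on record."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

run_record = {
    "seed": SEED,
    "config": config,
    # "data_sha256": fingerprint("data/train.csv"),  # illustrative path
}
print(json.dumps(run_record, indent=2))
```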