Data Science at Home
Episodes

Tuesday Oct 15, 2019
What is wrong with reinforcement learning? (Ep. 82)
Join the discussion on our Discord server
Reinforcement learning agents have done great things: playing Atari video games, mastering Go with AlphaGo, trading financial instruments, and modeling language. Let me tell you the real story here. In this episode I shine some light on reinforcement learning (RL) and the limitations that every practitioner should consider before taking certain directions. RL seems to work so well! What is wrong with it?
Are you a listener of Data Science at Home podcast? A reader of the Amethix Blog? Or did you subscribe to the Artificial Intelligence at your fingertips newsletter? In any case let’s stay in touch! https://amethix.com/survey/
References
Emergence of Locomotion Behaviours in Rich Environments https://arxiv.org/abs/1707.02286
Rainbow: Combining Improvements in Deep Reinforcement Learning https://arxiv.org/abs/1710.02298
AlphaGo Zero: Starting from scratch https://deepmind.com/blog/article/alphago-zero-starting-scratch

Thursday Oct 10, 2019
In this episode I have an amazing conversation with Jimmy Soni and Rob Goodman, authors of “A Mind at Play”, a book entirely dedicated to the life and achievements of Claude Shannon. Claude Shannon needs no introduction, but for those who need a refresher: Shannon is the inventor of the information age.
Have you heard of binary code, entropy in information theory, data compression theory (the stuff behind mp3, mpg, zip, etc.), error correcting codes (the stuff that makes your RAM work well), n-grams, block ciphers, the beta distribution, the uncertainty coefficient?
All of that was invented by Claude Shannon :)
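To make the entropy idea concrete, here is a minimal Python sketch (not from the episode) that computes the Shannon entropy of a string from its symbol frequencies:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy H(X) = -sum p(x) * log2 p(x), estimated
    from the relative frequency of each symbol in the message."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy("aaaa"))  # completely predictable: zero bits per symbol
print(shannon_entropy("abab"))  # → 1.0 (two equally likely symbols = one bit)
```

The same quantity is what sets the theoretical limit for lossless compression of a source, which is why it sits behind zip, mp3, and friends.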
Articles:
https://medium.com/the-mission/10-000-hours-with-claude-shannon-12-lessons-on-life-and-learning-from-a-genius-e8b9297bee8f
https://medium.com/the-mission/on-claude-shannons-103rd-birthday-here-are-103-memorable-claude-shannon-quotes-maxims-and-843de4c716cf
http://nautil.us/issue/51/limits/how-information-got-re_invented
http://nautil.us/issue/50/emergence/claude-shannon-the-las-vegas-cheat
Claude's papers:
https://medium.com/the-mission/a-genius-explains-how-to-be-creative-claude-shannons-long-lost-1952-speech-fbbcb2ebe07f
http://www.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
A mind at play (book links):
http://amzn.to/2pasLMz -- Hardcover
https://amzn.to/2oCfVL0 -- Audio
Thursday Sep 26, 2019
[RB] How to scale AI in your organisation (Ep. 79)
Scaling technology and scaling business processes are not the same thing. Since the beginning of enterprise technology, scaling software inside large organisations has been difficult to get right. When it comes to Artificial Intelligence and Machine Learning, it becomes vastly more complicated.
In this episode I propose a framework - in five pillars - for the business side of artificial intelligence.
Tuesday Sep 17, 2019
Training neural networks faster without GPU [RB] (Ep. 77)
Training neural networks faster usually means throwing more powerful GPUs at the problem. In this episode I explain an interesting method from a group of researchers at Google Brain who speed up training by reusing data that is already in the pipeline, so that the accelerator is never left waiting for input.
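The trick, called data echoing in the referenced paper, can be sketched in a few lines of plain Python. This is a toy illustration, not the authors' implementation; the real method inserts echoing at different stages of a tf.data input pipeline and typically combines it with shuffling:

```python
def data_echoing(batches, echo_factor=2):
    """Yield each upstream batch `echo_factor` times, so the fast
    training step can reuse data while the slow input pipeline
    (disk reads, decoding, augmentation) catches up."""
    for batch in batches:
        for _ in range(echo_factor):
            yield batch

upstream = iter([[1, 2], [3, 4]])  # stands in for a slow input pipeline
echoed = list(data_echoing(upstream, echo_factor=2))
print(echoed)  # [[1, 2], [1, 2], [3, 4], [3, 4]]
```

Each repeated batch contributes slightly less than a fresh one, but as long as the extra steps are cheaper than the stalled pipeline, the wall-clock training time goes down.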
Enjoy the show!
References
Faster Neural Network Training with Data Echoing https://arxiv.org/abs/1907.05550
Thursday Aug 29, 2019
[RB] Complex video analysis made easy with Videoflow (Ep. 75)
In this episode I am with Jadiel de Armas, senior software engineer at Disney and author of Videoflow, a Python framework that facilitates the quick development of complex video analysis applications and other stream-processing applications in a multiprocessing environment.
I have inspected the Videoflow repo on GitHub and some of the capabilities of this framework, and I must say it’s really interesting. Jadiel is going to tell us a lot more than what you can read on GitHub.
References
Videoflow official GitHub repository https://github.com/videoflow/videoflow

Tuesday Jul 23, 2019
Validate neural networks without data with Dr. Charles Martin (Ep. 70)
In this episode, I am with Dr. Charles Martin from Calculation Consulting, a machine learning and data science consulting company based in San Francisco. We speak about the nuts and bolts of deep neural networks and some impressive findings about the way they work.
The questions that Charles answers in the show are essentially two:
Why is regularisation in deep learning seemingly quite different than regularisation in other areas of ML?
How can we understand DNNs in a theoretically principled way?
References
The WeightWatcher tool for predicting the accuracy of Deep Neural Networks https://github.com/CalculatedContent/WeightWatcher
Slack channel https://weightwatcherai.slack.com/
Dr. Charles Martin Blog http://calculatedcontent.com and channel https://www.youtube.com/c/calculationconsulting
Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning - Charles H. Martin, Michael W. Mahoney

Tuesday Jul 16, 2019
Complex video analysis made easy with Videoflow (Ep. 69)
In this episode I am with Jadiel de Armas, senior software engineer at Disney and author of Videoflow, a Python framework that facilitates the quick development of complex video analysis applications and other stream-processing applications in a multiprocessing environment.
I have inspected the Videoflow repo on GitHub and some of the capabilities of this framework, and I must say it’s really interesting. Jadiel is going to tell us a lot more than what you can read on GitHub.
References
Videoflow official GitHub repository https://github.com/videoflow/videoflow

Tuesday Jul 02, 2019
Episode 67: Classic Computer Science Problems in Python
Today I am with David Kopec, author of Classic Computer Science Problems in Python, published by Manning Publications.
His book deepens your knowledge of problem-solving techniques from the realm of computer science by challenging you with interesting and realistic scenarios, exercises, and of course algorithms. There are examples in the major topics any data scientist should be familiar with, such as search, clustering, graphs, and much more.
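As a taste of the kind of classic problem the book works through, here is binary search in Python (a generic sketch, not code taken from the book):

```python
def binary_search(items, key):
    """Classic binary search over a sorted sequence: O(log n) comparisons.

    Returns the index of `key` in `items`, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == key:
            return mid
        if items[mid] < key:
            lo = mid + 1  # key can only be in the upper half
        else:
            hi = mid - 1  # key can only be in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # → 4
```

The payoff of exercises like this is less the algorithm itself than the habit of reasoning about invariants and complexity, which carries over directly to data science work.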
Get the book from https://www.manning.com/books/classic-computer-science-problems-in-python and use coupon code poddatascienceathome19 for a 40% discount.
References
Twitter https://twitter.com/davekopec
GitHub https://github.com/davecom
classicproblems.com