About this Show
Data Science at Home is a podcast about machine learning, artificial intelligence and algorithms.
The show is hosted by Dr. Francesco Gadaleta, featuring solo episodes and interviews with some of the most influential figures in the field.
Technology, AI, machine learning and algorithms. Come join the discussion on Discord! https://discord.gg/4UNKGf3
Tuesday May 21, 2019
It all starts with physics. The entropy of an isolated system never decreases… everyone learns this at some point in a physics class. What does it have to do with machine learning? To find out, listen to the show (and see the short sketch after the references below).
References
Entropy in machine learning https://amethix.com/entropy-in-machine-learning/
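As a quick companion example (my own sketch, not material from the episode), Shannon entropy is the quantity that carries the thermodynamic intuition over into machine learning, where it underlies loss functions such as cross-entropy:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p * log2(p)), in bits.
    Zero-probability entries contribute nothing to the sum."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # by convention, 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

# A uniform distribution maximizes entropy (maximum disorder)...
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
# ...while a peaked, more "ordered" one carries less uncertainty.
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
```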
Thursday May 16, 2019
Deep learning is the future. Get a crash course on deep learning. Now! In this episode I speak to Oliver Zeigermann, author of Deep Learning Crash Course published by Manning Publications at https://www.manning.com/livevideo/deep-learning-crash-course
Oliver (Twitter: @DJCordhose) is a veteran of neural networks and machine learning. In addition to the course - that teaches you concepts from prototype to production - he's working on a really cool project that predicts something people do every day... clicking their mouse.
If you use promo code poddatascienceathome19, you get a 40% discount on all products on the Manning platform.
Enjoy the show!
References:
Deep Learning Crash Course (Manning Publications)
https://www.manning.com/livevideo/deep-learning-crash-course?a_aid=djcordhose&a_bid=e8e77cbf
Companion notebooks for the code samples of the video course "Deep Learning Crash Course"
https://github.com/DJCordhose/deep-learning-crash-course-notebooks/blob/master/README.md
Next-button-to-click predictor source code
https://github.com/DJCordhose/ux-by-tfjs
Tuesday May 07, 2019
In this episode I met three crazy researchers from KULeuven (Belgium) who found a method to fool surveillance cameras and stay hidden just by holding a special t-shirt. We discussed the technique they used and some of the consequences of their findings.
They published their paper on arXiv and made their source code available at https://gitlab.com/EAVISE/adversarial-yolo
Enjoy the show!
References
S. Thys, W. Van Ranst, T. Goedemé, “Fooling automated surveillance cameras: adversarial patches to attack person detection”, arXiv preprint, 2019.
EAVISE Research Group, KULeuven (Belgium): https://iiw.kuleuven.be/onderzoek/eavise
Tuesday Apr 30, 2019
There is a connection between gradient descent based optimizers and the dynamics of damped harmonic oscillators. What does that mean? It means we now have a better theory for optimization algorithms. In this episode I explain how all this works.
All the formulas I mention in the episode can be found in the post “The physics of optimization algorithms”.
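As a hedged illustration of the connection (a minimal sketch of my own, not the derivation from the post): gradient descent with momentum on a quadratic loss behaves like a discretized damped harmonic oscillator, with the momentum coefficient playing the role of damping and the learning rate setting the time step.

```python
# Heavy-ball gradient descent on the quadratic loss f(x) = 0.5 * k * x**2.
# Its gradient k*x is the restoring force of a harmonic oscillator with
# stiffness k; the momentum term damps the velocity at every step.
k = 2.0
grad = lambda x: k * x

x, v = 5.0, 0.0          # initial position and velocity
lr, momentum = 0.1, 0.9  # learning rate ~ time step, momentum ~ (1 - damping)

for _ in range(200):
    v = momentum * v - lr * grad(x)  # damped velocity update
    x = x + v                        # position update

print(x)  # the iterate oscillates and decays toward the minimum at 0
```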
Enjoy the show.
Tuesday Apr 23, 2019
How are differential equations related to neural networks? What are the benefits of re-thinking neural networks as differential equation engines? In this episode we explain all this, and we provide some material that is worth studying. Enjoy the show!
Figure: a residual block.
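To make the connection concrete, here is a minimal sketch (mine, not code from the episode) showing that a residual block, x + f(x), is exactly one explicit Euler step of the ODE dx/dt = f(x) with unit step size:

```python
import numpy as np

def f(x):
    """The residual branch; any learned transformation would do here."""
    return 0.1 * np.tanh(x)

def residual_block(x):
    """ResNet update: x_{t+1} = x_t + f(x_t)."""
    return x + f(x)

def euler_step(x, f, dt=1.0):
    """Explicit Euler step for dx/dt = f(x)."""
    return x + dt * f(x)

x = np.array([1.0, -2.0])
# With dt = 1 the two updates coincide: a deep residual network acts as a
# coarse numerical ODE solver, and Neural ODEs take the limit dt -> 0.
print(residual_block(x), euler_step(x, f))
```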
References
[1] K. He, et al., “Deep Residual Learning for Image Recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016
[2] S. Hochreiter, et al., “Long short-term memory”, Neural Computation 9(8), pages 1735-1780, 1997.
[3] Q. Liao, et al.,”Bridging the gaps between residual learning, recurrent neural networks and visual cortex”, arXiv preprint, arXiv:1604.03640, 2016.
[4] Y. Lu, et al., “Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations”, Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.
[5] T. Q. Chen, et al., “Neural Ordinary Differential Equations”, Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.
Tuesday Apr 16, 2019
Since the beginning of AI in the 1950s and until the 1980s, symbolic AI approaches dominated the field. These approaches, also known as expert systems, used mathematical symbols to represent objects and the relationships between them, in order to encode the extensive knowledge bases built by humans. The opposing paradigm, known as connectionism, is the one behind the machine learning approaches of today.
Tuesday Apr 09, 2019
The successes that deep learning systems have achieved in the last decade in all kinds of domains are unquestionable. Self-driving cars, skin cancer diagnostics, movie and song recommendations, language translation, automatic video surveillance, and digital assistants are just a few examples of the ongoing revolution that affects, or is soon going to disrupt, our everyday life. But all that glitters is not gold… Read the full post on the Amethix Technologies blog.
Saturday Mar 09, 2019
In this episode I speak about how important reproducible machine learning pipelines are. When you are collaborating with diverse teams, tasks are distributed among different individuals. Everyone has good reasons to change parts of your pipeline, which leads to confusion and a number of variants that quickly explodes. In all those cases, tracking data and code is extremely helpful for building models that are reproducible anytime, anywhere. Listen to the podcast and learn how.
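As a small illustration of the idea (a sketch under my own assumptions, not the episode's tooling; dedicated tools such as DVC or MLflow do this properly), reproducibility starts with pinning randomness and fingerprinting the exact data and code a model was trained on. The file names below are hypothetical:

```python
import hashlib
import json
import random

import numpy as np

def fingerprint(path):
    """SHA-256 of a file, so any change to data or code is detectable."""
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

SEED = 42
random.seed(SEED)
np.random.seed(SEED)

# Log everything needed to re-run this training exactly.
run_manifest = {
    "seed": SEED,
    "data_sha256": fingerprint("train.csv"),    # hypothetical dataset file
    "code_sha256": fingerprint("pipeline.py"),  # hypothetical pipeline file
}
print(json.dumps(run_manifest, indent=2))
```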
Wednesday Jan 23, 2019
Have you ever wanted to get an estimate of the uncertainty of your neural network? Clearly Bayesian modelling provides a solid framework to estimate uncertainty by design. However, there are many realistic cases in which Bayesian sampling is not really an option and ensemble models can play a role.
In this episode I describe a simple yet effective way to estimate uncertainty, without changing your neural network’s architecture or your machine learning pipeline at all.
The post with mathematical background and sample source code is published here.
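The post above has the exact method; as a hedged sketch of the general ensemble idea, you can train several models on the same task and read the spread of their predictions as an uncertainty estimate. A random forest stands in here for an ensemble of neural networks:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

# Each tree is one ensemble member; disagreement between members
# approximates predictive uncertainty.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
per_member = np.stack([tree.predict(X) for tree in model.estimators_])

mean = per_member.mean(axis=0)  # point prediction
std = per_member.std(axis=0)    # uncertainty estimate
print(mean[:3], std[:3])
```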
Thursday Jan 17, 2019
The success of a machine learning model depends on several factors and events. True generalization to data that the model has never seen before is more a chimera than a reality. But under specific conditions a well-trained machine learning model can generalize well, performing on test data with accuracy similar to what it achieved during training.
In this episode I explain when and why machine learning models fail to carry their training performance over to testing data.
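As a quick illustration (a minimal sketch of my own, not from the episode), the gap between training and testing accuracy is easy to observe by over-parameterizing a model on a small, noisy dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)  # noisy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# An unconstrained tree memorizes the training set...
deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# ...while a regularized one trades training accuracy for a smaller
# train/test gap.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

for name, m in [("unconstrained", deep), ("max_depth=3", shallow)]:
    print(name,
          "train acc:", round(m.score(X_tr, y_tr), 2),
          "test acc:", round(m.score(X_te, y_te), 2))
```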
Data Science at Home is among the top-10 data science podcasts on Apple Podcasts, Spotify, Stitcher, Podbean and many other aggregators.
We reach our audience on a weekly basis via 30-minute episodes enriched with blog posts and show notes. Our episodes reach a highly targeted, globally distributed audience across a wide range of demographics.
Data Science at Home currently accepts at most two advertising slots per episode. The scheduled episode for your advertising campaign will be defined by our team, depending on the topic and the current advertising queue.
Our team is available to give you recommendations about your application and to discuss rates. Please send a direct email to media@amethix.com to make first contact. After connecting, we will share the best available date for you to proceed with the onboarding.
We promote services and products related to IT, Internet services, Research, Data Science, Machine learning, Fintech and Banking, Healthcare, Energy, and more.
Contact us and let’s talk about how we can help get your message to the audience of Data Science at Home podcast.