Data Science at Home
Episodes
Tuesday May 16, 2023
Hold on to your calculators and buckle up for a wild mathematical ride in this episode! Brace yourself as we dive into the fascinating realm of Liquid Time-Constant Networks (LTCs), where mathematical content reaches new heights of excitement.
In this mind-bending adventure, we demystify the intricacies of LTCs, breaking down everything from complex equations to mind-boggling mathematical concepts into digestible explanations.
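For listeners who want to see the math in code: below is a minimal, hand-rolled sketch of the core idea, a single neuron whose state follows dx/dt = -[1/τ + f]·x + f·A, so the effective time constant varies with the input (hence "liquid"). In the full model f is a bounded nonlinearity of both state and input; here it gates on the input alone, and all parameter values are illustrative assumptions, not the exact formulation from the paper.

```python
import numpy as np

def ltc_euler_step(x, I, dt=0.01, tau=1.0, A=1.0, w=0.5, b=0.0):
    """One forward-Euler step of a simplified (scalar) liquid time-constant neuron.

    dx/dt = -[1/tau + f] * x + f * A

    The gate f modulates the decay rate, so the effective time constant
    1 / (1/tau + f) changes with the input. In the paper f also depends
    on the state x; this sketch gates on the input alone for simplicity.
    """
    f = 1.0 / (1.0 + np.exp(-(w * I + b)))  # sigmoid gate (assumed form)
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

# With a constant input, the state settles at the fixed point f*A / (1/tau + f)
x = 0.0
for _ in range(1000):
    x = ltc_euler_step(x, I=1.0)
```

With the values above the state converges to roughly 0.38, and the fixed point is bounded by A for any input, which hints at the stability properties discussed in the episode.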
References
https://www.science.org/doi/10.1126/scirobotics.adc8892
https://spectrum.ieee.org/liquid-neural-networks#toggle-gdpr
Thursday May 11, 2023
Get ready for an eye-opening episode! 🎙️
In our latest podcast episode, we dive deep into the world of LoRA (Low-Rank Adaptation) for large language models (LLMs). This groundbreaking technique is revolutionizing the way we approach language model training by leveraging low-rank approximations.
Join us as we unravel the mysteries of LoRA and discover how it enables us to retrain LLMs with minimal expenditure of money and resources. We'll explore the ingenious strategies and practical methods that empower you to fine-tune your language models without breaking the bank.
Whether you're a researcher, developer, or language model enthusiast, this episode is packed with invaluable insights. Learn how to unlock the potential of LLMs without draining your resources.
Tune in and join the conversation as we explore LoRA (low-rank adaptation) and show you how to retrain LLMs on a budget.
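In code, the core trick is tiny. Here is a minimal NumPy sketch: the pretrained weight W0 is frozen, and only two small low-rank factors are trained, so the adapted layer starts out identical to the original. The dimensions, scaling factor, and initialization below are illustrative choices, not the exact hyperparameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4                  # layer dims and LoRA rank (r << d, k)
W0 = rng.normal(size=(d, k))         # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-init
                                     # so the adapter starts as a no-op

def lora_forward(x, alpha=8.0):
    # h = W0 x + (alpha / r) * B A x  -- only A and B receive gradients
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=k)
h = lora_forward(x)

# Trainable parameters drop from d*k (full fine-tune) to r*(d+k)
full_params, lora_params = d * k, r * (d + k)
```

Because B is zero-initialized, the adapted layer reproduces the frozen layer exactly at the start of training, and the trainable parameter count shrinks from 4096 to 512 in this toy example, which is exactly the "minimal expenditure" the episode is about.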
Listen to the full episode now on your favorite podcast platform! 🎧✨
References
LoRA: Low-Rank Adaptation of Large Language Models https://arxiv.org/abs/2106.09685
Low-rank approximation https://en.wikipedia.org/wiki/Low-rank_approximation
Attention is all you need https://arxiv.org/pdf/1706.03762.pdf
Wednesday May 03, 2023
This is the first episode about the latest trend in artificial intelligence that's shaking up the industry - running large language models locally on your machine. This new approach allows you to bypass the limitations and constraints of cloud-based models controlled by big tech companies, and take control of your own AI journey.
We'll delve into the benefits of running models locally, such as increased speed, improved privacy and security, and greater customization and flexibility. We'll also discuss the technical requirements and considerations for running these models on your own hardware, and provide practical tips and advice to get you started.
Join us as we uncover the secrets to unleashing the full potential of large language models and taking your AI game to the next level!
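As a taste of the "technical requirements" discussion, here is a crude back-of-envelope estimate of how much memory a model needs at different quantization levels. The 20% overhead factor for activations and cache is a rough assumption for illustration, not a measured figure.

```python
def approx_model_memory_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough memory needed just to hold the weights, plus ~20% headroom
    for activations and KV cache (a crude rule of thumb, not a guarantee)."""
    bytes_for_weights = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# A 7B-parameter model at decreasing precision:
for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit: {approx_model_memory_gb(7, bits):.1f} GB")
```

The takeaway: a 7B model that needs tens of gigabytes in full precision fits in a few gigabytes once quantized to 4 bits, which is what makes running these models on consumer hardware plausible at all.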
Sponsors
AI-powered Email Security Best-in-class protection against the most sophisticated attacks,from phishing and impersonation to BEC and zero-day threats https://www.mimecast.com/
References
https://agi-sphere.com/llama-models/
https://crfm.stanford.edu/2023/03/13/alpaca.html
https://beebom.com/how-run-chatgpt-like-language-model-pc-offline/
https://sharegpt.com/
https://stability.ai/
Tuesday Apr 18, 2023
In this episode of our podcast, we dive deep into the fascinating world of Graph Neural Networks.
First, we explore Hierarchical Networks, which allow for the efficient representation and analysis of complex graph structures by breaking them down into smaller, more manageable components.
Next, we turn our attention to Generative Graph Models, which enable the creation of new graph structures that are similar to those in a given dataset. We discuss the inner workings of these models and their potential applications in fields such as drug discovery and social network analysis.
Finally, we delve into the essential Pooling Mechanism, which allows for the efficient passing of information across different parts of the graph neural network. We examine the various types of pooling mechanisms and their advantages and disadvantages.
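To make pooling concrete, here is a minimal NumPy sketch of global mean pooling, one of the simplest mechanisms we discuss: node embeddings are averaged per graph to produce one vector per graph. The batch-vector layout mirrors what common GNN libraries use, but this is illustrative code, not any particular library's API.

```python
import numpy as np

def mean_pool(H, batch):
    """Global mean pooling: average node embeddings per graph.

    H:     (num_nodes, d) node feature matrix
    batch: (num_nodes,) integer array, batch[i] = graph id of node i
    """
    n_graphs = int(batch.max()) + 1
    out = np.zeros((n_graphs, H.shape[1]))
    counts = np.bincount(batch, minlength=n_graphs)
    np.add.at(out, batch, H)          # scatter-add node features per graph
    return out / counts[:, None]

# Two graphs in one batch: nodes 0-1 belong to graph 0, node 2 to graph 1
H = np.array([[1.0, 0.0], [3.0, 2.0], [5.0, 5.0]])
batch = np.array([0, 0, 1])
pooled = mean_pool(H, batch)
```

Mean pooling is order-invariant and cheap, but it discards structure; the hierarchical pooling schemes covered in the episode trade extra computation for keeping more of it.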
Whether you're a seasoned graph neural network expert or just starting to explore the field, this episode has something for you. So join us for a deep dive into the power and potential of Graph Neural Networks.
References
Machine Learning with Graphs - http://web.stanford.edu/class/cs224w/
A Comprehensive Survey on Graph Neural Networks - https://arxiv.org/abs/1901.00596
Tuesday Apr 11, 2023
Tuesday Apr 11, 2023
In this episode, I explore the cutting-edge technology of graph neural networks (GNNs) and how they are revolutionizing the field of artificial intelligence. I break down the complex concepts behind GNNs and explain how they work by modeling the relationships between data points in a graph structure.
I also delve into the various real-world applications of GNNs, from drug discovery to recommendation systems, and how they are outperforming traditional machine learning models.
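To make "modeling the relationships between data points in a graph structure" concrete, here is a hand-rolled sketch of one common message-passing layer (a GCN-style update): each node's new embedding is a degree-normalized average over its neighborhood, passed through a linear map and a ReLU. The tiny triangle graph and identity weight matrix are illustrative choices.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)   # aggregate neighbors, then ReLU

# Triangle graph (every node connected to every other), 2-d node features
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
H1 = gcn_layer(A, H, np.eye(2))
```

On the fully connected triangle every node ends up with the same embedding, the mean of all features, which illustrates both the power of neighborhood aggregation and the over-smoothing risk of stacking too many such layers.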
Join me as I demystify this exciting area of AI research and explore the power of graph neural networks.
Tuesday Apr 04, 2023
Leveling Up AI: Reinforcement Learning with Human Feedback (Ep. 222)
In this episode, we dive into the not-so-secret sauce of ChatGPT and what makes it different from its predecessors in the field of NLP and large language models.
We explore how human feedback can be used to speed up the learning process in reinforcement learning, making it more efficient and effective.
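One concrete piece of that recipe is the reward model trained on human preference pairs. Below is a minimal sketch of the pairwise (Bradley-Terry-style) loss such a model typically minimizes: the loss is small when the human-preferred response gets the higher score. The scores here are made-up illustrative numbers.

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Pairwise preference loss used to train a reward model in RLHF:
    -log sigmoid(r_chosen - r_rejected).

    Low when the model scores the human-preferred response higher,
    large when it ranks the rejected response above the chosen one."""
    return -np.log(1.0 / (1.0 + np.exp(-(r_chosen - r_rejected))))

# A reward model that agrees with the human label pays a small loss...
good = preference_loss(r_chosen=2.0, r_rejected=-1.0)
# ...and a large one when it ranks the rejected answer higher.
bad = preference_loss(r_chosen=-1.0, r_rejected=2.0)
```

The trained reward model then stands in for the human, providing a dense learning signal so the policy can be optimized with reinforcement learning far faster than if a person had to score every sample.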
Whether you're a machine learning practitioner, researcher, or simply curious about how machines learn, this episode will give you a fascinating glimpse into the world of reinforcement learning with human feedback.
Sponsors
This episode is supported by How to Fix the Internet, a cool podcast from the Electronic Frontier Foundation, and by Bloomberg, a global provider of financial news and information, including real-time and historical price data, financial data, trading news, and analyst coverage.
References
Learning through human feedback
https://www.deepmind.com/blog/learning-through-human-feedback
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
https://arxiv.org/abs/2204.05862
Thursday Mar 30, 2023
The promise and pitfalls of GPT-4 (Ep. 221)
In this episode, we explore the potential of the highly anticipated GPT-4 language model and the challenges that come with its development. From its ability to generate highly coherent and creative text to concerns about ethical considerations and the potential misuse of such technology, we delve into the promise and pitfalls of GPT-4. Join us as we speak with experts in the field to gain insights into the latest developments and the impact that GPT-4 could have on the future of natural language processing.
Tuesday Mar 14, 2023
AI’s Impact on Software Engineering: Killing Old Principles? (Ep. 220)
In this episode, we dive into the ways in which AI and machine learning are disrupting traditional software engineering principles. With the advent of automation and intelligent systems, developers are increasingly relying on algorithms to create efficient and effective code. However, this reliance on AI can come at a cost to the tried-and-true methods of software engineering. Join us as we explore the pros and cons of this paradigm shift and discuss what it means for the future of software development.
Thursday Mar 09, 2023
Edge AI applications for military and space [RB] (Ep. 219)
Wednesday Feb 15, 2023
[RB] Online learning is better than batch, right? Wrong! (Ep. 216)
In this episode I speak about online learning systems and why blindly choosing such a paradigm can lead to very unpredictable and expensive outcomes. Also in this episode, I have to deal with an intruder :)
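To make the online-vs-batch distinction concrete, here is a toy sketch fitting a one-parameter linear model both ways. The data, learning rate, and update rules are illustrative; real production systems add many complications (drift, feedback loops, delayed labels) that this toy ignores, which is part of the episode's point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: y = 2x plus a little noise
X = rng.normal(size=200)
y = 2.0 * X + 0.1 * rng.normal(size=200)

# Online: one noisy update per sample, in arrival order.
# Cheap and adaptive, but order-sensitive and never revisits old data.
w_online, lr = 0.0, 0.05
for xi, yi in zip(X, y):
    w_online += lr * (yi - w_online * xi) * xi

# Batch: one averaged gradient step per full pass over the data.
# Stable and order-independent, but needs all the data in hand.
w_batch = 0.0
for _ in range(200):
    grad = np.mean((y - w_batch * X) * X)
    w_batch += lr * grad
```

Both estimates land near the true slope of 2 on this well-behaved toy problem; the trouble the episode discusses starts when the incoming stream is non-stationary or adversarial, where the per-sample updates of the online learner can drift in ways a batch pipeline would catch.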
Links
Birman, K.; Joseph, T. (1987). "Exploiting virtual synchrony in distributed systems". Proceedings of the Eleventh ACM Symposium on Operating Systems Principles - SOSP '87. pp. 123–138. doi:10.1145/41457.37515. ISBN 089791242X. S2CID 7739589.