A machine learning podcast that explores more than just algorithms and data: Life lessons from the experts. Welcome to "Learning from Machine Learning," a podcast about the insights gained from a career in the field of Machine Learning and Data Science. In each episode, industry experts, entrepreneurs and practitioners will share their experiences and advice on what it takes to succeed in this rapidly-evolving field. But this podcast is not just about the technical aspects of ML. It will also delve into the ways machine learning is changing the world around us. From the implications of artificial intelligence to the ways machine learning is being applied in various sectors, a wide range of topics will be covered that are relevant to anyone interested in the intersection of technology and society. All interviews available on YouTube: https://www.youtube.com/@learningfrommachinelearning
In this episode of Learning from Machine Learning, we explore the intersection of pure mathematics and modern data science with Leland McInnes, the mind behind an ecosystem of tools for unsupervised learning including UMAP, HDBSCAN, PyNN Descent and DataMapPlot. As a researcher at the Tutte Institute for Mathematics and Computing, McInnes has fundamentally shaped how we approach and understand complex data.

McInnes views data through a unique geometric lens, drawing on his background in algebraic topology to uncover hidden patterns and relationships within complex datasets. This perspective led to the creation of UMAP, a breakthrough in dimensionality reduction that preserves both local and global data structure, enabling powerful visualizations and clustering. Similarly, his clustering algorithm HDBSCAN tackles the messy reality of real-world data, handling varying densities and noise with remarkable effectiveness.

But perhaps what's most striking about McInnes isn't his technical achievements alone; it's his philosophy of algorithm development. He champions the concept of "decomposing black box algorithms," advocating for transparency and understanding over blind implementation. By breaking complex algorithms down into their fundamental components, McInnes argues, we gain the power to adapt and innovate rather than simply consume.

For those entering the field, McInnes offers pointed advice: resist the urge to chase the hype. Instead, find your unique angle, even if it seems unconventional. His own journey, applying concepts from algebraic topology and fuzzy simplicial sets to data science, demonstrates how breakthrough innovations often emerge from unexpected connections.

Throughout our conversation, McInnes's passion for knowledge and commitment to understanding shine through.
His approach reminds us that the most powerful advances in data science often come not from following the crowd, but from diving deep into fundamentals and drawing connections across disciplines. There's immense value in understanding the tools you use, questioning established approaches, and bringing your unique perspective to the field. As McInnes shows us, sometimes the most significant breakthroughs come from seeing familiar problems through a new lens.

Resources for Leland McInnes
Leland's Github
UMAP
HDBSCAN
PyNN Descent
DataMapPlot
EVoC

References
Maarten Grootendorst
Learning from Machine Learning Episode 1
Vincent Warmerdam - Calmcode
Learning from Machine Learning Episode 2
Matt Rocklin
Emily Riehl - Category Theory in Context
Lorena Barba
David Spivak - Fuzzy Simplicial Sets
Improving Mapper's Robustness by Varying Resolution According to Lens-Space Density

Learning from Machine Learning
YouTube
https://mindfulmachines.substack.com/
In this episode, we are joined by Chris Van Pelt, co-founder of Weights & Biases and Figure Eight/CrowdFlower. Chris has played a pivotal role in the development of MLOps platforms and has dedicated the last two decades to refining ML workflows and making machine learning more accessible.

Throughout the conversation, Chris provides valuable insights into the current state of the industry. He emphasizes the significance of Weights & Biases as a powerful developer tool, empowering ML engineers to navigate the complexities of experimentation, data visualization, and model improvement. His candid reflections on the challenges of evaluating ML models and addressing the gap between AI hype and reality offer a profound understanding of the field's intricacies.

Drawing from his entrepreneurial experience co-founding two machine learning companies, Chris leaves us with lessons in resilience, innovation, and a deep appreciation for the human dimension of the tech landscape. As a Weights & Biases user of five years who has witnessed both the tool's and the company's growth, it was a genuine honor to host Chris on the show.

References and Resources
https://wandb.ai/
https://www.youtube.com/c/WeightsBiases
https://x.com/weights_biases
https://www.linkedin.com/company/wandb/
https://twitter.com/vanpelt

Resources to learn more about Learning from Machine Learning
https://www.youtube.com/@learningfrommachinelearning
https://www.linkedin.com/company/learning-from-machine-learning
https://mindfulmachines.substack.com/
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p
This episode features Dr. Michelle Gill, Tech Lead and Applied Research Manager at NVIDIA, working on transformative projects like BioNeMo to accelerate drug discovery through AI. Her team explores bio-foundation models that enable researchers to better perform tasks like protein folding and small-molecule binding.

Michelle shares her incredible journey from wet-lab biochemist to driving cutting-edge AI at NVIDIA. She discusses the overlap and differences between NLP and AI in biology, and outlines the critical need for better machine learning representations that capture the intricate dynamics of biology.

Michelle offers advice for beginners and early-career professionals in machine learning, emphasizing the importance of continuous learning and staying up to date with the latest tools and techniques. She also shares insights on building successful multidisciplinary teams.

After hearing her fascinating PyData NYC keynote, it was such an honor to have her on the show to discuss innovations at the intersection of biochemistry and AI.

References and Resources
https://michellelynngill.com/
Michelle Gill - Keynote - PyData NYC - https://www.youtube.com/watch?v=ATo2SzA1Pp4
AlexNet
AlphaFold - https://www.nature.com/articles/s41586-021-03819-2
OpenFold - https://www.biorxiv.org/content/10.1101/2022.11.20.517210v1
BioNeMo - https://www.nvidia.com/en-us/clara/bionemo/
NeurIPS - https://nips.cc/
Art Palmer - https://www.biochem.cuimc.columbia.edu/profile/arthur-g-palmer-iii-phd
Patrick Loria - https://chem.yale.edu/faculty/j-patrick-loria
Scott Strobel - https://chem.yale.edu/faculty/scott-strobel
Alexander Rives - https://www.forbes.com/sites/kenrickcai/2023/08/25/evolutionaryscale-ai-biotech-startup-meta-researchers-funding/?sh=648f1a1140cf
Debora Marks - https://sysbio.med.harvard.edu/debora-marks

Resources to learn more about Learning from Machine Learning
https://www.linkedin.com/company/learning-from-machine-learning
https://mindfulmachines.substack.com/
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p
This episode features Ines Montani, co-founder and CEO of Explosion. Listen in as we discuss the evolution of the web and machine learning, the development of spaCy, Natural Language Processing vs. Natural Language Understanding, the misconceptions of starting a software company, and so much more!

Ines is a software developer working on Artificial Intelligence and Natural Language Processing technologies. She's the co-founder and CEO of Explosion, the company behind spaCy, one of the leading open-source libraries for NLP in Python, and Prodigy, an annotation tool for creating training data for machine learning models. Ines has an academic background in Communication Science, Media Studies and Linguistics and has been coding and designing websites since she was 11. She has been a keynote speaker at Python and Data Science conferences around the world.

Learning from Machine Learning, a podcast that explores more than just algorithms and data: Life lessons from the experts.

Listen on YouTube: https://youtu.be/XNFqFT-DZwo?si=Aj75TmsCyBQTyWqq
Listen on your favorite podcast platform: https://rss.com/podcasts/learning-from-machine-learning/1190862/

References in the Episode
https://explosion.ai/
https://spacy.io/
https://ines.io/
Applied NLP Thinking
Ines Montani - How to Ignore Most Startup Advice and Build a Decent Software Business
Ines Montani: Incorporating LLMs into practical NLP workflows
Ines Montani (spaCy) - Large Language Models from Prototype to Production [PyData Südwest]
Confection - https://github.com/explosion/confection

Resources to learn more about Learning from Machine Learning
https://www.linkedin.com/company/learning-from-machine-learning
https://mindfulmachines.substack.com/
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p
This episode features Lewis Tunstall, machine learning engineer at Hugging Face and author of the best-selling book Natural Language Processing with Transformers. He currently focuses on one of the hottest topics in NLP right now: reinforcement learning from human feedback (RLHF). Lewis holds a PhD in quantum physics, and his research has taken him around the world and into some of the most impactful projects, including the Large Hadron Collider, the world's largest and most powerful particle accelerator. Lewis shares his unique story from quantum physicist to data scientist to machine learning engineer.

Resources to learn more about Lewis Tunstall
https://www.linkedin.com/in/lewis-tunstall/
https://github.com/lewtun

References from the Episode
https://www.fast.ai/
https://jeremy.fast.ai/
SetFit - https://arxiv.org/abs/2209.11055
Proximal Policy Optimization
InstructGPT
RAFT Benchmark
Bidirectional Language Models are Also Few-Shot Learners
Nils Reimers - Sentence Transformers
Jay Alammar - Illustrated Transformer
Annotated Transformer
Moshe Wasserblat, Intel, NLP, Research Manager
Leandro von Werra, Co-Author of NLP with Transformers, Hugging Face Researcher
LMSYS - https://lmsys.org/
LoRA - Low-Rank Adaptation of Large Language Models

Resources to learn more about Learning from Machine Learning
https://www.linkedin.com/company/learning-from-machine-learning
https://mindfulmachines.substack.com/
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p
This episode features Paige Bailey, lead product manager for generative models at Google DeepMind. Paige's work has helped transform the way people work and design software using the power of machine learning. Her current work pushes the boundaries of innovation with Bard and the soon-to-be-released Gemini.

Learning from Machine Learning, a podcast that explores more than just algorithms and data: Life lessons from the experts.

Resources to learn more about Paige Bailey
https://twitter.com/DynamicWebPaige
https://github.com/dynamicwebpaige

References from the Episode
Diamond Age - Neal Stephenson - https://amzn.to/3BCwk4n
Google DeepMind - https://www.deepmind.com/
Google Research - https://research.google/
JAX - https://jax.readthedocs.io/en/latest/
Jeff Dean - https://research.google/people/jeff/
Oriol Vinyals - https://research.google/people/OriolVinyals/
Roy Frostig - https://cs.stanford.edu/~rfrostig/
Matt Johnson - https://www.linkedin.com/in/matthewjamesjohnson/
Peter Hawkins - https://github.com/hawkinsp
Skye Wanderman-Milne - https://www.linkedin.com/in/skye-wanderman-milne-73887b29/
Yash Katariya - https://www.linkedin.com/in/yashkatariya/
Andrej Karpathy - https://karpathy.ai/

Resources to learn more about Learning from Machine Learning
https://www.linkedin.com/company/learning-from-machine-learning
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p
This episode we welcome Sebastian Raschka, Lead AI Educator at Lightning AI and author of Machine Learning with PyTorch and Scikit-Learn, to discuss the best ways to learn machine learning, his open-source work, how to use ChatGPT, AGI, responsible AI and so much more. Sebastian is a fountain of knowledge, and it was a pleasure to get his insights on this fast-moving industry.

Learning from Machine Learning, a podcast that explores more than just algorithms and data: Life lessons from the experts.

Resources to learn more about Sebastian Raschka and his work
https://sebastianraschka.com/
https://lightning.ai/
Machine Learning with PyTorch and Scikit-Learn
Machine Learning Q and AI

Resources to learn more about Learning from Machine Learning and the host
https://www.linkedin.com/company/learning-from-machine-learning
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p
Twitter

References from the Episode
https://scikit-learn.org/stable/
http://rasbt.github.io/mlxtend/
https://github.com/BioPandas/biopandas
Understanding and Coding the Self-Attention Mechanism of Large Language Models From Scratch
Andrew Ng - https://www.andrewng.org/
Andrej Karpathy - https://karpathy.ai/
Paige Bailey - https://github.com/dynamicwebpaige

Contents
01:15 - Career Background
05:18 - Industry vs. Academia
08:18 - First Project in ML
15:04 - Open Source Projects Involvement
20:00 - Machine Learning: Q&AI
24:18 - ChatGPT as Brainstorm Assistant
25:38 - Hype vs. Reality
27:55 - AGI
31:00 - Use Cases for Generative Models
34:01 - Should the goal be to replicate human intelligence?
39:18 - Delegating Tasks using LLMs
42:26 - ML Models Are Overconfident on Out-of-Distribution Data
44:54 - Responsible AI and ML
45:59 - Complexity of ML Systems
47:26 - Trend for ML Practitioners to Move to AI Ethics
49:27 - What advice would you give to someone just starting out?
52:20 - Advice that you've received that has helped you
54:08 - Andrew Ng's Advice
55:20 - Exercise of Implementing Algorithms from Scratch
59:00 - Who else has influenced you?
01:01:18 - Production and Real-World Applications - Don't Reinvent the Wheel
01:03:00 - What has a career in ML taught you about life?
This episode welcomes Nils Reimers, Director of Machine Learning at Cohere and former researcher at Hugging Face, to discuss Natural Language Processing, Sentence Transformers and the future of Machine Learning. Nils is best known as the creator of Sentence Transformers, a powerful framework for generating high-quality sentence embeddings that has become increasingly popular in the ML community, with over 9K stars on GitHub. With Sentence Transformers, Nils has enabled researchers and developers (including me) to train state-of-the-art models for a wide range of NLP tasks, including text classification, semantic similarity, and question answering. His contributions have been recognized by numerous awards and publications in top-tier conferences and journals.

Resources to learn more about Nils Reimers and his work
https://www.nils-reimers.de/
https://www.sbert.net/
https://scholar.google.com/citations?...
https://cohere.ai/

Resources to learn more about Learning from Machine Learning
https://www.linkedin.com/company/learning-from-machine-learning
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p

YouTube Clips
02:29 - What attracted you to Machine Learning?
06:32 - What is Sentence Transformers?
28:02 - Benchmarks and P-Hacking
33:53 - What's an important question that remains unanswered in Machine Learning?
38:41 - How do you view the gap between the hype and the reality in Machine Learning?
50:45 - What advice would you give to someone just starting out?
52:30 - What advice would you give yourself when you were just starting out in your career?
57:22 - What has a career in ML taught you about life?
Learning from Machine Learning, a podcast that explores more than just algorithms and data: Life lessons from the experts. This episode we welcome Vincent Warmerdam, creator of calmcode and machine learning engineer at Explosion, the company behind spaCy, to discuss Data Science, models and much more. @learningfrommachinelearning

Resources to learn more about Vincent Warmerdam
https://calmcode.io/
https://youtu.be/kYMfE9u-lMo
https://youtu.be/S7vhi6RjBZA
https://github.com/koaning

References from the Episode
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place - https://amzn.to/3Jt1qjX
The Future of Operational Research is Past - https://ackoffcenter.blogs.com/files/the-future-of-operational-research-is-past.pdf
Supervised Learning is great - it's data collection that's broken - https://explosion.ai/blog/supervised-learning-data-collection
Deon - An ethics checklist for data scientists - https://deon.drivendata.org/
Hadley Wickham - https://hadley.nz/
Katharine Jarmul - https://www.linkedin.com/in/katharinejarmul/?originalSubdomain=de
Vicki Boykis - https://vickiboykis.com/
Bret Victor - https://youtu.be/8pTEmbeENF4

Resources to learn more about Learning from Machine Learning
https://www.linkedin.com/company/learning-from-machine-learning/
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p
The inaugural episode of Learning from Machine Learning, a podcast that explores more than just algorithms and data: Life lessons from the experts. This episode we welcome Maarten Grootendorst to discuss BERTopic, Data Science, Psychology and the future of Machine Learning and Natural Language Processing.

Towards Data Science article featuring this interview

Resources to learn more about Maarten Grootendorst
https://www.maartengrootendorst.com/
https://maartengr.github.io/BERTopic/
https://www.linkedin.com/in/mgrootendorst/
https://twitter.com/MaartenGr
https://medium.com/@maartengrootendorst

Resources to learn more about Learning from Machine Learning
https://www.linkedin.com/company/learning-from-machine-learning/
https://www.linkedin.com/in/sethplevine/
https://medium.com/@levine.seth.p