Today I had the pleasure of interviewing Ruchi Bhatia. She is a 3x Kaggle Grandmaster and a grad student at Carnegie Mellon studying Information Systems Management. Ruchi is also a fellow Z by HP global data science ambassador. In this episode we talk about her experience becoming a Kaggle Grandmaster, how she was able to kickstart her career through sharing her work, and the implications of ChatGPT for students, teachers, and Kagglers. Ruchi's Links: LinkedIn - https://www.linkedin.com/in/ruchi798/ Kaggle - https://www.kaggle.com/ruchi798 Twitter - https://twitter.com/ruchi798?lang=en
YT Channel: https://www.youtube.com/@ChaiTimeDataScience Learn more about H2O's Hydrogen Torch app here: https://h2o.ai/platform/ai-cloud/make/hydrogen-torch/ In this episode, Sanyam Bhutani hosts the world's top Kagglers and best data scientists to learn and debate the best practices for training ML models. We will learn from their experience of having won multiple competitions and built many incredible products at H2O: what are the best practices for taming your ML model? Panelists: Dmitry Gordeev: https://www.kaggle.com/dott1718 https://twitter.com/dott1718 https://www.linkedin.com/in/dmitry-gordeev-50116023/?originalSubdomain=at Gabor Fodor: https://www.kaggle.com/gaborfodor https://www.linkedin.com/in/gábor-fodor-6a081548/?originalSubdomain=hu Pascal Pfeiffer: https://www.kaggle.com/ilu000 https://www.linkedin.com/in/pascal-pfeiffer/ Philipp Singer: https://www.kaggle.com/philippsinger https://twitter.com/ph_singer https://www.linkedin.com/in/philippsinger/ Yauhen Babakhin: https://www.kaggle.com/ybabakhin https://www.linkedin.com/in/yauhenbabakhin/ Sanyam Bhutani: https://www.kaggle.com/init27 https://twitter.com/bhutanisanyam1 https://www.linkedin.com/in/sanyambhutani/ A show for interviews with Practitioners, Kagglers & Researchers hosted by Sanyam.
Channel: http://youtube.com/c/ChaiTimeDataScience/ Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Amed Coulibaly about his journey and reflections on reaching Kaggle Competitions Grandmaster. They also dive into his team's 3rd place solution to the recently ended Feedback Prize competition. Links: Solution: https://www.kaggle.com/competitions/feedback-prize-english-language-learning/discussion/369609 Follow: Amed Coulibaly: Twitter: https://twitter.com/Amedprof Linkedin: https://www.linkedin.com/in/amed-coulibaly-94150610a/ Kaggle: https://www.kaggle.com/amedprof Sanyam Bhutani: Twitter: https://twitter.com/bhutanisanyam1 LinkedIn: https://www.linkedin.com/in/sanyambhutani/ Kaggle: https://www.kaggle.com/init27 Blog: sanyambhutani.com A show for Interviews with Practitioners, Kagglers & Researchers, and all things Data Science hosted by Sanyam Bhutani.
Video Version: https://youtu.be/qObfeWYbrPM In this episode, Sanyam Bhutani interviews the hosts of the AI Today Podcast: Kathleen Walch and Ronald Schmelzer. They talk about their journey into AI, creating AI content, and the AI Today Podcast. Links: AI Today Podcast: https://www.cognilytica.com/aitoday/ Cognilytica: https://www.cognilytica.com Follow: Kathleen Walch: Twitter: https://twitter.com/kath0134 Linkedin: https://www.linkedin.com/in/kathleen-walch-50185112/ Ronald Schmelzer: Twitter: https://twitter.com/rschmelzer Linkedin: https://www.linkedin.com/in/rschmelzer/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani.
Video Version: https://youtu.be/W3aWEXqIkWk Blog Overview: http://sanyambhutani.com/interview-with-the-nvidia-acm-recsys-2021-winning-team Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews a panel from the ACM RecSys 2021 winning competition team at NVIDIA. They explain why recommender systems are such a hard problem, how GPUs can accelerate them, and how such solutions can be productized. The team also gives a ground-up, complete overview of their solution: their approaches to the problem, how they arrived at the solution, and the tricks that they discovered and very generously shared in this interview. Links: Interview with Even Oldridge: https://youtu.be/-WzXIV8P_Jk Interview with Chris Deotte: https://youtu.be/QGCvycOXs2M Open Source Solution: https://github.com/NVIDIA-Merlin/competitions/tree/main/RecSys2021_Challenge Paper Link: https://github.com/NVIDIA-Merlin/competitions/blob/main/RecSys2021_Challenge/GPU-Accelerated-Boosted-Trees-and-Deep-Neural-Networks-for-Better-Recommender-Systems.pdf Follow: Benedikt Schifferer: Linkedin: https://www.linkedin.com/in/benedikt-schifferer/ Bo Liu: Twitter: https://twitter.com/boliu0 Kaggle: https://www.kaggle.com/boliu0 Chris Deotte: Twitter: https://twitter.com/ChrisDeotte Kaggle: https://www.kaggle.com/cdeotte Even Oldridge: Twitter: https://twitter.com/even_oldridge Linkedin: https://www.linkedin.com/in/even-oldridge/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers, and all things Data Science hosted by Sanyam Bhutani.
Personal Note: This was a huge honor for me to meet Harrison and have him on the podcast! In this episode, Sanyam Bhutani interviews THE SentDex about his journey as an entrepreneur, YouTube content creator, educator, and author. They talk about his journey on and off the platform and the R&D that happens behind the scenes for the incredible videos that we get to see. They also discuss how the NVIDIA DGX A100 box has been helping Harrison over his past few months of usage. Thanks to our friends at NVIDIA for helping make this conversation happen! Links: NNFS.io: https://nnfs.io DGX-A100: https://www.nvidia.com/en-in/data-center/dgx-a100/ Interview with Charlie Boyle: https://www.youtube.com/watch?v=SiUnKGD90uI Follow: Harrison Kinsley: YouTube: https://www.youtube.com/user/sentdex Twitter: https://twitter.com/Sentdex Website: https://hkinsley.com Linkedin: https://www.linkedin.com/in/hkinsley/ Instagram: https://www.instagram.com/sentdex/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. #Python #machinelearning #SentDex
Personal Note: I'm so happy to release a new interview after a really long break! Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Clair Sullivan, Graph Data Science Advocate at Neo4j. They talk about Clair's journey from taking up the challenge of becoming an engineer to later transitioning from academia back into industry. Links: Neo4j: https://neo4j.com NODES 2021: https://neo4j.com/event/nodes-2021/ Follow: Clair Sullivan: https://twitter.com/CJLovesData1 https://www.linkedin.com/in/dr-clair-sullivan-09914342/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani.
Spence shares his experience creating a product around human-in-the-loop machine translation, and explains how machine translation has evolved over the years. --- Spence Green is co-founder and CEO of Lilt, an AI-powered language translation platform. Lilt combines human translators and machine translation in order to produce high-quality translations more efficiently. ---
Roger and DJ share some of the history behind data science as we know it today, and reflect on their experiences working on California's COVID-19 response. --- Roger Magoulas is Senior Director of Data Strategy at Astronomer, where he works on data infrastructure, analytics, and community development. Previously, he was VP of Research at O'Reilly and co-chair of O'Reilly's Strata Data and AI Conference. DJ Patil is a board member and former CTO of Devoted Health, a healthcare company for seniors. He was also Chief Data Scientist under the Obama administration and the Head of Data Science at LinkedIn. Roger and DJ recently volunteered for the California COVID-19 response, and worked with data to understand case counts, bed capacities and the impact of intervention. Connect with Roger and DJ:
Amelia and Filip give insights into the recommender systems powering Pandora, from developing models to balancing effectiveness and efficiency in production. --- Amelia Nybakke is a Software Engineer at Pandora. Her team is responsible for the production system that serves models to listeners. Filip Korzeniowski is a Senior Scientist at Pandora working on recommender systems. Before that, he was a PhD student working on deep neural networks for acoustic and language modeling applied to musical audio recordings. Connect with Amelia and Filip:
From Apache TVM to OctoML, Luis gives direct insight into the world of ML hardware optimization, and where systems optimization is heading. --- Luis Ceze is co-founder and CEO of OctoML, co-author of the Apache TVM Project, and Professor of Computer Science and Engineering at the University of Washington. His research focuses on the intersection of computer architecture, programming languages, machine learning, and molecular biology. Connect with Luis:
Matthew explains how combining machine learning and computational biology can provide mainstream medicine with better diagnostics and insights. --- Matthew Davis is Head of AI at Invitae, the largest and fastest growing genetic testing company in the world. His research includes bioinformatics, computational biology, NLP, reinforcement learning, and information retrieval. Matthew was previously at IBM Research AI, where he led a research team focused on improving AI systems. Connect with Matthew:
Clem explains the virtuous cycles behind the creation and success of Hugging Face, and shares his thoughts on where NLP is heading. --- Clément Delangue is co-founder and CEO of Hugging Face, the AI community building the future. Hugging Face started as an open source NLP library and has quickly grown into a commercial product used by over 5,000 companies. Connect with Clem: Twitter: https://twitter.com/ClementDelangue LinkedIn: https://www.linkedin.com/in/clementdelangue/ --- Topics Discussed: 0:00 Sneak peek and intro 0:56 What is Hugging Face? 4:15 The success of Hugging Face Transformers 7:53 Open source and virtuous cycles 10:37 Working with both TensorFlow and PyTorch 13:20 The "Write With Transformer" project 14:36 Transfer learning in NLP 16:43 BERT and DistilBERT 22:33 GPT 26:32 The power of the open source community 29:40 Current applications of NLP 35:15 The Turing Test and conversational AI 41:19 Why speech is an upcoming field within NLP 43:44 The human challenges of machine learning Transcript: http://wandb.me/gd-clement-delangue Links Discussed: Write With Transformer, Hugging Face Transformer's text generation demo: https://transformer.huggingface.co/ "Attention Is All You Need" (Vaswani et al., 2017): https://arxiv.org/abs/1706.03762 EleutherAI and GPT-Neo: https://github.com/EleutherAI/gpt-neo Rasa, open source conversational AI: https://rasa.com/ --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Wojciech joins us to talk about the principles behind OpenAI, the Fermi Paradox, and the future stages of developments in AGI. --- Wojciech Zaremba is a co-founder of OpenAI, a research company dedicated to discovering and enacting the path to safe artificial general intelligence. He was also Head of Robotics, where his team developed general-purpose robots through new approaches to transfer learning, and taught robots complex behaviors. Connect with Wojciech: Personal website: https://wojzaremba.com/ Twitter: https://twitter.com/woj_zaremba --- Topics Discussed: 0:00 Sneak peek and intro 1:03 The people and principles behind OpenAI 6:31 The stages of future AI developments 13:42 The Fermi paradox 16:18 What drives Wojciech? 19:17 Thoughts on robotics 24:58 Dota and other projects at OpenAI 33:42 What would make an AI conscious? 41:31 How to succeed in robotics Transcript: http://wandb.me/gd-wojciech-zaremba Links: Fermi paradox: https://en.wikipedia.org/wiki/Fermi_paradox OpenAI and Dota: https://openai.com/projects/five/ --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Phil shares some of the approaches, like sparsity and low precision, behind the breakthrough performance of Graphcore's Intelligence Processing Units (IPUs). --- Phil Brown leads the Applications team at Graphcore, where they're building high-performance machine learning applications for their Intelligence Processing Units (IPUs), new processors specifically designed for AI compute. Connect with Phil: LinkedIn: https://www.linkedin.com/in/philipsbrown/ Twitter: https://twitter.com/phil_s_brown --- 0:00 Sneak peek, intro 1:44 From computational chemistry to Graphcore 5:16 The simulations behind weather prediction 10:54 Measuring improvement in weather prediction systems 15:35 How high performance computing and ML have different needs 19:00 The potential of sparse training 31:08 IPUs and computer architecture for machine learning 39:10 On performance improvements 44:43 The impacts of increasing computing capability 50:24 The ML chicken and egg problem 52:00 The challenges of converging at scale and bringing hardware to market Links Discussed: Rigging the Lottery: Making All Tickets Winners (Evci et al., 2019): https://arxiv.org/abs/1911.11134 Graphcore MK2 Benchmarks: https://www.graphcore.ai/mk2-benchmarks Check out the transcription and discover more awesome ML projects: http://wandb.me/phil-brown-podcast --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out our Gallery, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/gallery
From working on COVID-19 vaccine rollout to writing a book on responsible ML, Alyssa shares her thoughts on meaningful projects and the importance of teamwork. --- Alyssa Simpson Rochwerger is a Director of Product at Blue Shield of California, pursuing her dream of using technology to improve healthcare. She has over a decade of experience in building technical data-driven products and has held numerous leadership roles for machine learning organizations, including VP of AI and Data at Appen and Director of Product at IBM Watson. --- Topics Discussed: 0:00 Sneak peek, intro 1:17 Working on COVID-19 vaccine rollout in California 6:50 Real World AI 12:26 Diagnosing bias in models 17:43 Common challenges in ML 21:56 Finding meaningful projects 24:28 ML applications in health insurance 31:21 Longitudinal health records and data cleaning 38:24 Following your interests 40:21 Why teamwork is crucial Transcript: http://wandb.me/gd-alyssa-s-rochwerger Links Discussed: My Turn: https://myturn.ca.gov/ "Turn the Ship Around!": https://www.penguinrandomhouse.com/books/314163/turn-the-ship-around-by-l-david-marquet/ --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Sean joins us to chat about ML models and tools at Lyft Rideshare Labs, Python vs R, time series forecasting with Prophet, and election forecasting. --- Sean Taylor is a Data Scientist at (and former Head of) Lyft Rideshare Labs, and specializes in methods for solving causal inference and business decision problems. Previously, he was a Research Scientist on Facebook's Core Data Science team. His interests include experiments, causal inference, statistics, machine learning, and economics. Connect with Sean: Personal website: https://seanjtaylor.com/ Twitter: https://twitter.com/seanjtaylor LinkedIn: https://www.linkedin.com/in/seanjtaylor/ --- Topics Discussed: 0:00 Sneak peek, intro 0:50 Pricing algorithms at Lyft 07:46 Loss functions and ETAs at Lyft 12:59 Models and tools at Lyft 20:46 Python vs R 25:30 Forecasting time series data with Prophet 33:06 Election forecasting and prediction markets 40:55 Comparing and evaluating models 43:22 Bottlenecks in going from research to production Transcript: http://wandb.me/gd-sean-taylor Links Discussed: "How Lyft predicts a rider’s destination for better in-app experience": https://eng.lyft.com/how-lyft-predicts-your-destination-with-attention-791146b0a439 Prophet: https://facebook.github.io/prophet/ Andrew Gelman's blog post "Facebook's Prophet uses Stan": https://statmodeling.stat.columbia.edu/2017/03/01/facebooks-prophet-uses-stan/ Twitter thread "Election forecasting using prediction markets": https://twitter.com/seanjtaylor/status/1270899371706466304 "An Updated Dynamic Bayesian Forecasting Model for the 2020 Election": https://hdsr.mitpress.mit.edu/pub/nw1dzd02/release/1 --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Polly explains how microfluidics allow bioengineering researchers to create high throughput data, and shares her experiences with biology and machine learning. --- Polly Fordyce is an Assistant Professor of Genetics and Bioengineering and fellow of the ChEM-H Institute at Stanford. She is the Principal Investigator of The Fordyce Lab, which focuses on developing and applying new microfluidic platforms for quantitative, high-throughput biophysics and biochemistry. Twitter: https://twitter.com/fordycelab Website: http://www.fordycelab.com/ --- Topics Discussed: 0:00 Sneak peek, intro 2:11 Background on protein sequencing 7:38 How changes to a protein's sequence alters its structure and function 11:07 Microfluidics and machine learning 19:25 Why protein folding is important 25:17 Collaborating with ML practitioners 31:46 Transfer learning and big data sets in biology 38:42 Where Polly hopes bioengineering research will go 42:43 Advice for students Transcript: http://wandb.me/gd-polly-fordyce Links Discussed: "The Weather Makers": https://en.wikipedia.org/wiki/The_Wea... --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
Adrien Gaidon shares his approach to building teams and taking state-of-the-art research from conception to production at Toyota Research Institute. --- Adrien Gaidon is the Head of Machine Learning Research at the Toyota Research Institute (TRI). His research focuses on scaling up ML for robot autonomy, spanning Scene and Behavior Understanding, Simulation for Deep Learning, 3D Computer Vision, and Self-Supervised Learning. Connect with Adrien: Twitter: https://twitter.com/adnothing LinkedIn: https://www.linkedin.com/in/adrien-gaidon-63ab2358/ Personal website: https://adriengaidon.com/ --- Topics Discussed: 0:00 Sneak peek, intro 0:48 Guitars and other favorite tools 3:55 Why is PyTorch so popular? 11:40 Autonomous vehicle research in the long term 15:10 Game-changing academic advances 20:53 The challenges of bringing autonomous vehicles to market 26:05 Perception and prediction 35:01 Fleet learning and meta learning 41:20 The human aspects of machine learning 44:25 The scalability bottleneck Transcript: http://wandb.me/gd-adrien-gaidon Links Discussed: TRI Global Research: https://www.tri.global/research/ todoist: https://todoist.com/ Contrastive Learning of Structured World Models: https://arxiv.org/abs/2002.05709 SimCLR: https://arxiv.org/abs/2002.05709 --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out Fully Connected, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/fully-connected
A look at how Nimrod and the team at Nanit are building smart baby monitor systems, from data collection to model deployment and production monitoring. --- Nimrod Shabtay is a Senior Computer Vision Algorithm Developer at Nanit, a New York-based company that's developing better baby monitoring devices. Connect with Nimrod: LinkedIn: https://www.linkedin.com/in/nimrod-shabtay-76072840/ --- Links Discussed: Guidelines for building an accurate and robust ML/DL model in production: https://engineering.nanit.com/guideli... Careers at Nanit: https://www.nanit.com/jobs --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud --- Join our community of ML practitioners where we host AMAs, share interesting projects, and more: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Chris shares some of the incredible work and innovations behind deep space exploration at NASA JPL and reflects on the past, present, and future of machine learning. --- Chris Mattmann is the Chief Technology and Innovation Officer at NASA Jet Propulsion Laboratory, where he focuses on organizational innovation through technology. He's worked on space missions such as the Orbiting Carbon Observatory 2 and Soil Moisture Active Passive satellites. Chris is also a co-creator of Apache Tika, a content detection and analysis framework that was one of the key technologies used to uncover the Panama Papers, and is the author of "Machine Learning with TensorFlow, Second Edition" and "Tika in Action". Connect with Chris: Twitter: https://twitter.com/chrismattmann Personal website: https://www.mattmann.ai/ --- Timestamps: 0:00 Sneak peek, intro 0:52 On Perseverance and Ingenuity 8:40 Machine learning applications at NASA JPL 11:51 Innovation in scientific instruments and data formats 18:26 Data processing levels: Level 1 vs Level 2 vs Level 3 22:20 Competitive data processing 27:38 Kerbal Space Program 30:19 The ideas behind "Machine Learning with Tensorflow, Second Edition" 35:37 The future of MLOps and AutoML 38:51 Machine learning at the edge Transcription: http://wandb.me/chris-mattmann-podcast Links Discussed: Perseverance and Ingenuity: https://mars.nasa.gov/mars2020/ Data processing levels at NASA: https://earthdata.nasa.gov/collaborate/open-data-services-and-software/data-information-policy/data-levels OCO-2: https://www.jpl.nasa.gov/missions/orbiting-carbon-observatory-2-oco-2 "Machine Learning with TensorFlow, Second Edition" (2020): https://www.manning.com/books/machine-learning-with-tensorflow-second-edition "Tika in Action" (2011): https://www.manning.com/books/tika-in-action --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google Podcasts: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Check out our Gallery, which features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, industry leaders sharing best practices, and more: https://wandb.ai/gallery
From legged locomotion to autonomous driving, Vladlen explains how simulation and abstraction help us understand embodied intelligence. --- Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel, where he leads an international lab of researchers working in machine learning, robotics, computer vision, computational science, and related areas. Connect with Vladlen: Personal website: http://vladlen.info/ LinkedIn: https://www.linkedin.com/in/vladlenkoltun/ --- 0:00 Sneak peek and intro 1:20 "Intelligent Systems" vs "AI" 3:02 Legged locomotion 9:26 The power of simulation 14:32 Privileged learning 18:19 Drone acrobatics 20:19 Using abstraction to transfer simulations to reality 25:35 Sample Factory for reinforcement learning 34:30 What inspired CARLA and what keeps it going 41:43 The challenges of and for robotics Links Discussed Learning quadrupedal locomotion over challenging terrain (Lee et al., 2020): https://robotics.sciencemag.org/content/5/47/eabc5986.abstract Deep Drone Acrobatics (Kaufmann et al., 2020): https://arxiv.org/abs/2006.05768 Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning (Petrenko et al., 2020): https://arxiv.org/abs/2006.11751 CARLA: https://carla.org/ --- Check out the transcription and discover more awesome ML projects: http://wandb.me/vladlen-koltun-podcast Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud --- Join our community of ML practitioners where we host AMAs, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Dominik shares the story and principles behind Vega and Vega-Lite, and explains how visualization and machine learning help each other. --- Dominik is a co-author of Vega-Lite, a high-level visualization grammar for building interactive plots. He's also a professor at the Human-Computer Interaction Institute at Carnegie Mellon University and an ML researcher at Apple. Connect with Dominik: Twitter: https://twitter.com/domoritz GitHub: https://github.com/domoritz Personal website: https://www.domoritz.de/ --- 0:00 Sneak peek, intro 1:15 What is Vega-Lite? 5:39 The grammar of graphics 9:00 Using visualizations creatively 11:36 Vega vs Vega-Lite 16:03 ggplot2 and machine learning 18:39 Voyager and the challenges of scale 24:54 Model explainability and visualizations 31:24 Underrated topics: constraints and visualization theory 34:38 The challenge of metrics in deployment 36:54 In between aggregate statistics and individual examples Links Discussed: Vega-Lite: https://vega.github.io/vega-lite/ Data analysis and statistics: an expository overview (Tukey and Wilk, 1966): https://dl.acm.org/doi/10.1145/1464291.1464366 Slope chart / slope graph: https://vega.github.io/vega-lite/examples/line_slope.html Voyager: https://github.com/vega/voyager Draco: https://github.com/uwdata/draco Check out the transcription and discover more awesome ML projects: http://wandb.me/domink-moritz --- Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud --- Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
How Cade got access to the stories behind some of the biggest advancements in AI, and the dynamic playing out between leaders at companies like Google, Microsoft, and Facebook. Cade Metz is a New York Times reporter covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas. Previously, he was a senior staff writer with Wired magazine and the U.S. editor of The Register, one of Britain’s leading science and technology news sites. His first book, "Genius Makers", tells the stories of the pioneers behind AI. Get the book: http://bit.ly/GeniusMakers Follow Cade on Twitter: https://twitter.com/CadeMetz/ And on Linkedin: https://www.linkedin.com/in/cademetz/ Topics discussed: 0:00 sneak peek, intro 3:25 audience and characters 7:18 *spoiler alert* AGI 11:01 book ends, but story goes on 17:31 overinflated claims in AI 23:12 DeepMind, OpenAI, building AGI 29:02 neuroscience and psychology, outsiders 34:35 Early adopters of ML 38:34 WojNet, where is credit due? 42:45 press covering AI 46:38 Aligning technology and need Read the transcript and discover awesome ML projects: http://wandb.me/cade-metz Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Learn why traditional home security systems tend to fail and how Dave’s love of tinkering and deep learning are helping him and the team at Deep Sentinel avoid those same pitfalls. He also discusses the importance of combatting racial bias by designing race-agnostic systems and what their approach is to solving that problem. Dave Selinger is the co-founder and CEO of Deep Sentinel, an intelligent crime prediction and prevention system that stops crime before it happens using deep learning vision techniques. Prior to founding Deep Sentinel, Dave co-founded RichRelevance, an AI recommendation company. https://www.deepsentinel.com/ https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/ https://twitter.com/daveselinger Topics covered: 0:00 Sneak peek, smart vs dumb cameras, intro 0:59 What is Deep Sentinel, how does it work? 6:00 Hardware, edge devices 10:40 OpenCV Fork, tinkering 16:18 ML Meetup, Climbing the AI research ladder 20:36 Challenge of safety-critical applications 27:03 New models, re-training, exhibitionists and voyeurs 31:17 How do you prove your cameras are better? 34:24 Angel investing in AI companies 38:00 Social responsibility with data 43:33 Combatting bias with data systems 52:22 Biggest bottlenecks in production Get our podcast on these platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Read the transcript and discover more awesome machine learning material here: http://wandb.me/Dave-selinger-podcast Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Since reinforcement learning requires hefty compute resources, it can be tough to keep up without a serious budget of your own. Find out how the team at Facebook AI Research (FAIR) is looking to increase access and level the playing field with the help of NetHack, an archaic rogue-like video game from the late 80s. Links discussed: The NetHack Learning Environment: https://ai.facebook.com/blog/nethack-learning-environment-to-advance-deep-reinforcement-learning/ Reinforcement learning, intrinsic motivation: https://arxiv.org/abs/2002.12292 Knowledge transfer: https://arxiv.org/abs/1910.08210 Tim Rocktäschel is a Research Scientist at Facebook AI Research (FAIR) London and a Lecturer in the Department of Computer Science at University College London (UCL). At UCL, he is a member of the UCL Centre for Artificial Intelligence and the UCL Natural Language Processing group. Prior to that, he was a Postdoctoral Researcher in the Whiteson Research Lab, a Stipendiary Lecturer in Computer Science at Hertford College, and a Junior Research Fellow in Computer Science at Jesus College, at the University of Oxford. https://twitter.com/_rockt Heinrich Kuttler is an AI and machine learning researcher at Facebook AI Research (FAIR) and before that was a research engineer and team lead at DeepMind. https://twitter.com/HeinrichKuttler https://www.linkedin.com/in/heinrich-kuttler/ Topics covered: 0:00 a lack of reproducibility in RL 1:05 What is NetHack and how did the idea come to be? 5:46 RL in Go vs NetHack 11:04 performance of vanilla agents, what do you optimize for 18:36 transferring domain knowledge, source diving 22:27 human vs machines intrinsic learning 28:19 ICLR paper - exploration and RL strategies 35:48 the future of reinforcement learning 43:18 going from supervised to reinforcement learning 45:07 reproducibility in RL 50:05 most underrated aspect of ML, biggest challenges? Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
From teaching at Stanford to co-founding Coursera, insitro, and Engageli, Daphne Koller reflects on the importance of education, giving back, and cross-functional research. Daphne Koller is the founder and CEO of insitro, a company using machine learning to rethink drug discovery and development. She is a MacArthur Fellowship recipient, member of the National Academy of Engineering, member of the American Academy of Arts and Sciences, and has been a Professor in the Department of Computer Science at Stanford University. In 2012, Daphne co-founded Coursera, one of the world's largest online education platforms. She is also a co-founder of Engageli, a digital platform designed to optimize student success. https://www.insitro.com/ https://www.insitro.com/jobs https://www.engageli.com/ https://www.coursera.org/ Follow Daphne on Twitter: https://twitter.com/DaphneKoller https://www.linkedin.com/in/daphne-koller-4053a820/ Topics covered: 0:00 Giving back and intro 2:10 insitro's mission statement and Eroom's Law 3:21 The drug discovery process and how ML helps 10:05 Protein folding 15:48 From 2004 to now, what's changed? 22:09 On the availability of biology and vision datasets 26:17 Cross-functional collaboration at insitro 28:18 On teaching and founding Coursera 31:56 The origins of Engageli 36:38 Probabilistic graphical models 39:33 Most underrated topic in ML 43:43 Biggest day-to-day challenges Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
Piero shares the story of how Ludwig was created, as well as the ins and outs of how Ludwig works and the future of machine learning with no code. Piero is a Staff Research Scientist in the Hazy Research group at Stanford University. He is a former founding member of Uber AI, where he created Ludwig, worked on applied projects (COTA, Graph Learning for Uber Eats, Uber’s Dialogue System), and published research on NLP, Dialogue, Visualization, Graph Learning, Reinforcement Learning, and Computer Vision. Topics covered: 0:00 Sneak peek and intro 1:24 What is Ludwig, at a high level? 4:42 What is Ludwig doing under the hood? 7:11 No-code machine learning and data types 14:15 How Ludwig started 17:33 Model performance and underlying architecture 21:52 On Python in ML 24:44 Defaults and W&B integration 28:26 Perspective on NLP after 10 years in the field 31:49 Most underrated aspect of ML 33:30 Hardest part of deploying ML models in the real world Learn more about Ludwig: https://ludwig-ai.github.io/ludwig-docs/ Piero's Twitter: https://twitter.com/w4nderlus7 Follow Piero on Linkedin: https://www.linkedin.com/in/pieromolino/?locale=en_US Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
How Rosanne is working to democratize AI research and improve diversity and fairness in the field through starting a non-profit after being a founding member of Uber AI Labs, doing lots of amazing research, and publishing papers at top conferences. Rosanne is a machine learning researcher, and co-founder of ML Collective, a nonprofit organization for open collaboration and mentorship. Before that, she was a founding member of Uber AI. She has published research at NeurIPS, ICLR, ICML, Science, and other top venues. While at school she used neural networks to help discover novel materials and to optimize fuel efficiency in hybrid vehicles. ML Collective: http://mlcollective.org/ Controlling Text Generation with Plug and Play Language Models: https://eng.uber.com/pplm/ LCA: Loss Change Allocation for Neural Network Training: https://eng.uber.com/research/lca-loss-change-allocation-for-neural-network-training/ Topics covered 0:00 Sneak peek, Intro 1:53 The origin of ML Collective 5:31 Why a non-profit and who is MLC for? 14:30 LCA, Loss Change Allocation 18:20 Running an org, research vs admin work 20:10 Advice for people trying to get published 24:15 on reading papers and Intrinsic Dimension paper 36:25 NeurIPS - Open Collaboration 40:20 What is your reward function? 44:44 Underrated aspect of ML 47:22 How to get involved with MLC Get our podcast on these other platforms: Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts YouTube: http://wandb.me/youtube Tune in to our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices: https://wandb.ai/gallery
In this episode of Gradient Dissent, Primer CEO Sean Gourley and Lukas Biewald sit down to talk about NLP, working with vast amounts of information, and how crucial it is to national defense. They also chat about their experience of being second-time founders coming from a data science background and how it affects the way they run their companies. We hope you enjoy this episode! Sean Gourley is the founder and CEO of Primer, a natural language processing startup in San Francisco. Previously, he was CTO of Quid, an augmented intelligence company that he co-founded back in 2009. And prior to that, he worked on self-repairing nano circuits at NASA Ames. Sean has a PhD in physics from Oxford, where his research as a Rhodes Scholar focused on graph theory, complex systems, and the mathematical patterns underlying modern war. Primer: https://primer.ai/ Follow Sean on Twitter: https://twitter.com/sgourley Topics Covered: 0:00 Sneak peek, intro 1:42 Primer's mission and purpose 4:29 The Diamond Age – How do we train machines to observe the world and help us understand it 7:44 a self-writing Wikipedia 9:30 second-time founder 11:26 being a founder as a data scientist 15:44 commercializing algorithms 17:54 Is GPT-3 worth the hype? The mind-blowing scale of transformers 23:00 AI Safety, military/defense 29:20 disinformation, does ML play a role? 34:55 Establishing ground truth and informational provenance 39:10 COVID misinformation, Masks, division 44:07 most underrated aspect of ML 45:09 biggest bottlenecks in ML? Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast Get our podcast on these other platforms: YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices. https://wandb.ai/gallery
Peter Wang talks about his journey co-founding Anaconda and serving as its CEO, his perspective on the Python programming language, and its use for scientific computing. Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, large data simulation and visualization, financial risk modeling, and medical imaging. Peter’s interests in the fundamentals of vector computing and interactive visualization led him to co-found Anaconda (formerly Continuum Analytics). Peter leads the open source and community innovation group. As a creator of the PyData community and conferences, he devotes time and energy to growing the Python data science community and advocating and teaching Python at conferences around the world. Peter holds a BA in Physics from Cornell University. Follow Peter on Twitter: https://twitter.com/pwang https://www.anaconda.com/ Intake: https://www.anaconda.com/blog/intake-... https://pydata.org/ Scientific Data Management in the Coming Decade paper: https://arxiv.org/pdf/cs/0502008.pdf Topics covered: 0:00 (intro) Technology is not value neutral; Don't punt on ethics 1:30 What is Conda? 2:57 Peter's Story and Anaconda's beginning 6:45 Do you ever regret choosing Python? 9:39 On other programming languages 17:13 Scientific Data Management in the Coming Decade 21:48 Who are your customers? 26:24 The ML hierarchy of needs 30:02 The cybernetic era and Conway's Law 34:31 R vs python 42:19 Most underrated: Ethics - Don't Punt 46:50 biggest bottlenecks: open-source, python Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast Get our podcast on these other platforms: YouTube: http://wandb.me/youtube Soundcloud: http://wandb.me/soundcloud Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices. https://wandb.ai/gallery
Chris shares his journey, from playing in R.E.M. and becoming interested in physics to leading WIRED Magazine for 11 years. His fascination with robots led to starting a company that manufactures drones, and creating a community democratizing self-driving cars. Chris Anderson is the CEO of 3D Robotics, founder of the Linux Foundation Dronecode Project and founder of the DIY Drones and DIY Robocars communities. From 2001 through 2012 he was the Editor in Chief of Wired Magazine. He's also the author of the New York Times bestsellers `The Long Tail` and `Free` and `Makers: The New Industrial Revolution`. In 2007 he was named to the "Time 100," Time's list of the most influential men and women in the world. Links discussed in this episode: DIY Robocars: diyrobocars.com Getting Started with Robocars: https://diyrobocars.com/2020/10/31/getting-started-with-robocars/ DIY Robotics Meet Up: https://www.meetup.com/DIYRobocars Other Works: 3DRobotics: https://www.3dr.com/ The Long Tail by Chris Anderson: https://www.amazon.com/Long-Tail-Future-Business-Selling/dp/1401309666/ref=sr_1_1?dchild=1&keywords=The+Long+Tail&qid=1610580178&s=books&sr=1-1 Interesting links Chris shared: OpenMV: https://openmv.io/ Intel Tracking Camera: https://www.intelrealsense.com/tracking-camera-t265/ Zumi Self-Driving Car Kit: https://www.robolink.com/zumi/ Possible Minds: Twenty-Five Ways of Looking at AI: https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997 Topics discussed: 0:00 sneak peek and intro 1:03 Battle of the REM's 3:35 A brief stint with Physics 5:09 Becoming a journalist and the woes of being a modern physicist 9:25 WIRED in the aughts 12:13 perspectives on "The Long Tail" 20:47 getting into drones 25:08 "Take a smartphone, add wings" 28:07 How did you get to autonomous racing cars? 33:30 COVID and virtual environments 38:40 Chris's hope for Robocars 40:54 Robocar hardware, software, sensors 53:49 path to Singularity/ regulations on drones 58:50 "the golden age of simulation" 1:00:22 biggest challenge in deploying ML models Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast Get our podcast on these other platforms: YouTube: http://wandb.me/youtube Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices. https://wandb.ai/gallery
Personal Note: The Season 1 finale had to be my interview with Emil. Huge thanks everyone for being a part of this journey! Video Version: https://youtu.be/ENbKecYgITA Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Emil Wallner, Artist in Residence at Google. They talk about Emil's journey from his time in a village in Africa to travelling and transitioning into AI. They discuss Emil's tweet storms, his self-taught journey, and beyond. Link: Meta thread to all of Emil's tweetstorms: https://twitter.com/bhutanisanyam1/status/1278036597523406849?s=20 Follow: Emil Wallner: https://twitter.com/EmilWallner https://github.com/emilwallner Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blogposts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Personal Note: This was one of my favourite Kaggle-related interviews. Personally, I found Andrada's journey to be very relatable. Video Version: https://youtu.be/nshTx_EfRKU Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Andrada Olteanu about her journey to transitioning into Data Science and learning on Kaggle. They discuss the struggles of learning something new and how to approach Kaggle. Follow: Andrada Olteanu: https://twitter.com/andradaolteanuu https://www.linkedin.com/in/andrada-olteanu-3806a2132/ https://www.kaggle.com/andradaolteanu Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blogposts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/xDhVLLc4pUk Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Laura Leal-Taixé. They talk about Laura's journey into academia and her research at the Dynamic Vision & Learning Group. Follow: Laura Leal-Taixé: https://twitter.com/lealtaixe https://www.youtube.com/channel/UCQVCsX1CcZQr0oUMZg6szIQ https://dvl.in.tum.de/team/lealtaixe/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blogposts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/xbcGj_mtTB0 Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews the creator of PyTorch Lightning: William Falcon. They talk about William's journey from being in the military to the financial world, learning how to code, and eventually transitioning into Data Science. They discuss the PyTorch Lightning story, William's research, and Grid.ai. Links: Grid.ai: https://www.grid.ai PyTorch Lightning: https://www.pytorchlightning.ai Blog: https://www.williamfalcon.com/accessible-ai-blog Follow: William Falcon: https://twitter.com/_willfalcon https://www.williamfalcon.com Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blogposts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/ODSgwUAzJj4 Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Katy Warr from Roke Manor Research, author of the book Strengthening Deep Neural Networks. They talk about Katy's journey into AI, her work at Roke, and the story of the book. They also talk about fooling AI, DNNs, adversarial attacks, and how to prevent them. Book: https://www.oreilly.com/library/view/strengthening-deep-neural/9781492044949/ Follow: Katy Warr: https://www.linkedin.com/in/katywarr Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blogposts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video version: https://youtu.be/g_lBMheQcpw Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Kaggle Grandmaster and Data Scientist: Laura Fink. They talk about Laura's journey from Physics into Data Science, her journey creating the amazing storytelling notebooks on Kaggle, and general tips & suggestions for Kaggle noobs. Follow: Laura Fink: https://www.linkedin.com/in/laura-f-ab2010170 https://www.kaggle.com/allunia Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blogposts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/-Iem6tUnm2A Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews the co-founder & CEO of MonteCarlo Data: Barr Moses. MonteCarlo Data is working to solve the problems of data reliability and data downtime. They talk a lot about data downtime, observability, and the issues that arise from having "bad" data. They also talk about Barr's journey into Data. Follow: https://www.montecarlodata.com Barr Moses: https://twitter.com/bm_datadowntime Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/PaaKh7Tpzxg Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Dr. David Luebke, VP of Graphics Research at NVIDIA. They talk about the fascinating world of graphics, the research that goes on behind the scenes, and the promise of ray tracing. They also discuss NVIDIA's recent improvements in GANs: the "Training GANs with Limited Data" paper. Links: Interview with Bryan Catanzaro: https://youtu.be/guJT5GOiNjA Paper Link: https://arxiv.org/abs/2006.06676 Follow: David Luebke: https://twitter.com/davedotluebke Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/Ki3SiI0nzF8 Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Senior Computer Vision Researcher at CTU: Torsten Sattler. Torsten's research interests span 3D computer vision, localization, AR, VR, and self-driving cars. Like all research conversations, this was a really insightful one. They also dive into his recent research as a proxy for understanding how Torsten approaches research problems. Links: http://openaccess.thecvf.com/content_cvpr_2018/papers/Schonberger_Semantic_Visual_Localization_CVPR_2018_paper.pdf https://openaccess.thecvf.com/content_CVPR_2019/papers/Dusmanu_D2-Net_A_Trainable_CNN_for_Joint_Description_and_Detection_of_CVPR_2019_paper.pdf Follow: Torsten Sattler: https://twitter.com/sattlertorsten Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/35Gd8bBDwdo Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews THE PROFESSOR, Zachary Mueller. This is Part 2 of the conversation; they discuss Zach's journey from learning fastai to actively contributing to it and moderating the forums. They also discuss Zach's blog posts, his course, and learnings from a recent internship. Follow: Zachary Mueller: https://twitter.com/thezachmueller https://muellerzr.github.io/fastblog/ https://www.youtube.com/channel/UCmKoQOD8uBqsRS8XDdSgrlQ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Adrien shares his journey from making games that advance science (Eterna, Foldit) to creating Streamlit, an open-source app framework enabling ML/Data practitioners to easily build powerful and interactive apps in a few hours. Adrien is co-founder and CEO of Streamlit, an open-source app framework that helps create beautiful data apps in hours in pure Python. Dr. Treuille has been a Zoox VP, Google X project lead, and Computer Science faculty at Carnegie Mellon. He has won numerous scientific awards, including the MIT TR35. Adrien has been featured in the documentaries What Will the Future Be Like by PBS/NOVA, and Lo and Behold by Werner Herzog. https://twitter.com/myelbows https://www.linkedin.com/in/adrien-treuille-52215718/ https://www.streamlit.io/ https://eternagame.org/ https://fold.it/ Topics covered: 0:00 sneak peek/Streamlit 0:47 intro 1:21 from aspiring guitar player to machine learning 4:16 Foldit - games that train humans 10:08 Eterna - another game and its relation to ML 16:15 Research areas as a professor at Carnegie Mellon 18:07 the origin of Streamlit 23:53 evolution of Streamlit: data science-ing a pivot 30:20 on programming languages 32:20 what’s next for Streamlit 37:34 On meditation and work/life 41:40 Underrated aspect of Machine Learning 43:07 Biggest challenge in deploying ML in the real world Visit our podcasts homepage for transcripts and more episodes! www.wandb.com/podcast Get our podcast on YouTube, Apple, Spotify, and Google! YouTube: http://wandb.me/youtube Apple Podcasts: http://wandb.me/apple-podcasts Spotify: http://wandb.me/spotify Google: http://wandb.me/google-podcasts Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their work: http://wandb.me/salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: http://wandb.me/slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.
Video Version: https://youtu.be/2tLNfR2whio Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Dr. Arsha Nagrani. Arsha has been working as a researcher at the intersection of audio, video, and computer vision. They try to uncover what the field is really about. She has also co-organised challenges for problems in this domain. They talk about all of this work along with her approach to research. Links: Interview with Dima: https://youtu.be/GXqq_hj-UuY Follow: Arsha Nagrani: https://twitter.com/nagraniarsha https://www.linkedin.com/in/arsha-nagrani-601a726b Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/2MT7bYZsiV4 Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Ekaterina Kochmar: Lecturer and Researcher at Cambridge University, Co-Founder & CSO at Korbit.ai. As you might know, Sanyam is a fan of learning to learn, and they bring that topic back in this episode. Ekaterina has been working on automated language teaching and assessment: using machine learning and related tools to augment teaching, building intelligent tutoring systems for teaching different concepts, specifically for the English language and even beyond. They take a deeper dive into this and discuss: - What does building a system like this take? - What research goes into it? - What are the interesting trends here? They also dive into Ekaterina's approach to research, and what a research pipeline looks like for her. Links: https://www.manning.com/books/getting-started-with-natural-language-processing Follow: Ekaterina Kochmar: https://www.linkedin.com/in/ekaterina-kochmar-0a655b14/ https://www.cl.cam.ac.uk/~ek358/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/H0NfDIDcu84 Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews a founder from South Africa of Elon Musk-calibre genius: Founder & CEO at Numerai, Richard Craib. They talk about Richard's journey into the world of Finance, Trading, and Data Science. They also discuss the Numerai story, the hedge fund, the competition, and the recently announced Signals, along with the "Master Plan". Links: Signals: https://signals.numer.ai https://medium.com/numerai/numerais-master-plan-1a00f133dba9 Follow: Richard Craib: https://twitter.com/richardcraib https://www.linkedin.com/in/richardcraib/ http://twitter.com/numerai Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/pgzEqhuGBd0 Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani talks Python with the amazing podcaster, content creator, and educator Michael Kennedy, host of the Talk Python podcast and the Python Bytes podcast. They talk about Michael's journey in programming and with Python. Michael has been hosting the podcast for five years and has been in the programming world for even longer; they dive into what he's learned along the way, how his perspective has evolved through creating content, creating his courses, and eventually building a business around it as well. Links: https://talkpython.fm/home Automating the saw: https://www.youtube.com/watch?v=JEImn7s7x1o The ML-led discovery of 50 exoplanets: https://www.techrepublic.com/article/machine-learning-algorithm-confirms-50-new-exoplanets-in-historic-first/ Follow: Michael Kennedy: https://twitter.com/mkennedy https://www.linkedin.com/in/mkennedy/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
We're thrilled to have Peter Norvig join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research. Peter Norvig is a Director of Research at Google Inc; previously he directed Google's core search algorithms group. He is co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-teacher of an Artificial Intelligence class that signed up 160,000 students. Prior to his work at Google, Norvig was NASA's chief computer scientist. Peter's website: https://norvig.com/ Topics covered: 0:00 singularity is in the eye of the beholder 0:32 introduction 1:09 Project Euler 2:42 Advent of Code/pytudes 4:55 new sections in the new version of his book 10:32 the "Unreasonable Effectiveness of Data" paper 15 years later 14:44 what advice would you give to a young researcher? 16:03 computing power in the evolution of deep learning 19:19 what's been surprising in the development of AI? 24:21 from AlphaGo to human-like intelligence 28:46 What in AI has been surprisingly hard or easy? 32:11 synthetic data and language 35:16 singularity is in the eye of the beholder 38:43 the future of Python in ML and why he used it in his book 43:00 underrated topic in ML and bottlenecks in production Visit our podcasts homepage for transcripts and more episodes! https://www.wandb.com/podcast Get our podcast on Apple, Spotify, and Google! Apple Podcasts: https://bit.ly/2WdrUvI Spotify: https://bit.ly/2SqtadF Google: https://tiny.cc/GD_Google We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with these ML practitioners and learning about the interesting things they’re working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast called Gradient Dissent. We hope you have as much fun listening to it as we had making it! Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research: https://tiny.cc/wb-salon Join our community of ML practitioners where we host AMA's, share interesting projects and meet other people working in Deep Learning: https://bit.ly/wb-slack Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices. https://wandb.ai/gallery
Video Version: https://youtu.be/SiUnKGD90uI Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews Charlie Boyle, VP & GM of DGX Systems at NVIDIA. NVIDIA recently unveiled their new DGX A100 systems and new A100 GPUs, which are really pushing the envelope of supercomputing. In this episode, they discuss the engineering that goes on behind the scenes, how the systems are designed and their origins, how these have evolved over the years, and NVIDIA's design philosophy as a proxy for understanding how these might bleed into consumer GPUs over time. Links: https://nvda.ws/32HaMCm https://nvidianews.nvidia.com/news/nvidia-dgx-station-a100-offers-researchers-ai-data-center-in-a-box https://www.youtube.com/watch?v=TKtN04z7Q5Q Follow: Charlie Boyle: https://www.linkedin.com/in/charlie-boyle-0201a8/ Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd
Video Version: https://youtu.be/RvwynqDUoQE Subscribe here to the newsletter: https://tinyletter.com/sanyambhutani In this episode, Sanyam Bhutani interviews PhD Student Tim Dettmers for the second time on the show! They talk a lot about research in general and the personal aspect of research, which isn't covered as much: how you should pick a grad school, and whether creativity is important in academia. They talk a lot about the personal side of things while you're going through the process of exploration in research, or in life in general. They also talk about the RTX 3000 series and discuss a few FAQs about the new cards. Links: https://timdettmers.com/2020/03/10/how-to-pick-your-grad-school/ https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/ Follow: Tim Dettmers: https://twitter.com/tim_dettmers https://timdettmers.com Sanyam Bhutani: https://twitter.com/bhutanisanyam1 Blog: sanyambhutani.com About: https://sanyambhutani.com/tag/chaitimedatascience/ A show for Interviews with Practitioners, Kagglers & Researchers and all things Data Science hosted by Sanyam Bhutani. You can expect weekly episodes, each available as Video, Podcast, and blog posts. Intro track: Flow by LiQWYD https://soundcloud.com/liqwyd