Talking Machines is your window into the world of machine learning. Your hosts, Katherine Gorman and Neil Lawrence, bring you clear conversations with experts in the field, insightful discussions of industry news, and useful answers to your questions. Machine learning is changing the questions we c…
In this episode of the podcast we shake things up! Neil is on the guest side of the table with his partner, Rabbi Laura Janner-Klausner, to discuss their upcoming project Gods and Robots. Katherine is joined on the host side by friend of the show Professor Michael Littman.
On this episode we feature an interview with Madhulika Shrikumar of the Partnership on AI about their recent work Managing Risk and Responsible Publication.
Neil and Katherine chat about ICML and the timely winner of this year's test of time award: Bayesian Learning via Stochastic Gradient Langevin Dynamics.
Devin Guillory of UC Berkeley is our guest on this episode. We talk about his love of robotics, working at the center of a new hype (learning with less labels), and his paper Combating Anti-Blackness in the AI Community. He recently gave a talk on the subject at the University of Toronto.
We're not bringing you an episode this week. We're taking some time to think about the systems we take part in, how those systems perpetuate anti-Black racism, and the effects of that on the work in this field. We'd like to bring you meaningful conversations around those systems and how we can change them and ourselves. We encourage everyone to explore the amazing work of Black in AI, Data Science Africa and Shut Down STEM. Take care of yourselves, take care of each other, and stay tuned.
In this episode of Talking Machines we talk with Sella Nevo of Google Research about the Google Flood Forecasting Project, what they've been doing, and what it means to really move the needle on AI for Good.
In episode eight of season six we talk with Alexander Rush and Shakir Mohamed about their work on ICLR this year, which was first set to take place in Ethiopia and then became fully virtual!
In episode seven of season six we talk with Michael Littman about his work in reinforcement learning, on scientific communication, and in the classroom.
In episode six of season six we chat with Professor Terry Sejnowski about his work, the evolution of the field, and the development of the NeurIPS conference. We taped this episode live and took questions from the audience. Want to join our "studio audience"? Check out @tlkngmchns on Twitter.
Episode five of season six is our first live episode! We talk with Elaine Nsoesie of Boston University about modeling disease and Covid-19 in the African context, plus we take listener questions live! Want to join our "studio audience"? Check out our Twitter feed for how to sign up!
Episode four of season six is our 100th episode! (Well, it's Katherine's.) We take a break from our regular format for Neil and Katherine to chat about the current situation around Covid-19, understanding exponentials, and what impact this might have on how problems get prioritized.
In this episode we talk about the Great AI Fallacy, take a listener question about Federated Learning, and catch up with Ross Goodwin and Oscar Sharp.
In episode two of season six we hear Ziad Obermeyer's talk from TEDx Boston entitled If a Machine Could Predict Your Death, Should it?
In episode one of season six we make some predictions about what will happen in the field in the next decade and talk with Margot Gerritsen about her work and WiDS. You can listen to the WiDS podcast here!
In our last episode for season five Katherine and Neil debate his debating project, Debater, and talk about what's coming up at NeurIPS. Hope to see you there!
In episode twenty-two of season five we hear a TEDx Boston talk from Kenneth Anderson on how the fields of AI and the law can work together to form regulation.
In episode 21 of season five we sit down with Marzyeh Ghassemi to talk about her work and how she's refined her focus.
In episode twenty of season five we talk with Neil about a discussion he had about the impact of ML tools on children, talk about the new Diversity Dashboard from the Turing Institute (in response to a question about cool things for Ada Lovelace Day), plus we sit down with Corinna Cortes of Google AI.
In episode nineteen of season five we talk about DALI, get some big news about the next thing for Neil, and talk with Benjamin Akera.
In episode eighteen of season five we hear Michael Littman's talk A Cooperative Path to Artificial Intelligence
In episode seventeen of season five we talk about Why Red Doesn't Sound Like a Bell, take a listener question about our Turing brackets (and invent the Very Good Sort Awards), and listen to a chat with Tewodros Abebe.
In this episode of Talking Machines we take a listen to Professor Engelhardt's TEDx Boston talk, Not What But Why: Machine Learning for Understanding Genomics.
In episode 15 of season five of Talking Machines we chat about the recently announced workshops at NeurIPS 2019, find ourselves in the middle of an I Love Lucy episode about technical term usage, and talk with Randy Goebel of the Alberta Machine Intelligence Institute.
In episode 14 of season five we talk about On the marginal likelihood and cross-validation, Katherine is STILL excited about PosterSession.ai, we invent Deep Quaggles, and we listen to a conversation with Professor Elaine Nsoesie of BU.
In episode thirteen of season five we bring you the rest of our conversation with Michael Melese from Addis Ababa University and Charles Saidu of Baze University Abuja.
In episode twelve of season five we bring you a rundown of Data Science Africa's latest workshop, answer a listener question about what got us excited at ICML, and hear the first part of our conversation with Michael Melese from Addis Ababa University and Charles Saidu of Baze University Abuja.
In episode eleven of season five, we dig into just what a data trust actually is, take a look at citation trends and other places (PMLR) where you can dig up data to understand the field, and talk with Raia Hadsell of DeepMind.
In episode ten of season five we talk about reproducibility, take a listener question on re-understanding the history of the field given where we are now (and how other fields are reviewing their own histories), and listen to a conversation with Graham Taylor of the Vector Institute.
In episode nine of season five we talk about some interesting work from AISTATS, dive into unbiased implicit variational inference, and chat with Jon McAuliffe, CIO of Voleon.
In this episode, as we prep for ICLR, we take a break from our usual format to bring you a talk from Hugo Larochelle at TEDx Boston on Deep Learning.
In episode seven of season five we chat about MARS and re:MARS, OpenAI's status changes, and we talk with Jasper Snoek of Google Brain.
In episode six of season five we talk about Richard Sutton's The Bitter Lesson, chat about IEEE's new Ethical Guidelines, and talk with Andrew Beam, Senior Fellow at Flagship Pioneering, Head of Machine Learning for Flagship VL57, and Assistant Professor in the Department of Epidemiology at the Harvard T.H. Chan School of Public Health. Here are some of the papers we got to chat about! Also, VL57 is hiring!

Adversarial attacks on medical machine learning (Science): Finlayson, S.G., Bowers, J.D., Ito, J., Zittrain, J.L., Beam, A.L. and Kohane, I.S., 2019. Adversarial attacks on medical machine learning. Science, 363(6433), pp. 1287-1289. Link: https://cyber.harvard.edu/story/2019-03/adversarial-attacks-medical-ai-health-policy-challenge

JAMA papers: Beam, A.L. and Kohane, I.S., 2016. Translating artificial intelligence into clinical care. JAMA, 316(22), pp. 2368-2369. Link: https://www.dropbox.com/s/4o1va07tqwvrxsn/Beam_TranslatingAI_2016.pdf?dl=0

Beam, A.L. and Kohane, I.S., 2018. Big data and machine learning in health care. JAMA, 319(13), pp. 1317-1318. Link: https://www.dropbox.com/s/q1cixzmsdugq3vy/Beam_BigData_ML.pdf?dl=0

Opportunities in machine learning for healthcare: Ghassemi, M., Naumann, T., Schulam, P., Beam, A.L. and Ranganath, R., 2018. Opportunities in machine learning for healthcare. arXiv preprint arXiv:1806.00388. Link: https://arxiv.org/abs/1806.00388
In episode five of season five we talk about the Stu Hunter conference, summer school options (DLRLSS!), and chat with Adrian Weller of the Alan Turing Institute.
In episode four of season five we talk about Jupyter Notebooks and Neil's dream of a world of craft software and devices, we take a listener question about the conversation surrounding OpenAI's GPT-2, its announcement, and the coverage, and we hear an interview with Brooks Paige of the Alan Turing Institute.
In season five episode three we chat about Five Papers for Mike Tipping, take a listener question on AIAI, and chat with Eoin O'Mahony of Uber. Here are Neil's five papers. What are yours?

Stochastic Variational Inference by Hoffman, Wang, Blei and Paisley (http://arxiv.org/abs/1206.7051). A way of doing approximate inference for probabilistic models with potentially billions of data ... need I say more?

Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget by Korattikara, Chen and Welling (http://arxiv.org/abs/1304.5299). Oh ... I do need to say more ... because these three are at it as well, but from the sampling perspective. Probabilistic models for big data ... an idea so important it needed to be in the list twice.

Practical Bayesian Optimization of Machine Learning Algorithms by Snoek, Larochelle and Adams (http://arxiv.org/abs/1206.2944). This paper represents the rise in probabilistic numerics; I could also have chosen papers by Osborne, Hennig or others. There are too many papers out there already. Definitely an exciting area, be it optimisation, integration, or differential equations. I chose this paper because it seems to have blown the field open to a wider audience, focussing as it did on deep learning as an application, so it lets me capture both an area of developing interest and an area that hits the national news.

Kernel Bayes' Rule by Fukumizu, Song and Gretton (http://arxiv.org/abs/1009.5736). One of the great things about ML is how we have different (and competing) philosophies operating under the same roof. But because we still talk to each other (and sometimes even listen to each other) these ideas can merge to create new and interesting things. Kernel Bayes' Rule makes the list.

ImageNet Classification with Deep Convolutional Neural Networks by Krizhevsky, Sutskever and Hinton (http://www.cs.toronto.edu/~hinton/absps/imagenet.pdf). An obvious choice, but you don't leave the Beatles off lists of great bands just because they are an obvious choice.
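The Snoek, Larochelle and Adams paper above is about Gaussian-process-based tuning of machine learning hyperparameters. As a rough illustration of that idea (a minimal sketch, not the paper's own implementation), here is GP-based Bayesian optimisation with an expected-improvement acquisition using scikit-learn; the toy objective, the bounds, and the function names are illustrative assumptions.

```python
# Minimal sketch of GP-based Bayesian optimisation with expected improvement.
# Illustrative only: the objective, bounds, and helper names are made up here.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(X_cand, gp, y_best):
    """Expected improvement (for minimisation) at candidate points X_cand."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero at observed points
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
    """Minimise a 1-d objective over [bounds[0], bounds[1]] with a GP surrogate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)  # refit the surrogate to all evaluations so far
        X_cand = np.linspace(bounds[0], bounds[1], 1000).reshape(-1, 1)
        x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()


if __name__ == "__main__":
    # Toy "validation loss" standing in for an expensive training run.
    loss = lambda x: float(np.sin(x[0]) + 0.1 * x[0])
    x_best, y_best = bayes_opt(loss, bounds=(0.0, 10.0))
    print("best hyperparameter:", x_best, "best loss:", y_best)
```

In practice the objective would be an expensive model-training run and the search space multi-dimensional, but the loop (fit the surrogate, maximise the acquisition, evaluate, repeat) has the same shape as in the paper.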
In episode two of season five we unpack the Bezos Paradox (TM Neil Lawrence), take a listener question about best papers, and chat with Dougal Maclaurin of Google Brain.
In episode one of season five we talk about Bit by Bit, take a listener question on machine learning gatherings on the African continent (Deep Learning Indaba! DSA!), and hear an interview with Daphne Koller recorded at ODSC West.
For the end of season four we take a break from our regular format and bring you a talk from Professor Finale Doshi-Velez of Harvard University on the possibility of explanation. Tune in next season!
In episode twenty-one of season four we talk about distributed intelligence systems (mainly those internal to humans), talk about what we're excited to see at the Conference on Neural Information Processing Systems, and, in advance of our trek to Canada, we chat with Garth Gibson, president and CEO of the Vector Institute.
In episode twenty of season four we talk about the importance of crediting your data, answer a listener question about internships vs. salaried positions, and talk with Matt Kusner of the Alan Turing Institute, the UK’s national institute for data science and AI.
In episode nineteen of season four we talk about causality in the real world, take a question about being surprised by the elephant in the room and talk with Kush Varshney of IBM.
In episode 18 of season four we talk about systems design (remember the 3 d's!), tools for transparency and fairness, and we talk with Adria Gascon of The Alan Turing Institute, the UK’s national institute for data science and AI.
In episode 17 of season four we talk about how to research in a time of hype (and other lessons from Tom Griffiths' book), Neil's love of variational methods, and we chat with Elissa Strome, director of the Pan-Canadian AI Strategy for CIFAR.
In this episode we talk about the article Troubling Trends in Machine Learning Scholarship, the difference between engineering and science (and the mountains you climb to span the distance), plus we talk with David Duvenaud of the University of Toronto.
In episode thirteen of season four we chat about simulations, reinforcement learning, and Philippa Foot. We take a listener question about the update to the ACM Code of Ethics (first time since 1992!) and we talk with Professor Mike Jordan.
Season four episode twelve finds us at ICML! We bring you a special episode with Jennifer Dy, co-program chair of the conference.
In season four episode eleven we talk about the possibility of the NIPS conference changing its name, what to do at ICML, and we talk with Bernhard Schölkopf.
In episode 10 of season 4 we chat about Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, take a listener question about how reviews of papers work at NIPS and we hear from Sven Strohband, CTO of Khosla Ventures.
In episode 9 of season 4 we talk about the Statement on Nature Machine Intelligence. We reached out to Nature for a statement on the statement and received the following:

“At Springer Nature we are very clear in our mission to advance discovery and help researchers share their work. Having an extensive, and growing, open access portfolio is one important way we do this but it is important to remember that while open access has been around for 20 years now it still only accounts for a small percentage of overall global research output with demand for subscription content remaining high. This is because the move to open access is complex, and for many, simply not a viable option.

Nature Machine Intelligence is a new subscription journal that aims to stimulate cross-disciplinary interactions, reach broad audiences and explore the impact that AI research has on other fields by publishing high-quality research, reviews and commentary on machine learning, robotics and AI. It involves substantial editorial development, offers high levels of author service and publishes informative, accessible content beyond primary research all of which requires considerable investment. At present, we believe that the fairest way of producing highly selective journals like this one and ensuring their long-term sustainability as a resource for the widest possible community, is to spread these costs among many readers — instead of having them borne by a few authors.

We also offer multiple open access options for AI authors. We already publish AI papers in Scientific Reports and Nature Communications, which are the largest open access journal in the world and the most cited open access journal respectively. We offer hybrid publishing options and are set to launch a new AI multidisciplinary, open access journal later this year. We help all researchers to freely share their discoveries by encouraging preprint posting and data- and code-sharing and continue to extend access to all Nature journals in various ways, including our free SharedIt content-sharing initiative, which provides authors and subscribers with shareable links to view-only versions of published papers.”

We also get a chance to talk with Maithra Raghu from the Google Brain team about her work.