POPULARITY
Welcome to Episode 12 of Season 7, everyone! Everything Under The Sun has been nominated for Best British Podcast in the kids category, has moved to Bali, Indonesia, and the paperback book of Everything Under The Sun is OUT NOW!! We're going to be having lots of fun answering kids' questions from all over the world. This week we have some super exciting questions coming up: How are robots made? Raia Hadsell from DeepMind is answering this one for us! She tells us all about the amazing ways in which robots come to life... Why is the immortal jellyfish immortal? This incredible sea creature can supposedly live forever! We find out how. And finally: can babies talk to other babies? We've all seen babies making crazy noises and speaking in baby talk, but can they really communicate with each other? Find out all about robots, immortal jellyfish and the mysterious language of babies in this week's fantastic episode! And do buy the brand new PAPERBACK edition of Everything Under The Sun - a year of curious questions - out now! Amazon: https://www.amazon.co.uk/Everything-Under-Sun-curious-question/dp/0241433460 Target Australia: https://www.target.com.au/p/everything-under-the-sun-molly-oldfield/65704592 And order it in any beautiful bookshop! Thank you! Hope you love it. Instagram: @mollyoldfieldwrites Pod Instagram: @everythingunderthesunpod Do check out our website www.mollyoldfield.com for more info about how to send in questions. Have a lovely listen and a great week!
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: DeepMind: The Podcast - Excerpts on AGI, published by WilliamKiely on April 7, 2022 on LessWrong.
DeepMind: The Podcast - Season 2 was released over the last ~1-2 months. The two episodes most relevant to AGI are: The road to AGI - DeepMind: The Podcast (S2, Ep5) and The promise of AI with Demis Hassabis - DeepMind: The Podcast (S2, Ep9). I found a few quotes noteworthy and thought I'd share them here for anyone who didn't want to listen to the full episodes:
The road to AGI (S2, Ep5) (Published February 15, 2022)
Shane Legg's AI Timeline
Shane Legg (4:03): If you go back 10-12 years ago the whole notion of AGI was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [I had that happen] multiple times. I have met quite a few of them since. There have even been cases where some of these people have applied for jobs at DeepMind years later. But yeah, it was a field where you know there were little bits of progress happening here and there, but powerful AGI and rapid progress seemed like it was very, very far away. [...] Every year [the number of people who roll their eyes at the notion of AGI] becomes less.
Hannah Fry (5:02): For over 20 years, Shane has been quietly making predictions of when he expects to see AGI.
Shane Legg (5:09): I always felt that somewhere around 2030-ish it was about a 50-50 chance. I still feel that seems reasonable. If you look at the amazing progress in the last 10 years and you imagine in the next 10 years we have something comparable, maybe there's some chance that we will have an AGI in a decade. And if not in a decade, well I don't know, say three decades or so.
Hannah Fry (5:33): And what do you think [AGI] will look like? [Shane answers at length.]
David Silver on it being okay to have AGIs with different goals (??)
Hannah Fry (16:45): Last year David co-authored a provocatively titled paper called Reward is Enough. He believes reinforcement learning alone could lead all the way to artificial general intelligence. [...]
(21:37) But not everyone at DeepMind is convinced that reinforcement learning on its own will be enough for AGI. Here's Raia Hadsell, Director of Robotics.
Raia Hadsell (21:44): The question I usually have is where do we get that reward from. It's hard to design rewards and it's hard to imagine a single reward that's so all-consuming that it would drive learning everything else.
Hannah Fry (21:59): I put this question about the difficulty of designing an all-powerful reward to David Silver.
David Silver (22:05): I actually think this is just slightly off the mark–this question–in the sense that maybe we can put almost any reward into the system and if the environment's complex enough amazing things will happen just in maximizing that reward. Maybe we don't have to solve this "What's the right thing for intelligence to really emerge at the end of it?" kind of question and instead embrace the fact that there are many forms of intelligence, each of which is optimizing for its own target. And it's okay if we have AIs in the future some of which are trying to control satellites and some of which are trying to sail boats and some of which are trying to win games of chess and they may all come up with their own abilities in order to allow that intelligence to achieve its end as effectively as possible. [...]
(26:14) But of course this is a hypothesis. I cannot offer any guarantee that reinforcement learning algorithms do exist which are powerful enough to just get all the way there. And yet the fact that if we can do it it would provide a path all the way to AGI should be enough for us to try really really hard.
The promise of AI with Demis Hassabis (S2, Ep9) (Published March 15, 2022)
Demis Hassabis' AI Timeline
Demis Hassabis (6:23): From what we've seen so far [the development of AGI]...
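As a concrete illustration of the "reward is enough" framing debated above, here is a minimal, hypothetical tabular Q-learning sketch in Python. It is not from the podcast or any DeepMind code; the environment, constants, and names are illustrative assumptions. The point it makes is the one David Silver gestures at: the agent is told nothing about the task except a scalar reward, and sensible behaviour emerges purely from maximizing it.

```python
import random

# A tiny, self-contained illustration of learning driven purely by a scalar reward.
# Environment: a 5-state corridor; the only reward is 1.0 for reaching the right end.

N_STATES = 5
ACTIONS = (-1, +1)                       # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate


def step(state, action):
    """Apply an action; return (next_state, reward, done). Reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done


def greedy(state):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])


for _ in range(500):                     # training episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward reward plus discounted best next value.
        target = r + (0.0 if done else gamma * max(Q[(s2, x)] for x in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy walks right from every state: behaviour that was never
# programmed explicitly, only implied by the reward signal.
print([greedy(s) for s in range(N_STATES - 1)])
```

Raia Hadsell's counterpoint in the excerpt is about scale: for a corridor, the reward is trivial to specify; for open-ended intelligence it is far from obvious what single reward would drive learning everything else.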
AI doesn't just exist in the lab; it's already solving a range of problems in the real world. In this episode, Hannah encounters a realistic recreation of her voice by WaveNet, the voice-synthesising system that powers the Google Assistant and helps people with speech difficulties and illnesses regain their voices. Hannah also discovers how 'deepfake' technology can be used to improve weather forecasting, and how DeepMind researchers are collaborating with Liverpool Football Club, aiming to take sports to the next level. For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.
Interviewees: DeepMind's Demis Hassabis, Raia Hadsell, Karl Tuyls, Zach Gleicher & Jackson Broshear; Niall Robinson of the UK Met Office.
Credits: Presenter: Hannah Fry; Series Producer: Dan Hardoon; Production support: Jill Achineku; Sound design: Emma Barnaby; Music composition: Eleni Shaw; Sound Engineer: Nigel Appleton; Editor: David Prest. Commissioned by DeepMind. Thank you to everyone who made this season possible!
Further reading:
A generative model for raw audio, DeepMind: https://deepmind.com/blog/article/wavenet-generative-model-raw-audio
WaveNet case study, DeepMind: https://deepmind.com/research/case-studies/wavenet
Using WaveNet technology to reunite speech-impaired users with their original voices, DeepMind: https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices
Project Euphonia, Google Research: https://sites.research.google/euphonia/about/
Nowcasting the next hour of rain, DeepMind: https://deepmind.com/blog/article/nowcasting
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
Advancing sports analytics through AI, DeepMind: https://deepmind.com/blog/article/advancing-sports-analytics-through-ai
Met Office: https://www.metoffice.gov.uk/
The village 'washed on to the map', BBC: https://www.bbc.co.uk/news/uk-england-cornwall-28523053
Michael Fish got the storm of 1987 wrong, Sky News: https://news.sky.com/story/michael-fish-got-the-storm-of-1987-wrong-but-modern-supercomputers-may-have-missed-it-too-11076659#:~:text=In%20a%20lunchtime%20broadcast%20on,%2C%22%20he%20confidently%20told%20viewers.
Do you need a body to have intelligence? And can one exist without the other? Hannah takes listeners behind the scenes of DeepMind's robotics lab in London, where she meets robots that are trying to independently learn new skills, and explores why physical intelligence is a necessary part of intelligence. Along the way, she finds out how researchers trained their robots at home during lockdown, uncovers why so many robotics demonstrations are faking it, and learns what it takes to train a robotic football team. For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.
Interviewees: DeepMind's Raia Hadsell, Viorica Patraucean, Jan Humplik, Akhil Raju & Doina Precup.
Credits: Presenter: Hannah Fry; Series Producer: Dan Hardoon; Production support: Jill Achineku; Sound design: Emma Barnaby; Music composition: Eleni Shaw; Sound Engineer: Nigel Appleton; Editor: David Prest. Commissioned by DeepMind. Thank you to everyone who made this season possible!
Further reading:
Stacking our way to more general robots, DeepMind: https://deepmind.com/blog/article/stacking-our-way-to-more-general-robots
Researchers Propose Physical AI As Key To Lifelike Robots, Forbes: https://www.forbes.com/sites/simonchandler/2020/11/11/researchers-propose-physical-ai-as-key-to-lifelike-robots/
The robots going where no human can, BBC: https://www.bbc.co.uk/news/av/technology-41584738
The Robot Assault On Fukushima, WIRED: https://www.wired.com/story/fukushima-robot-cleanup/
Leaps, Bounds, and Backflips, Boston Dynamics: http://blog.bostondynamics.com/atlas-leaps-bounds-and-backflips
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
AI researchers around the world are trying to create a general-purpose learning system that can learn to solve a broad range of problems without being taught how. Koray Kavukcuoglu, DeepMind's Director of Research, describes the journey to get there, and takes Hannah on a whistle-stop tour of DeepMind's HQ and its research. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI, using the hashtag #DMpodcast) or email us at podcast@deepmind.com.
Further reading:
OpenAI: An overview of neural networks and the progress that has been made in AI
Shane Legg, DeepMind co-founder: Measuring machine intelligence at the 2010 Singularity Summit
Shane Legg and Marcus Hutter: Paper on defining machine intelligence
Demis Hassabis: Talk on the history, frontiers and capabilities of AI
Robert Wiblin: Positively shaping the development of artificial intelligence
Asilomar AI Principles
Richard S. Sutton and Andrew G. Barto: Reinforcement Learning: An Introduction
Interviewees: Koray Kavukcuoglu, Director of Research; Trevor Back, Product Manager for DeepMind's science research; research scientists Raia Hadsell and Murray Shanahan; and DeepMind CEO and co-founder Demis Hassabis.
Credits: Presenter: Hannah Fry; Editor: David Prest; Senior Producer: Louisa Field; Producers: Amy Racs, Dan Hardoon; Binaural Sound: Lucinda Mason-Brown; Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet). Commissioned by DeepMind.
Raia and I discuss her work at DeepMind figuring out how to build robots that use deep reinforcement learning to do things like navigate cities and generalize intelligent behaviors across different tasks. We also talk about challenges specific to embodied AI (robots), how much of the work takes inspiration from neuroscience, and lots more.
Forget what sci-fi has told you about superintelligent robots that are uncannily human-like; the reality is more prosaic. Inside DeepMind's robotics laboratory, Hannah explores what researchers call 'embodied AI': robot arms that are learning tasks like picking up plastic bricks, which humans find comparatively easy. Discover the cutting-edge challenges of bringing AI and robotics together, and of learning from scratch how to perform tasks. She also explores some of the key questions about using AI safely in the real world. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI, using the hashtag #DMpodcast) or email us at podcast@deepmind.com.
Further reading:
Blogs on AI safety and further resources from Victoria Krakovna
The Future of Life Institute: The risks and benefits of AI
The Wall Street Journal: Protecting Against AI's Existential Threat
TED Talks: Max Tegmark - How to get empowered, not overpowered, by AI
Royal Society lecture series sponsored by DeepMind: You & AI
Nick Bostrom: Superintelligence: Paths, Dangers and Strategies (book)
OpenAI: Learning from Human Preferences
DeepMind blog: Learning from human preferences
DeepMind blog: Learning by playing - how robots can tidy up after themselves
DeepMind blog: AI safety
Interviewees: Software engineer Jackie Kay and research scientists Murray Shanahan, Victoria Krakovna, Raia Hadsell and Jan Leike.
Credits: Presenter: Hannah Fry; Editor: David Prest; Senior Producer: Louisa Field; Producers: Amy Racs, Dan Hardoon; Binaural Sound: Lucinda Mason-Brown; Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet). Commissioned by DeepMind.
Video games have become a favourite tool for AI researchers to test the abilities of their systems. In this episode, Hannah sits down to play StarCraft II, a challenging video game that requires players to control the onscreen action with as many as 800 clicks a minute. She is guided by Oriol Vinyals, an ex-professional StarCraft player and research scientist at DeepMind, who explains how the program AlphaStar learnt to play the game and beat a top professional player. Elsewhere, she explores systems that are learning to cooperate in a digital version of the playground favourite 'Capture the Flag'. If you have a question or feedback on the series, message us on Twitter (@DeepMindAI, using the hashtag #DMpodcast) or email us at podcast@deepmind.com.
Further reading:
The Economist: Why AI researchers like video games
DeepMind blogs: Capture the Flag and AlphaStar
Professional StarCraft II player MaNa gives his impressions of AlphaStar and DeepMind
OpenAI's work on Dota 2
The New York Times: DeepMind can now beat us at multiplayer games, too
Royal Society: Machine Learning resources
DeepMind: The Inside Story of AlphaStar
Andrej Karpathy: Deep Reinforcement Learning: Pong from Pixels
Interviewees: Research scientists Max Jaderberg and Raia Hadsell; lead researchers David Silver and Oriol Vinyals; and Director of Research Koray Kavukcuoglu.
Credits: Presenter: Hannah Fry; Editor: David Prest; Senior Producer: Louisa Field; Producers: Amy Racs, Dan Hardoon; Binaural Sound: Lucinda Mason-Brown; Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet). Commissioned by DeepMind.
In episode eleven of season five, we dig into just what a data trust actually is, take a look at citation trends and other places (like PMLR) where you can dig up data to understand the field, and talk with Raia Hadsell of DeepMind.
Episode 10 – Dr Raia Hadsell
In this final (sniff) episode of series 1, we were lucky enough to catch up with Dr Raia Hadsell, senior research scientist with world-renowned artificial intelligence research company DeepMind. DeepMind describe their mission as being ‘to push the boundaries of AI, developing programmes that can learn to solve a complex problem without needing to be taught how’. Artificial intelligence is an increasingly important part of our day-to-day lives and, whatever your feelings on it, it’s only going to become more important over the coming decades. So we were pretty chuffed that Raia was up for a chat! Due in no small part to the Terminator films, there are sci-fi myths aplenty surrounding the world of AI research. We decided to use today to demystify the subject and get a better insight into what day-to-day AI research actually looks like for those carrying it out. What we found was that, in many ways, working with AI is like working with a clever and slightly mischievous child… Happy listening, and we’ll be back later this year with series 2! Welcome back to the Pint of Science podcast. Each week, we meet scientists in pubs around the UK to find out about their lives, their universe, and everything. From *how* fruit flies love to *why* humans love, via jumping into volcanoes, winning Olympic medals, where we came from and more! Like what we do? Let us know using the hashtag #pintcast19. And be sure to subscribe to us and rate us on your favourite podcasting platform! Subscribe: Spotify | TuneIn | Stitcher | Apple The Pint of Science podcast is a part of the Pint of Science Festival, the world's largest science communication festival. Thousands of guests and speakers descend on pubs in hundreds of cities worldwide to introduce science in a fun, engaging, and usually pint-fuelled way. This podcast is made possible with the help of our sponsor Brilliant.org. Do check them out, and visit www.brilliant.org/pintofscience/ where the first 200 people who sign up will get 20% off a Premium plan! About Raia Hadsell, this week's guest: Raia is originally from California; her undergraduate degree was in religion and philosophy, but she made the transition to computer science at PhD level, with a thesis entitled ‘Learning Long-range vision for off-road robots’. She worked as a postdoc at Carnegie Mellon University and a research scientist at SRI International, both in the US, before moving to London in 2014 to join the DeepMind team. Follow Raia on Twitter (@RaiaHadsell) Subscribe: Spotify | TuneIn | Stitcher | Apple
Victoria Carr chats to Dr Raia Hadsell, a senior research scientist working on deep learning at Google DeepMind. After completing an undergraduate degree in religion and philosophy, Raia decided to pursue research in artificial intelligence - similarly intellectually challenging and thought-provoking, but more concrete in method. Since then, she has forged a successful career in artificial intelligence, pushing the boundaries of knowledge in AI navigation and making significant scientific contributions to deep learning algorithms and the study of mammalian navigation.
Jeff Dean, the lead of Google AI, is on the podcast this week to talk with Melanie and Mark about AI and machine learning research, his upcoming talk at Deep Learning Indaba, and how his educational pursuit of parallel processing and computer systems led his career path into AI. We covered topics from his team's work with TPUs and TensorFlow, the impact computer vision and speech recognition are having on AI advancements, and how simulations are being used to help advance science in areas like quantum chemistry. We also discussed his passion for the development of AI talent on the African continent and the opening of Google AI Ghana. It's a full episode where we cover a lot of ground. One piece of advice he left us with: "the way to do interesting things is to partner with people who know things you don't." Listen to the end of the podcast, where our colleague Gabe Weiss helps us answer the question of the week about how to get data from IoT Core to display in real time on a web front end.
Jeff Dean
Jeff Dean joined Google in 1999 and is currently a Google Senior Fellow, leading Google AI and related research efforts. His teams are working on systems for speech recognition, computer vision, language understanding, and various other machine learning tasks. He has co-designed/implemented many generations of Google's crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google's distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, the open-source TensorFlow system for machine learning, and a variety of internal and external libraries and developer tools. Jeff received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on whole-program optimization techniques for object-oriented languages. He received a B.S. in computer science & economics from the University of Minnesota in 1990. He is a member of the National Academy of Engineering and of the American Academy of Arts and Sciences, a Fellow of the Association for Computing Machinery (ACM), a Fellow of the American Association for the Advancement of Science (AAAS), and a winner of the ACM Prize in Computing.
Cool things of the week:
Google Dataset Search is in beta (site)
Expanding our Public Datasets for geospatial and ML-based analytics (blog); Zip Code Tabulation Area (ZCTA) (site)
Google AI and Kaggle Inclusive Images Challenge (site)
We are rated in the top 100 technology podcasts on iTunes (site)
What makes TPUs fine-tuned for deep learning? (blog)
Interview:
Jeff Dean on Google AI (profile)
Deep Learning Indaba (site)
Google AI (site)
Google AI in Ghana (blog)
Google Brain (site)
Google Cloud (site)
DeepMind (site)
Cloud TPU (site)
Google I/O: Effective ML with Cloud TPUs (video)
Liquid cooling system (article)
DAWNBench Results (site)
Waymo (Alphabet's Autonomous Car) (site)
DeepMind AlphaGo (site)
OpenAI Dota 2 (blog)
Moustapha Cisse (profile)
Sanjay Ghemawat (profile)
Neural Information Processing Systems Conference (site)
Previous Podcasts: GCP Podcast Episode 117: Cloud AI with Dr. Fei-Fei Li (podcast); GCP Podcast Episode 136: Robotics, Navigation, and Reinforcement Learning with Raia Hadsell (podcast); TWiML & AI: Systems and Software for ML at Scale with Jeff Dean (podcast)
Additional Resources:
arXiv.org (site)
Chris Olah (blog)
Distill Journal (site)
Google's Machine Learning Crash Course (site)
Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville (book and site)
NAE Grand Challenges for Engineering (site)
Senior Thesis: Parallel Implementations of Neural Network Training: Two Back-Propagation Approaches by Jeff Dean (paper and tweet)
Machine Learning for Systems and Systems for Machine Learning (slides)
Question of the week:
How do I get data from IoT Core to display in real time on a web front end?
Building IoT Applications on Google Cloud (video); MQTT (site); Cloud Pub/Sub (site); Cloud Functions (site); Cloud Firestore (site)
Where can you find us next?
Melanie is at Deep Learning Indaba and Mark is at Tokyo NEXT. We'll both be at Strangeloop at the end of the month. Gabe will be at Cloud Next London and the IoT World Congress.
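One common way to wire up the question-of-the-week pipeline, given the resources listed above, is: IoT Core publishes device telemetry to a Cloud Pub/Sub topic, a Pub/Sub-triggered Cloud Function writes each message into Cloud Firestore, and the web front end subscribes to Firestore snapshot listeners for real-time updates. The sketch below is a hypothetical illustration of the middle step only, not the exact solution discussed on air; the function name, collection name, and the deviceId attribute are illustrative assumptions.

```python
import base64
import json

from google.cloud import firestore

# Firestore client created once and reused across function invocations.
db = firestore.Client()


def relay_telemetry(event, context):
    """Hypothetical Pub/Sub-triggered background Cloud Function (Python).

    IoT Core publishes device messages to a Pub/Sub topic; this function decodes
    each message and writes it to Firestore, where a web client listening with a
    snapshot listener sees the update in near real time.
    """
    # Pub/Sub background functions deliver the message body base64-encoded.
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    device_id = event.get("attributes", {}).get("deviceId", "unknown-device")

    # One document per device; merge=True preserves fields not present in this message.
    db.collection("telemetry").document(device_id).set(
        {"latest": payload, "updated": firestore.SERVER_TIMESTAMP},
        merge=True,
    )
```

The front end then only needs a Firestore listener on the telemetry collection; no polling or custom WebSocket server is required.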
On this episode of the podcast, Mark and Melanie delve into the fascinating world of robotics and reinforcement learning. We discuss advances in the field, including how robots are learning to navigate new surroundings and how machine learning is helping us understand the human mind better.
Raia Hadsell
Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for the past 15 years. After completing a PhD at New York University, which featured a self-supervised deep learning vision system for a mobile robot, her research continued at Carnegie Mellon's Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to develop artificial general intelligence. Her current research focuses on the challenge of interactive learning for AI agents and robots, including subjects such as neural memory for real-world navigation and lifelong learning.
Cool things of the week:
AI Adventures: How to Make a Data Science Project with Kaggle (site)
Predict your future costs with Google Cloud Billing cost forecast (blog and site)
Kaggle Competition Winning Solutions (site)
Google Cloud Platform Podcast Episode 84: Kaggle with Wendy Kan (podcast)
Introducing Jib — build Java Docker images better (blog)
Google Container Tools (site)
Interview:
Raia Hadsell (site)
Learning to Navigate Cities Without a Map (research paper and blog)
Unsupervised Predictive Memory in a Goal-Directed Agent | MERLIN (research paper)
Nature: Vector-based navigation using grid-like representations in AI (research paper)
DeepMind has trained an AI to unlock the mysteries of your brain (site)
Navigating with grid-like representations in artificial agents (blog)
DeepMind (site and blog)
Boston Dynamics (site)
Google Brain Robotics (site)
Transylvanian Machine Learning Summer School (site)
IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures (research paper)
Edvard Moser - Grid Cells and the Brain's Spatial Mapping System (video)
The Nobel Prize in Physiology or Medicine 2014 (site)
TensorFlow (site)
Question of the week:
How do you connect a Google Cloud Source repository to an existing Git repository? (site and blog)
Where can you find us next?
We'll both be at Cloud NEXT! Mark will be talking about Agones (blog). Melanie will speak at PyCon Russia on July 22nd.
IFE Distinguished Visitor Lecture, recorded 10 August 2017 at QUT