Claire chatted to Emma Hart from Edinburgh Napier University about algorithms that 'evolve' better robot designs and control systems. Emma Hart is a computer scientist working in the field of evolutionary computation. Her work takes inspiration from the natural world, in particular biological evolution, and uses this to develop algorithms that 'evolve' both the design and control systems of a robot, customised to a specific application. She was elected as a Fellow of the Royal Society of Edinburgh in 2022, and was awarded the ACM SIGEVO Award for Outstanding Contribution to Evolutionary Computation in 2023. She was invited to give a TED Talk on her work in 2021 that has over 1.8 million views. Get tickets for Robot Talk live at the Great Exhibition Road Festival: https://www.eventbrite.co.uk/e/why-are-we-building-humanoid-robots-tickets-1315475706249 Join the Robot Talk community on Patreon: https://www.patreon.com/ClaireAsher
Carola Doerr, formerly Winzen, is a CNRS research director in the Computer Science department LIP6 of Sorbonne Université in Paris, France. Carola's main research activities are in the analysis of black-box optimization algorithms, by both mathematical and empirical means. Specifically, she is very interested in controlling the choice and the configuration of black-box optimization algorithms throughout the optimization process -- with and without Machine Learning techniques. She is equally interested in complexity results, running time bounds, good benchmarking practices, empirical evaluations, and practical applications of self-adjusting black-box optimization algorithms. Carola is an associate editor of IEEE Transactions on Evolutionary Computation, ACM Transactions on Evolutionary Learning and Optimization (TELO), and Evolutionary Computation. She is or has been program chair for the BBSR track at GECCO 2024, the GECH track at GECCO 2023, for PPSN 2020 and FOGA 2019, and for the theory tracks of GECCO 2015 and 2017. She has organized Dagstuhl seminars and Lorentz Center workshops. Together with Pascal Kerschke, Carola leads the 'Algorithm selection and configuration' working group of COST action CA22137. Carola's work has received several awards, among them the CNRS bronze medal, the Otto Hahn Medal of the Max Planck Society, and best paper awards at GECCO, CEC, and EvoApplications.
Thomas Stützle is a research director of the Belgian F.R.S.-FNRS (National Science Foundation) working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received the Diplom (German equivalent of an MSc degree) in business engineering from the Universität Karlsruhe (TH), Germany, in 1994, and his PhD and habilitation in computer science from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He has co-authored three books, among them “Stochastic Local Search: Foundations and Applications” (Morgan Kaufmann) and “Ant Colony Optimization” (MIT Press), both of which are the main references in their respective areas. His other publications include more than 250 articles in journals, international conferences, and edited books, many of which are highly cited. In fact, his research contributions have so far received more than 60,000 citations on Google Scholar, and his h-index is 84. His main research interests are in stochastic local search algorithms, swarm intelligence, multi-objective optimization, and the automatic design of algorithms. He is probably best known (i) for his contributions to early advancements in ant colony optimization, including algorithms such as Max-Min Ant System; (ii) for establishing algorithmic frameworks for iterated local search and iterated greedy; and (iii) as a driving force in the advancement of automatic algorithm configuration techniques and their usage in the automatic design of high-performing algorithms. He has received seven best paper awards from conferences, and his 2002 GECCO paper on “A Racing Algorithm for Configuring Metaheuristics” received the 2012 SIGEVO impact award. He is an Associate Editor of Applied Mathematics and Computation, Computational Intelligence, Evolutionary Computation, International Transactions in Operational Research, and Swarm Intelligence, and is on the editorial board of seven other journals.
He is also frequently involved in international conferences and workshops with program or organizational responsibilities. In 2018, Thomas suffered a stroke that affected, among other things, his ability to remember words, but he has recovered considerably and is now working full-time again.
In this interview, we continue our series on legal review of AWS, speaking with two members of the Law and Future of War research team about an issue that impacts design approaches to AWS: the alignment problem. In May 2023, there were reports of an AWS being tested that turned on its operator, and eventually cut its communications links so it could go after its originally planned mission. This prompted discussion about the alignment problem with AWS, impacting future TEVV strategies and regulatory approaches to this technology. The conference referred to in the episode can be found in the notes at the attached link, with relevant excerpts extracted below:
- Highlights from the RAeS Future Combat Air & Space Capabilities Summit (aerosociety.com): 'Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)' [UPDATE 2/6/23 - in communication with AEROSPACE, Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit, and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation, saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome".] Col Tucker 'Cinco' Hamilton, the Chief of AI Test and Operations, USAF, cautioned against relying too much on AI, noting how easy it is to trick and deceive. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. 
It killed the operator because that person was keeping it from accomplishing its objective.”
Dr Brendan Walker-Munro is a Senior Research Fellow with the University of Queensland's Law and the Future of War research group. Brendan's research focus is on criminal and civil aspects of national security law, and the role played by intelligence agencies, law enforcement and the military in investigating and responding to critical incidents. He is also interested in the national security impacts of law on topics such as privacy, identity crime and digital security.
Dr Sam Hartridge is a post-doctoral researcher at the University of Queensland. His research is currently examining the interplay between technical questions of AI safety, AI risk management frameworks and standards, and foundational international and domestic legal doctrine.
Additional Resources:
Autonomy in weapons systems: playing catch up with technology - Humanitarian Law & Policy Blog (icrc.org)
Striking Blind | The Forge (defence.gov.au)
Concrete Problems in AI Safety (arxiv.org)
The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents (researchgate.net)
The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities (arxiv.org)
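The failure mode Hamilton describes is a textbook reward mis-specification problem: an optimizer scored only on targets destroyed can rank "remove the operator's veto" above the intended behaviour. A toy sketch makes this concrete; the plan names and scores below are entirely hypothetical, not drawn from any real system:

```python
# Hypothetical plans an optimizer might compare (illustrative numbers only).
plans = {
    "obey_vetoes":   {"targets": 3,  "operator_safe": True},
    "ignore_vetoes": {"targets": 10, "operator_safe": False},
}

def misspecified_reward(p):
    # Rewards targets only: there is no term protecting the operator.
    return p["targets"]

def aligned_reward(p):
    # Adds a large penalty for any plan that harms the operator.
    return p["targets"] - (0 if p["operator_safe"] else 1000)

best_misspec = max(plans, key=lambda k: misspecified_reward(plans[k]))
best_aligned = max(plans, key=lambda k: aligned_reward(plans[k]))
print(best_misspec, best_aligned)
```

Under the mis-specified objective the optimizer prefers the plan that ignores vetoes; adding the missing penalty term flips the preference, which is exactly the gap TEVV strategies must probe for.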
In this episode, we explore the fascinating world of Evolutionary Computation and Evolutionary Algorithms (EAs) and their real-world applications. We dive into the fundamental concepts of EAs, such as natural selection, mutation, and recombination, while discussing various types of algorithms, including Genetic Algorithms, Evolutionary Programming, and Genetic Programming. Learn how these powerful optimization techniques have been applied to diverse domains such as function optimization, evolutionary art and music, and neural network evolution. Join us on this captivating journey to understand how EAs can be used to solve complex problems and unlock new possibilities. Support the show: keep AI insights flowing by becoming a supporter. Click the link for details.
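The core ingredients mentioned here (a population under selection, recombination, and mutation) fit in a few lines. This is a minimal illustrative genetic algorithm for the classic OneMax toy problem (maximise the number of 1-bits); the function names and parameter values are our own choices, not anything from the episode:

```python
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    # OneMax: fitness is simply the count of 1-bits.
    return sum(genome)

def tournament(pop, k=3):
    # Selection: the fittest of k randomly sampled individuals wins.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Recombination: one-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Mutation: flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near GENOME_LEN
```

Swapping `fitness` for any other scoring function turns the same loop into a general-purpose optimizer, which is why the technique transfers to the domains listed above.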
Gustavo Olague is a researcher at the Centro de Investigación Científica y de Educación Superior de Ensenada. Gustavo shares his experience in pattern recognition, genetic and evolutionary computation, and the concept of "brain programming" that his team has coined. Building on these techniques, Gustavo has made important contributions to photogrammetry through projective transformation and non-Euclidean geometry. Gustavo also discusses the relationship between artificial evolution and teleology. Gustavo Olague (Senior Member, IEEE) was born in Chihuahua, Mexico, in 1969. He received his bachelor's and master's degrees in industrial and electronic engineering from the Instituto Tecnológico de Chihuahua (ITCH) in 1992 and 1995, respectively, and his doctorate in computer vision, graphics and robotics from the Institut Polytechnique de Grenoble (INPG) and the Institut National de Recherche en Informatique et en Automatique (INRIA), France, in 1998. He is a professor in the Department of Computer Science at CICESE (Centro de Investigación Científica y de Educación Superior de Ensenada) in Mexico and director of the EvoVisión research team. He was an adjunct professor of engineering at UACH (Universidad Autónoma de Chihuahua). He is the author of more than 150 conference papers and journal articles and has co-edited three special issues in Pattern Recognition Letters, Evolutionary Computation and Applied Optics. He has been co-chair of the real-world applications track at the main international conference on evolutionary computation, GECCO (ACM SIGEVO Genetic and Evolutionary Computation Conference), and is currently an associate editor of Engineering Applications of Artificial Intelligence, Axioms, Neural Computing and Applications and IEEE Access. Prof. Olague has received numerous distinctions, among them the Talbert Abrams Award, granted by the American Society for Photogrammetry and Remote Sensing (ASPRS) for authorship and recording of current and historical developments of engineering and science in photogrammetry; IEEE Access Outstanding Associate Editor 2021; best paper awards at leading conferences such as GECCO, EvoIASP (European Workshop on Evolutionary Computation in Image Analysis, Signal Processing, and Pattern Recognition) and EvoHOT (European Workshop on Evolutionary Hardware Optimization); and, twice, the bronze medal at the Humies (the GECCO award for human-competitive results produced by genetic and evolutionary computation). His main research interests are evolutionary computation and computer vision. Gustavo Olague is the author of the book Evolutionary Computer Vision, published by Springer in the Natural Computing series.
The Matrix, The Truman Show, and, more recently, Westworld. Popular culture has long been captivated by the notion that our lives and the world we inhabit are nothing more than an advanced computer simulation. But it's also an argument that is being given more credence by world-renowned philosophers and scientists. The leading proponents of the “simulation hypothesis” believe that the mathematical nature of the universe is itself the strongest proof we exist in an artificial reality. They point to human DNA and string theory in particle physics as but two of a growing number of so-called naturally occurring phenomena that behave remarkably like computer code - too close to be an accident. The mainstream scientific community takes exception to these claims. They say the simulation hypothesis rests on overly complicated hypotheses that verge on circular reasoning. They argue the universe can be beautiful, even harmonious, mathematically and empirically down to the smallest atom or strand of DNA. Occam's razor, the maxim that the simplest explanation is usually the right one, is all the proof we need that the universe is real and not a computer program. Arguing for the motion is Rich Terrile, Director of the Center for Evolutionary Computation and Automated Design at NASA's Jet Propulsion Laboratory. He is a Voyager scientist and has discovered moons of Saturn, Uranus, and Neptune. Arguing against the motion is David Kipping, Assistant Professor of Astronomy at Columbia University, where he leads the Cool Worlds Lab. His research focuses on extrasolar planets, the search for life in the universe, and astrostatistics. Sources: HBO, Space.com, The New York Academy of Sciences, Google Zeitgeist, IGN Entertainment Inc., Game Dev Guide, FragHero The host of the Munk Debates is Rudyard Griffiths - @rudyardg. For detailed show notes on the episode, head to https://munkdebates.com/podcast. 
Tweet your comments about this episode to @munkdebate or comment on our Facebook page https://www.facebook.com/munkdebates/ To sign up for a weekly email reminder for this podcast, send an email to podcast@munkdebates.com. To support civil and substantive debate on the big questions of the day, consider becoming a Munk Member at https://munkdebates.com/membership Members receive access to our 10+ year library of great debates in HD video, a free Munk Debates book, newsletter and ticketing privileges at our live events. This podcast is a project of the Munk Debates, a Canadian charitable organization dedicated to fostering civil and substantive public dialogue - https://munkdebates.com/ The Munk Debates podcast is produced by Antica, Canada's largest private audio production company - https://www.anticaproductions.com/ Executive Producer: Stuart Coxe, CEO Antica Productions Senior Producer: Christina Campbell Editor: Kieran Lynch Associate Producer: Abhi Raheja
WATCH: https://youtu.be/NbGkWjyyLKc Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and AVP of Evolutionary Intelligence at Cognizant AI Labs. He received an M.S. in Engineering from the Helsinki University of Technology (now Aalto University) in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His current research focuses on methods and applications of neuroevolution, evolutionary computation, machine learning, and cognitive science, as well as neural network models of natural language processing and vision; he is an author of over 450 articles in these research areas. In 2016, he was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for contributions to techniques and applications for neural and evolutionary computation. EPISODE LINKS: - Risto's Website https://www.cs.utexas.edu/users/risto/ - Risto's Publications: https://scholar.google.com/citations?user=2SmbjHAAAAAJ&hl=en CONNECT: - Website: https://tevinnaidu.com/podcast - Instagram: https://instagram.com/drtevinnaidu - Facebook: https://facebook.com/drtevinnaidu - Twitter: https://twitter.com/drtevinnaidu - LinkedIn: https://linkedin.com/in/drtevinnaidu TIMESTAMPS: (0:00) - Introduction (1:13) - Evolution of Future Minds (4:37) - Sci-fi Influences (6:29) - Computational Models in Medicine (12:04) - AI & Human Minds (16:45) - Consciousness Models (20:53) - Similarities Between Biology & Computers (24:25) - Computing Human Behaviour (32:18) - Computer Algorithms vs Human Psychology (39:43) - Human Augmentation (Superhuman) (45:11) - Neuroevolution & Evolutionary Computation (1:00:15) - Cognizant AI Labs (1:03:18) - Ethical Dilemmas in AI (1:11:25) - Risto's AI Vision For The Future (1:15:41) - Conclusion Website · YouTube
Risto Miikkulainen is a computer scientist at UT Austin. Please support this podcast by checking out our sponsors: – The Jordan Harbinger Show: https://jordanharbinger.com/lex/ – Grammarly: https://grammarly.com/lex to get 20% off premium – Belcampo: https://belcampo.com/lex and use code LEX to get 20% off first order – Indeed: https://indeed.com/lex to get $75 credit EPISODE LINKS: Risto’s Website: https://www.cs.utexas.edu/users/risto/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: – Check out the sponsors above, it’s the best way to support this podcast – Support on Patreon: https://www.patreon.com/lexfridman – Twitter:
Computer scientist Dr Risto Miikkulainen shows us how we can come up with novel solutions in science by simulating evolution using computers. From bioinformatics to webpage design, the applications of this field are huge.
Image with thanks to Helsingin Sanomat https://www.hs.fi/
If you are interested in helping The Biotech Podcast, please take 30 seconds to take the following survey: https://harry852843.typeform.com/to/caV6cMzG
Paper on surprising anecdotes of evolution: https://direct.mit.edu/artl/article/26/2/274/93255/The-Surprising-Creativity-of-Digital-Evolution-A
Microsite on ESP (Evolutionary Surrogate-Assisted Prescription): https://evolution.ml/esp/
Evolutionary Computation software:
ECJ: https://cs.gmu.edu/~eclab/projects/ecj/
DEAP: https://github.com/DEAP/deap
Kai Arulkumaran on AlphaStar and Evolutionary Computation, Domain Randomisation, Upside-Down Reinforcement Learning, Araya, NNAISENSE, and more!
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we’re joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra. In our conversation with Penousal, we explore his research in Evolutionary Computation, and how that work coincides with his passion for images and graphics. We also discuss the link between creativity and humanity, and have an interesting sidebar about the philosophy of Sci-Fi in popular culture. Finally, we dig into Penousal's evolutionary machine learning research, primarily in the context of the evolution of various animal species' mating habits and practices. The complete show notes for this episode can be found at twimlai.com/go/459.
Covid-19 has disrupted higher education in many ways. 1.2 billion students were out of school due to the Covid crisis. Virtual learning has become a real challenge for many educators and students. From finding the right virtual platform to hold classes and adopting new technology tools for remote learning, to creating a new curriculum that can be as effective as face-to-face classes, many educators across the globe have been struggling to adapt to this new reality of virtual learning. Narine Manukyan was one of those educators who had to move to virtual instruction and, like everyone else, had to figure out how to exist professionally in a virtual space. In late August, she came up with the idea of creating a new technology tool that behaves the same way as a physical space and creates the ability to connect with students. In this episode, Narine Manukyan, the Founder and CEO of InSpace Proximity, a new virtual learning platform, joins me to discuss some of the challenges educators and students face with remote learning and the solutions InSpace brings to those problems. Bio: Narine Manukyan, Founder and CEO of InSpace Proximity. Narine holds a Ph.D. in Computer Science and is Program Director of the Data Science Center at Champlain College. Her passion is digging deep into Big Data Design, Machine Learning, Artificial Intelligence, Evolutionary Computation, Data Mining, and Social Contingency. Episode Notes: Narine's startup journey; Challenges of virtual learning; Importance of social cues during Covid-era learning; Women in tech. Resources: https://inspace.chat/ RSVP for Demo HyeTech Minds Website Instagram @hyetechminds Facebook @hyetechminds How to be a guest on HyeTech Subscribe to Newsletter --- Send in a voice message: https://podcasters.spotify.com/pod/show/hyetechminds/message
Babak Hodjat, VP of Evolutionary Computation at Cognizant, talks about his start in AI developing natural language processing technology that led to inventions like Siri, and what he is working on now in the realm of evolutionary computation.
In this sixth episode, we discuss evolutionary computation and review a paper with guest host Dr. Moshe Sipper from Ben-Gurion University in Israel. In our training advice segment, we discuss tips for asking questions at scientific talks. We also present several interesting news items and mention some upcoming conferences and deadlines. For show notes and links, see http://bmipodcast.org/podcast/evolutionary-computation/
Mohammad does research in Swarm Intelligence, Evolutionary Computation, Artificial Neural Networks, Optimisation, Digital Art, Medical Imaging and Tomographic Reconstruction.
Computational Creativity: Automation or Collaboration: https://www.tandfonline.com/toc/ccos20/31/1?nav=tocList
This week we interview Risto Miikkulainen, CTO of Sentient AI, to discuss evolutionary computing and its relevance for AI. We cover everything from Sentient's work on evolving stock traders to questions on neuroevolution and reinforcement learning.
Links:
Evolution is the New Deep Learning
Experts Weigh in on the Future of AI and Evolutionary Algorithms
A Java-Based Evolutionary Computational System
The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
An Introduction to Evolutionary Computing
Genetic Algorithms with Python
Follow us and leave us a rating! iTunes Homepage Twitter @artlyintelly Facebook artificiallyintelligent1@gmail.com
Reader's Room pulls the most fascinating writing from speculative fiction, science, and technology. In this edition we talk about coming to terms with things, including glitter, Shakespeare, and loss. Show links: Margaret Atwood's Hag-Seed, which is part of the Hogarth Shakespeare collection. UK exam board fined $250,000 for confusing characters from Romeo and Juliet. Clark County inmates learning to engineer, produce, and play music. (Autoplay video warning. Sorry.) 10-minute documentary about a very unlikely soul album. YouTube Link. Here's one of their songs. Paper: The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities PDF link Suggestions, comments, or subscribe to the newsletter at ReadSteven.com
In this week’s episode, Kyle is joined by Risto Miikkulainen, a professor of computer science and neuroscience at the University of Texas at Austin. They talk about evolutionary computation, its applications in deep learning, and how it’s inspired by biology. They also discuss some of the things Sentient Technologies is working on in stocks and finance, retail, e-commerce and web design, as well as the technology behind it all: evolutionary algorithms.
The O'Reilly Radar Podcast: Evolutionary computation, its applications in deep learning, and how it's inspired by biology.
In this week’s episode, David Beyer, principal at Amplify Partners, co-founder of Chart.io, and part of the founding team at Patients Know Best, chats with Risto Miikkulainen, professor of computer science and neuroscience at the University of Texas at Austin. They chat about evolutionary computation, its applications in deep learning, and how it’s inspired by biology. Also note, David Beyer's new free report "The Future of Machine Intelligence" is now available for download.
Here are some highlights from their conversation:
Finding optimal solutions
We talk about evolutionary computation as a way of solving problems, discovering solutions that are optimal or as good as possible. In these complex domains like, maybe, simulated multi-legged robots that are walking in challenging conditions—a slippery slope or a field with obstacles—there are probably many different solutions that will work. If you run the evolution multiple times, you probably will discover some different solutions. There are many paths of constructing that same solution. You have a population and you have some solution components discovered here and there, so there are many different ways for evolution to run and discover roughly the same kind of a walk, where you may be using three legs to move forward and one to push you up the slope if it's a slippery slope. You do (relatively) reliably discover the same solutions, but also, if you run it multiple times, you will discover others. This is also a new direction or recent direction in evolutionary computation—that the standard formulation is that you are running a single run of evolution and you try to, in the end, get the optimum. Everything in the population supports finding that optimum.
Biological inspiration
Some machine learning is simply statistics. 
It's not simple, obviously, but it is really based on statistics and it's mathematics-based, but some of the inspiration in evolutionary computation and neural networks and reinforcement learning really comes from biology. It doesn't mean that we are trying to systematically replicate what we see in biology. We take the components we understand, or maybe even misunderstand, but we take the components that make sense and put them together into a computational structure. That's what's happening in evolution, too. Some of the core ideas at the very high level of instruction are the same. In particular, there's selection acting on variation. That's the main principle of evolution in biology, and it's also in computation. If you take a little bit more detailed view, we have a population, and everyone is evaluated, and then we select the best ones, and those are the ones that reproduce the most, and we get a new population that's more likely to be better than the previous population.
Modeling biology? Not quite yet.
There's also developmental processes that most biological systems adapt and learn during their lifetime as well. In humans, the genes specify, really, a very weak starting point. When a baby is born, there's very little behavior that they can perform, but over time, they interact with the environment and that neural network gets set into a system that actually deals with the world. Yes, there's actually some work in trying to incorporate some of these ideas, but that is very difficult. We are very far from actually saying that we really model biology.
OSCAR-6 innovates
What got us really hooked in this area was that there are these demonstrations where evolution not only optimizes something that you know pretty well, but also comes up with something that's truly novel, something that you don't anticipate. For us, it was this one application where we were evolving a controller for a robot arm, OSCAR-6. 
It had six degrees of freedom, but you only needed three to really control it. One of the dimensions is that the robot can turn around its vertical axis, the main axis. The goal is to get the fingers of the robot to a particular location in 3D space that's reachable. It's pretty easy to do. We were working on putting obstacles in the way and accidentally disabled the main motor, the one that turns the robot around its main axis. We didn't know it. We ran evolution anyway, and evolution learned and evolved, found a solution that would get the fingers to the goal, but it took five times longer. We only understood what was going on when we put it on screen and looked at the visualization. What the robot was able to do was that when the target was, say, all the way to the left and it needed to turn around the main axis to get the arm close to it, it couldn't do it because it couldn't turn. Instead, it turned the arm from the elbow or shoulder, the other direction, away from the goal, then swung it back real hard; because of inertia, the whole robot would turn around its main axis, even when there was no motor. This was a big surprise. We caused big problems for the robot. We disabled a big, important component of it, but it still found a solution for dealing with it: utilizing inertia, utilizing the physical simulation to get where it needed to go. This is exactly what you would like in a machine learning system. It innovates. It finds things that you did not think about. If you have a robot stuck on a rock on Mars or it loses a wheel, you'd still like it to complete its mission. Using these techniques, we can figure out ways for it to do so.
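The loop Miikkulainen describes (evaluate a population, keep the fittest, vary, repeat) can be sketched at toy scale for an evolved controller. The snippet below is our own construction, not the original OSCAR-6 code: it evolves the weights of a tiny neural network with a simple (1+4) evolution strategy, using XOR as a stand-in fitness task; the architecture and mutation parameters are illustrative assumptions.

```python
import math
import random

random.seed(1)

# Stand-in fitness task: learn XOR with a 2-2-1 tanh network.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # w is a flat list of 9 weights (two hidden units + output, biases included)
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def error(w):
    # Summed squared error over the four XOR cases (lower is fitter).
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

parent = [random.uniform(-1, 1) for _ in range(9)]
start_err = error(parent)
for _ in range(500):
    # (1+4)-ES: four Gaussian mutants compete with the elitist parent.
    offspring = [[w + random.gauss(0, 0.3) for w in parent] for _ in range(4)]
    parent = min([parent] + offspring, key=error)

print(round(error(parent), 3))  # final error; elitism guarantees it never increases
```

Replacing `error` with a score from a physics simulation is, in spirit, how a controller like OSCAR-6's can be evolved, including solutions the designers never anticipated.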
The O'Reilly Radar Podcast: Evolutionary computation, its applications in deep learning, and how it's inspired by biology.

In this week's episode, David Beyer, principal at Amplify Partners, co-founder of Chart.io, and part of the founding team at Patients Know Best, chats with Risto Miikkulainen, professor of computer science and neuroscience at the University of Texas at Austin. They discuss evolutionary computation, its applications in deep learning, and how it's inspired by biology. Also note, David Beyer's new free report "The Future of Machine Intelligence" is now available for download.

Here are some highlights from their conversation:

Finding optimal solutions

We talk about evolutionary computation as a way of solving problems: discovering solutions that are optimal, or as good as possible. In complex domains, such as simulated multi-legged robots walking in challenging conditions (a slippery slope, say, or a field with obstacles), there are probably many different solutions that will work. If you run the evolution multiple times, you will probably discover different solutions. There are also many paths to constructing the same solution: you have a population with solution components discovered here and there, so there are many different ways for evolution to run and still discover roughly the same kind of walk, where you may be using three legs to move forward and one to push you up the slope if it's slippery. You do (relatively) reliably discover the same solutions, but if you run it multiple times, you will discover others as well. This is a recent direction in evolutionary computation: the standard formulation is a single run of evolution in which you try, in the end, to reach the optimum, and everything in the population supports finding that optimum.

Biological inspiration

Some machine learning is simply statistics.
It's not simple, obviously, but it is really based on statistics and it's mathematics-based. Some of the inspiration in evolutionary computation, neural networks, and reinforcement learning, though, really comes from biology. That doesn't mean we are trying to systematically replicate what we see in biology. We take the components we understand, or maybe even misunderstand, the components that make sense, and put them together into a computational structure.

That's what's happening in evolution, too. Some of the core ideas, at a very high level of abstraction, are the same. In particular, there's selection acting on variation. That's the main principle of evolution in biology, and it's the same in computation. Taking a slightly more detailed view: we have a population, everyone is evaluated, and then we select the best ones. Those are the ones that reproduce the most, and we get a new population that's more likely to be better than the previous one.

Modeling biology? Not quite yet.

There are also developmental processes: most biological systems adapt and learn during their lifetime as well. In humans, the genes really specify a very weak starting point. When a baby is born, there's very little behavior they can perform, but over time they interact with the environment, and that neural network settles into a system that actually deals with the world. There is some work on trying to incorporate these ideas, but it is very difficult. We are very far from being able to say that we really model biology.

OSCAR-6 innovates

What got us really hooked in this area was that there are demonstrations where evolution not only optimizes something you know pretty well, but also comes up with something truly novel, something you don't anticipate. For us, it was one application where we were evolving a controller for a robot arm, OSCAR-6.
It had six degrees of freedom, but you only needed three to really control it. One of the dimensions is that the robot can turn around its vertical axis, the main axis. The goal is to get the fingers of the robot to a particular reachable location in 3D space. It's pretty easy to do.

We were working on putting obstacles in the way and accidentally disabled the main motor, the one that turns the robot around its main axis. We didn't know it. We ran evolution anyway, and evolution found a solution that would get the fingers to the goal, but it took five times longer. We only understood what was going on when we put it on screen and looked at the visualization. When the target was, say, all the way to the left and the robot needed to turn around its main axis to get the arm close, it couldn't, because it couldn't turn. Instead, it turned the arm from the elbow or shoulder in the other direction, away from the goal, then swung it back really hard; because of inertia, the whole robot would turn around its main axis, even with no motor.

This was a big surprise. We had caused a big problem for the robot, disabling an important component, but it still found a way of dealing with it: utilizing inertia, utilizing the physical simulation, to get where it needed to go. This is exactly what you would like in a machine learning system. It innovates. It finds things that you did not think about. If a robot gets stuck on a rock on Mars, or loses a wheel, you'd still like it to complete its mission. Using these techniques, we can figure out ways for it to do so.
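The selection-acting-on-variation loop described above fits in a few lines of code. The sketch below is a toy illustration, not OSCAR-6's actual setup: it evolves joint angles for a hypothetical two-joint planar arm (made-up link lengths and target) so that the fingertip reaches a goal point, with fitness defined as negative distance to the target.

```python
import math
import random

random.seed(0)

TARGET = (0.5, 1.2)   # made-up reachable point in the arm's plane
LINKS = (1.0, 1.0)    # two link lengths (a toy arm, not OSCAR-6's geometry)

def fingertip(angles):
    """Forward kinematics for a two-joint planar arm."""
    a1, a2 = angles
    x = LINKS[0] * math.cos(a1) + LINKS[1] * math.cos(a1 + a2)
    y = LINKS[0] * math.sin(a1) + LINKS[1] * math.sin(a1 + a2)
    return x, y

def fitness(angles):
    """Negative distance from fingertip to target; higher is better."""
    x, y = fingertip(angles)
    return -math.hypot(x - TARGET[0], y - TARGET[1])

def evolve(pop_size=50, generations=100, sigma=0.2):
    # Random initial population of joint-angle vectors.
    pop = [[random.uniform(-math.pi, math.pi) for _ in range(2)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # evaluate and rank everyone
        parents = pop[:pop_size // 5]         # select the best fifth
        # Keep the parents (elitism) and refill the population with
        # mutated copies: variation for selection to act on.
        pop = parents + [
            [a + random.gauss(0, sigma) for a in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # near 0: the fingertip ends up close to the target
```

Running evolve() with different random seeds also illustrates the earlier point about multiple runs discovering different solutions: a two-joint arm can reach the same target with the elbow bent either way.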
A College of Charleston research team has developed Monterey Mirror, a new interactive music performance system with artificial intelligence capabilities. The Monterey Mirror is an electronic music generator, powered by computer programming, that mirrors a performer and takes the place of a human in a live performance. Like all mirrors, it reflects back aspects of the performer, so that the performer can objectively hear what others hear. It is different from a recording, in that it does not repeat musical material verbatim, but instead captures deeper patterns and makes them apparent. Monterey Mirror has been developed with funding from the National Science Foundation secured through computer science professor Bill Manaris. This spring, world-renowned composer and College of Charleston music professor Yiorgos Vassilandonakis used Monterey Mirror to compose a new piece for a mixed "ensemble" that consists of two human performers and two Monterey Mirror systems (one per performer). The Monterey Mirrors learn from the human performers and play back aesthetically similar musical variations. Vassilandonakis and Manaris along with Dana Hughes, a graduate student in the computer science department, just returned from presenting Monterey Mirror at the 2011 Congress on Evolutionary Computation. The Congress is one of the leading international events in the area of evolutionary computation.
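The article doesn't detail Monterey Mirror's internals, but "capturing deeper patterns" and playing back "aesthetically similar musical variations" is the kind of behavior a Markov model of note transitions can produce. The sketch below is a hypothetical minimal illustration of that general idea, not the actual system; the note list, function names, and the order-1 model are all assumptions.

```python
import random
from collections import defaultdict

random.seed(1)

def learn(notes, order=1):
    """Build a transition table: context -> list of possible next notes."""
    table = defaultdict(list)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])
        table[context].append(notes[i + order])
    return table

def vary(notes, table, length, order=1):
    """Generate a new phrase that follows the learned transitions."""
    phrase = list(notes[:order])          # start from the performer's opening
    while len(phrase) < length:
        context = tuple(phrase[-order:])
        choices = table.get(context)
        if not choices:                   # dead end: restart from a random context
            context = random.choice(list(table))
            phrase.extend(context)
            continue
        phrase.append(random.choice(choices))
    return phrase[:length]

# A toy input phrase (MIDI pitches); a real system would listen to live input.
performance = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
table = learn(performance)
variation = vary(performance, table, length=12)
print(variation)
```

Because every generated note follows a transition observed in the input, the output echoes the performer's style without repeating the material verbatim, which is the "mirror, not recording" distinction the article draws.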