Today Google DeepMind released AlphaEvolve: a Gemini-powered coding agent for algorithm discovery. It found a faster procedure for 4x4 matrix multiplication, improving on a record that Strassen's famous algorithm had held for 56 years. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work.

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

Authors: Alexander Novikov*, Ngân Vũ*, Marvin Eisenberger*, Emilien Dupont*, Po-Sen Huang*, Adam Zsolt Wagner*, Sergey Shirobokov*, Borislav Kozlovskii*, Francisco J. R. Ruiz, Abbas Mehrabian, M. Pawan Kumar, Abigail See, Swarat Chaudhuri, George Holland, Alex Davies, Sebastian Nowozin, Pushmeet Kohli, Matej Balog*
(* indicates equal contribution or a special designation defined in the paper)

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

AlphaEvolve works like a very smart, tireless programmer. It uses powerful AI language models (like Gemini) to generate ideas for computer code. Then it applies an "evolutionary" process – survival of the fittest for programs. It tries out many different candidate programs, automatically tests how well they solve a problem, and then uses the best ones to inspire new, even better programs (see the sketch after this entry).

Beyond this mathematical breakthrough, AlphaEvolve has already been used to improve real-world systems at Google, such as making its massive data centers run more efficiently and even speeding up the training of the AI models that power AlphaEvolve itself. The discussion also covers how humans work with AlphaEvolve, the challenges of getting AI to make discoveries, and the exciting future of AI helping scientists make new discoveries.

In short, AlphaEvolve is a powerful new AI tool that can invent new algorithms and solve complex problems, showing how AI can be a creative partner in science and engineering.

Guests:
Matej Balog: https://x.com/matejbalog
Alexander Novikov: https://x.com/SashaVNovikov

REFS:
MAP-Elites [Jean-Baptiste Mouret, Jeff Clune]
https://arxiv.org/abs/1504.04909
FunSearch [Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi]
https://www.nature.com/articles/s41586-023-06924-6

TOC:
[00:00:00] Introduction: AlphaEvolve's Breakthroughs, DeepMind's Lineage, and Real-World Impact
[00:12:06] Introducing AlphaEvolve: Concept, Evolutionary Algorithms, and Architecture
[00:16:56] Search Challenges: The Halting Problem and Enabling Creative Leaps
[00:23:20] Knowledge Augmentation: Self-Generated Data, Meta-Prompting, and Library Learning
[00:29:08] Matrix Multiplication Breakthrough: From Strassen to AlphaEvolve's 48 Multiplications
[00:39:11] Problem Representation: Direct Solutions, Constructors, and Search Algorithms
[00:46:06] Developer Reflections: Surprising Outcomes and Superiority over Simple LLM Sampling
[00:51:42] Algorithmic Improvement: Hill Climbing, Program Synthesis, and Intelligibility
[01:00:24] Real-World Application: Complex Evaluations and Robotics
[01:05:39] Role of LLMs & Future: Advanced Models, Recursive Self-Improvement, and Human-AI Collaboration
[01:11:22] Resource Considerations: Compute Costs of AlphaEvolve

This is a trial of posting videos on Spotify – thoughts? Email me or chat in our Discord.
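The loop described above – generate candidate programs with an LLM, evaluate them automatically, and let the best ones seed the next round – can be illustrated with a toy sketch. This is not DeepMind's AlphaEvolve or its API: the "LLM" below is a stub that perturbs a numeric constant, and the task (recover pi) stands in for a real evaluation harness.

```python
# Minimal, illustrative sketch of an LLM-guided evolutionary code search,
# in the spirit of the loop described above. It is NOT DeepMind's AlphaEvolve:
# the "LLM" here is a stub that randomly perturbs a numeric constant, and the
# task (fit a constant close to pi) is a toy stand-in for evaluating real
# candidate programs.
import random

TARGET = 3.14159  # toy objective: discover a program returning ~pi

def evaluate(program_src: str) -> float:
    """Run a candidate program and score it (higher is better)."""
    namespace = {}
    try:
        exec(program_src, namespace)          # candidate defines solve()
        value = namespace["solve"]()
        return -abs(value - TARGET)           # negative error as fitness
    except Exception:
        return float("-inf")                  # broken programs score worst

def llm_propose(parent_src: str) -> str:
    """Stub for an LLM call: mutate the constant in the parent program."""
    old = float(parent_src.split("return ")[1])
    new = old + random.gauss(0.0, 0.5)
    return f"def solve():\n    return {new}"

def evolve(generations: int = 50, population_size: int = 8) -> str:
    population = [f"def solve():\n    return {random.uniform(0, 10)}"
                  for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[: population_size // 2]          # selection
        children = [llm_propose(random.choice(parents))   # "LLM" mutation
                    for _ in range(population_size - len(parents))]
        population = parents + children                   # next generation
    return max(population, key=evaluate)

if __name__ == "__main__":
    best = evolve()
    print(best, "\nfitness:", evaluate(best))
```

In the real system, as described in the episode, the proposal step is a Gemini model editing code and the evaluator is a problem-specific scoring program; the select-evaluate-resample skeleton is the part this sketch preserves.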
This episode is sponsored by Thuma. Thuma is a modern design company that specializes in timeless home essentials that are mindfully made with premium materials and intentional details. To get $100 towards your first bed purchase, go to http://thuma.co/eyeonai

Can AI Ever Reach AGI? Pedro Domingos Explains the Missing Link

In this episode of Eye on AI, renowned computer scientist and author of The Master Algorithm, Pedro Domingos, breaks down what's still missing in our race toward Artificial General Intelligence (AGI) – and why the path forward requires a radical unification of AI's five foundational paradigms: Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers.

Topics covered:
Why deep learning alone won't achieve AGI
How reasoning by analogy could unlock true machine creativity
The role of evolutionary algorithms in building intelligent systems
Why transformers like GPT-4 are impressive – but incomplete
The danger of hype from tech leaders vs. the real science behind AGI
What the Master Algorithm truly means – and why we haven't found it yet

Pedro argues that creativity is easy, reliability is hard, and that reasoning by analogy – not just scaling LLMs – may be the key to Einstein-level breakthroughs in AI. Whether you're an AI researcher, a machine learning engineer, or just curious about the future of artificial intelligence, this is one of the most important conversations on how to actually reach AGI.
In a convergence of machine learning and biology, we reveal that diffusion models are evolutionary algorithms. By considering evolution as a denoising process and reversed evolution as diffusion, we mathematically demonstrate that diffusion models inherently perform evolutionary algorithms, naturally encompassing selection, mutation, and reproductive isolation. Building on this equivalence, we propose the Diffusion Evolution method: an evolutionary algorithm utilizing iterative denoising -- as originally introduced in the context of diffusion models -- to heuristically refine solutions in parameter spaces. Unlike traditional approaches, Diffusion Evolution efficiently identifies multiple optimal solutions and outperforms prominent mainstream evolutionary algorithms. Furthermore, leveraging advanced concepts from diffusion models, namely latent space diffusion and accelerated sampling, we introduce Latent Space Diffusion Evolution, which finds solutions for evolutionary tasks in high-dimensional complex parameter space while significantly reducing computational steps. This parallel between diffusion and evolution not only bridges two different fields but also opens new avenues for mutual enhancement, raising questions about open-ended evolution and potentially utilizing non-Gaussian or discrete diffusion models in the context of Diffusion Evolution. 2024: Yanbo Zhang, Benedikt Hartl, Hananel Hazan, Michael Levin https://arxiv.org/pdf/2410.02543
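One rough way to see the claimed equivalence in code: treat each denoising step as a generation, let a fitness-weighted, locally smoothed average of the population play the role of the "denoised" estimate (selection), and re-inject a shrinking amount of noise (mutation). The sketch below is a toy illustration under those assumptions, not the authors' exact update rule, weighting, or noise schedule, and the two-peak objective is made up purely to show that several optima can survive.

```python
# Toy sketch of the general idea behind Diffusion Evolution: each denoising
# step is one generation; a fitness-weighted local average of the population
# acts as the "denoised" estimate (selection) and shrinking injected noise
# acts as mutation. Illustrative only -- not the paper's exact algorithm.
import numpy as np

def fitness(x: np.ndarray) -> np.ndarray:
    # Made-up multimodal objective with optima near (+2, +2) and (-2, -2).
    return np.exp(-np.sum((x - 2) ** 2, axis=1)) + np.exp(-np.sum((x + 2) ** 2, axis=1))

def diffusion_evolution(pop_size=256, dim=2, steps=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(0.0, 4.0, size=(pop_size, dim))   # start from pure "noise"
    for t in range(steps):
        alpha = 1.0 - t / steps                        # noise level, 1 -> 0
        f = fitness(pop)
        # Local, fitness-weighted estimate of where good solutions lie:
        dists = np.sum((pop[:, None, :] - pop[None, :, :]) ** 2, axis=-1)
        kernel = np.exp(-dists / (2.0 * (1.0 + 4.0 * alpha) ** 2))
        weights = kernel * f[None, :]
        weights /= weights.sum(axis=1, keepdims=True) + 1e-12
        denoised = weights @ pop                       # per-individual "x0" estimate
        # Move toward the denoised estimate and re-inject shrinking noise:
        pop = denoised + alpha * rng.normal(0.0, 0.5, size=pop.shape)
    return pop

if __name__ == "__main__":
    final = diffusion_evolution()
    print("best fitness:", fitness(final).max())
    print("top solutions:\n", final[np.argsort(-fitness(final))[:5]])
```

Because the selection kernel is local, both peaks of the toy objective can persist in the final population, which mirrors the paper's point that the method can retain multiple optima rather than collapsing to one.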
Ben Goertzel discusses AGI development, transhumanism, and the potential societal impacts of superintelligent AI. He predicts human-level AGI by 2029 and argues that the transition to superintelligence could happen within a few years after. Goertzel explores the challenges of AI regulation, the limitations of current language models, and the need for neuro-symbolic approaches in AGI research. He also addresses concerns about resource allocation and cultural perspectives on transhumanism.

TOC:
[00:00:00] AGI Timeline Predictions and Development Speed
[00:00:45] Limitations of Language Models in AGI Development
[00:02:18] Current State and Trends in AI Research and Development
[00:09:02] Emergent Reasoning Capabilities and Limitations of LLMs
[00:18:15] Neuro-Symbolic Approaches and the Future of AI Systems
[00:20:00] Evolutionary Algorithms and LLMs in Creative Tasks
[00:21:25] Symbolic vs. Sub-Symbolic Approaches in AI
[00:28:05] Language as Internal Thought and External Communication
[00:30:20] AGI Development and Goal-Directed Behavior
[00:35:51] Consciousness and AI: Expanding States of Experience
[00:48:50] AI Regulation: Challenges and Approaches
[00:55:35] Challenges in AI Regulation
[00:59:20] AI Alignment and Ethical Considerations
[01:09:15] AGI Development Timeline Predictions
[01:12:40] OpenCog Hyperon and AGI Progress
[01:17:48] Transhumanism and Resource Allocation Debate
[01:20:12] Cultural Perspectives on Transhumanism
[01:23:54] AGI and Post-Scarcity Society
[01:31:35] Challenges and Implications of AGI Development

New! PDF show notes: https://www.dropbox.com/scl/fi/fyetzwgoaf70gpovyfc4x/BenGoertzel.pdf?rlkey=pze5dt9vgf01tf2wip32p5hk5&st=svbcofm3&dl=0

Refs:
00:00:15 Ray Kurzweil's AGI timeline prediction, Ray Kurzweil, https://en.wikipedia.org/wiki/Technological_singularity
00:01:45 Ben Goertzel: SingularityNET founder, Ben Goertzel, https://singularitynet.io/
00:02:35 AGI Conference series, AGI Conference Organizers, https://agi-conf.org/2024/
00:03:55 Ben Goertzel's contributions to AGI, Wikipedia contributors, https://en.wikipedia.org/wiki/Ben_Goertzel
00:11:05 Chain-of-Thought prompting, Subbarao Kambhampati, https://arxiv.org/abs/2405.04776
00:11:35 Algorithmic information content, Pieter Adriaans, https://plato.stanford.edu/entries/information-entropy/
00:12:10 Turing completeness in neural networks, Various contributors, https://plato.stanford.edu/entries/turing-machine/
00:16:15 AlphaGeometry: AI for geometry problems, Trieu, Li, et al., https://www.nature.com/articles/s41586-023-06747-5
00:18:25 Shane Legg and Ben Goertzel's collaboration, Shane Legg, https://en.wikipedia.org/wiki/Shane_Legg
00:20:00 Evolutionary algorithms in music generation, Yanxu Chen, https://arxiv.org/html/2409.03715v1
00:22:00 Peirce's theory of semiotics, Charles Sanders Peirce, https://plato.stanford.edu/entries/peirce-semiotics/
00:28:10 Chomsky's view on language, Noam Chomsky, https://chomsky.info/1983____/
00:34:05 Greg Egan's 'Diaspora', Greg Egan, https://www.amazon.co.uk/Diaspora-post-apocalyptic-thriller-perfect-MIRROR/dp/0575082097
00:40:35 'The Consciousness Explosion', Ben Goertzel & Gabriel Axel Montes, https://www.amazon.com/Consciousness-Explosion-Technological-Experiential-Singularity/dp/B0D8C7QYZD
00:41:55 Ray Kurzweil's books on singularity, Ray Kurzweil, https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology/dp/0143037889
00:50:50 California AI regulation bills, California State Senate, https://sd18.senate.ca.gov/news/senate-unanimously-approves-senator-padillas-artificial-intelligence-package
00:56:40 Limitations of Compute Thresholds, Sara Hooker, https://arxiv.org/abs/2407.05694
00:56:55 'Taming Silicon Valley', Gary F. Marcus, https://www.penguinrandomhouse.com/books/768076/taming-silicon-valley-by-gary-f-marcus/
01:09:15 Kurzweil's AGI prediction update, Ray Kurzweil, https://www.theguardian.com/technology/article/2024/jun/29/ray-kurzweil-google-ai-the-singularity-is-nearer
In this episode, we explore the fascinating world of Evolutionary Computation and Evolutionary Algorithms (EAs) and their real-world applications. We dive into the fundamental concepts of EAs, such as natural selection, mutation, and recombination, while discussing various types of algorithms, including Genetic Algorithms, Evolutionary Programming, and Genetic Programming. Learn how these powerful optimization techniques have been applied to diverse domains such as function optimization, evolutionary art and music, and neural network evolution. Join us in this captivating journey to understand how EAs can be used to solve complex problems and unlock new possibilities.
Support the show. Keep AI insights flowing – become a supporter of the show! Click the link for details.
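For readers new to the topic, the three ingredients named above – selection, mutation, and recombination – fit in a few lines of code. The sketch below is a generic, textbook-style genetic algorithm on the standard OneMax bit-counting problem, not something taken from the episode.

```python
# A compact toy genetic algorithm illustrating selection, recombination
# (crossover), and mutation on the classic OneMax problem: maximise the
# number of 1-bits in a fixed-length bit string.
import random

GENOME_LEN = 30
POP_SIZE = 40

def fitness(genome):
    return sum(genome)                       # OneMax: count of 1-bits

def select(population):
    # Tournament selection: best of two randomly chosen individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(mum, dad):
    point = random.randrange(1, GENOME_LEN)  # single-point recombination
    return mum[:point] + dad[point:]

def mutate(genome, rate=1.0 / GENOME_LEN):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def run(generations=60):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = run()
    print("best fitness:", fitness(best), "of", GENOME_LEN)
```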
Four billion years ago the Earth was bombarded with photons; a short while later, a Tesla got launched into space. This week I'm speaking to Professor Keith Downing about emergent phenomena, of which the above scenario is a prime example – alongside the formation of a baby from a fertilised egg, economies from individual interactions and, for good measure, alcohol from respiring yeast!

Full synopsis:
0:00 - Intro
1:20 - Emergence and emergent systems
13:20 - Evolution and its algorithms
23:40 - Theory of facilitated variation
31:30 - Neural Information vs Synthetic Information
41:30 - Evolutionary computation vs Neural networks
48:25 - Reinforcement learning
1:02:15 - Building intelligence - neural net architectures
1:07:00 - Advice and Book recommendations
Ira Pastor, ideaXme exponential health ambassador, interviews Dr. Josh Bongard, Professor in the Morphology, Evolution & Cognition Laboratory, Department of Computer Science, College of Engineering and Mathematical Sciences, University of Vermont.

Ira Pastor comments: On a recent ideaXme episode, we delved into the fascinating topics of "living architecture" and "living machines" and the principle of evolution in the built environment. Today, we are going to continue along this unique area of the life sciences and segue into the area of "living robotics."

Xenobots: If you've been paying attention to the scientific literature over the last few weeks, you may have come across the term "Xenobots" in the press, named after the African clawed frog (Xenopus laevis). Xenobots are self-healing micro-bots that are designed and programmed by a computer (via an "evolutionary algorithm") and built from the ground up using living biological cells. A Xenobot is a biological machine under 1 millimeter wide, made of heart cells (which naturally contract) and skin cells (which don't), derived from stem cells harvested from Xenopus frog embryos (an extremely important model in the world of developmental biology). A team of scientists from the University of Vermont and Tufts University recently created these novel living machines, which were capable of moving towards a target, picking up a payload, and healing themselves after being cut. This work may help increase our understanding of how complex organs are formed, for purposes of regenerative medicine, and one day Xenobots might be able to do things like safely deliver drugs inside the human body, remove artery plaques, clean up radioactive waste, collect micro-plastics in the oceans, and perhaps even help colonize and terraform planets. Xenobots can walk and swim, survive for weeks without food, work together in groups, and heal on their own and keep working.

Dr. Josh Bongard: Today I'm joined by one of the amazing members of this Xenobot team. Dr. Josh Bongard is Professor in the Morphology, Evolution & Cognition Laboratory, Department of Computer Science, College of Engineering and Mathematical Sciences, University of Vermont. Dr. Bongard completed his bachelor's degree in Computer Science at McMaster University, Canada, his M.S. in Evolutionary & Adaptive Systems at the University of Sussex, UK, his Ph.D. in Informatics at the University of Zurich, Switzerland, and a post-doc in the Computational Synthesis Laboratory at Cornell University.

Evolutionary Robotics: Of the many fascinating things that go on in his lab, Dr. Bongard's group is focused on the unique domain of evolutionary robotics. In this work, the lab aims to direct the evolution of increasingly complex, capable, and autonomous machines to perform a widening array of difficult tasks, asking the broad question: "How can we automatically design a robot with little human intervention?" This work is quite cross-disciplinary in nature and merges the disciplines of theoretical biology, embodied cognition, computational neuroscience, as well as psychology and philosophy. He is the co-author of the popular science books "How the Body Shapes the Way We Think: A New View of Intelligence" and "Designing Intelligence: Why Brains Aren't Enough."

On this show we hear from Dr. Bongard about:
His background, how he developed an interest in computer science, and how he developed a passion for the convergent domains of computers and biology.
The principles of "Evolutionary Algorithms" and "Artificial Ontogeny" in developing new organisms with AI.
How Xenobot research can inform us as to how cells work together to form intricate, complex anatomies.
Future applications of Xenobots and how they inform us about non-neural intelligence and cognition dynamics.

Credits: Ira Pastor interview video, text, and audio.
Follow Ira Pastor on Twitter: @IraSamuelPastor

If you liked this interview, be sure to check out our interview with Professor Dr. Rachel Armstrong, Professor of Experimental Architecture at the School of Architecture, Planning and Landscape, Newcastle University.

Follow ideaXme on Twitter: @ideaxm
On Instagram: @ideaxme
Find ideaXme across the internet, including on iTunes, SoundCloud, Radio Public, TuneIn Radio, iHeartRadio, Google Podcasts, Spotify and more.

ideaXme is a global podcast, creator series and mentor programme. Our mission: Move the human story forward!™ ideaXme Ltd.
Harry's guest in this episode is Massimo Buscema, director of the Semeion Research Center in Rome, Italy, and a full professor at the University of Colorado at Denver. Buscema researches and consults internationally on the theory and applications of AI, artificial neural networks, and evolutionary algorithms. The conversation focuses on AI and its applications in healthcare, and how it can enhance what we can see and uncover what we cannot. You can read a full transcript of this episode and browse all of our other episodes at glorikian.com/podcast/.

How to rate MoneyBall Medicine on iTunes with an iPhone, iPad, or iPod touch:
Launch the "Podcasts" app on your device. If you can't find this app, swipe all the way to the left on your home screen until you're on the Search page. Tap the search field at the top and type in "Podcasts." Apple's Podcasts app should show up in the search results.
Tap the Podcasts app icon, and after it opens, tap the Search field at the top, or the little magnifying glass icon in the lower right corner.
Type MoneyBall Medicine into the search field and press the Search button.
In the search results, click on the MoneyBall Medicine logo.
On the next page, scroll down until you see the Ratings & Reviews section. Below that you'll see five purple stars. Tap the stars to rate the show.
Scroll down a little farther. You'll see a purple link saying "Write a Review."
On the next screen, you'll see the stars again. You can tap them to leave a rating, if you haven't already.
In the Title field, type a summary for your review. In the Review field, type your review.
When you're finished, click Send.
That's it, you're done. Thanks!
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
My guest this week is Risto Miikkulainen, professor of computer science at UT-Austin and vice president of Research at Sentient Technologies. Risto came locked and loaded to discuss a topic that we've received a ton of requests for -- evolutionary algorithms. During our talk we discuss some of the things Sentient is working on in the financial services and retail fields, and we dig into the technology behind it, evolutionary algorithms, which is also the focus of Risto’s research at UT. I really enjoyed this interview and learned a ton, and I’m sure you will too! Notes for this show can be found at twimlai.com/talk/47.
In this episode of the SuperDataScience Podcast, I chat with Risto Miikkulainen, Vice President of Research at Sentient AI. We discuss the applications of AI across multiple fields, look at two families of AI algorithms – evolutionary algorithms and reinforcement learning algorithms – and get valuable insights into how AI is changing the employment landscape. If you enjoyed this episode, check out show notes, resources, and more at http://www.superdatascience.com/67
Tune in as the IDA Podcast puts the spotlight on evolutionary algorithms and artificial neural networks. These are cognitive technologies that are absolutely central to the development of artificial intelligence today – and impossible to overlook if we want to understand the exponentially accelerating paradigm we live in. We also zoom in on the question of whether machines can be creative, and on the important role artificial intelligence plays in a world where the amount of data and the processing power just keep growing. The podcast is produced by the Danish Society of Engineers, IDA, in collaboration with Brain Gain Group. This episode is the third in a series on future technology.

Featuring:
Sebastian Risi, Associate Professor at ITU and co-director of the research unit Robotics, Evolution and Art Laboratory (REAL): http://bit.ly/2kTEKOC
Thomas Terney, PhD in artificial intelligence, speaker and entrepreneur: http://bit.ly/2kOQlik
Host and production: Matias Seidler
Producer: Tobias Ankjær Jeppesen
Sound design: Alexander Clerici

SHOW NOTES
[00:23] IBM's Deep Blue became the first computer to beat a grandmaster, Garry Kasparov, at chess. It happened in 1997: http://bit.ly/2kIkvkB
[01:32] Link to a presentation by Henry Lieberman, MIT Media Lab, explaining the difference between symbolic (classical) artificial intelligence and sub-symbolic artificial intelligence: http://bit.ly/2k4lN7X
[02:05] For a fresh introduction to deep learning, check out WIRED's article 'Why We Need To Tame Our Algorithms Like Dogs': http://bit.ly/2kIIqQV
[02:45] You can take a closer look at the ITU research unit 'Robotics, Evolution and Arts Lab' here: http://bit.ly/2ksAxAE
[03:27] For an illustrative overview of the applications of biologically inspired algorithms, check out the tag search on Robohub.org: http://bit.ly/2knkZf1
[04:42] There are several international prizes and competitions for solving the 'General Artificial Intelligence' challenge. See, for example, this $35M one: http://bit.ly/2knrUVA
[06:32] Thomas Terney talks about the strength of the connections between neurons (or units in the network), known in English as 'weights'. Here is a thread with a range of varied, in-depth answers: http://bit.ly/2k40vMx
[08:51] Baidu is the Chinese equivalent of Google and in January 2017 hired some of Microsoft's best AI developers: http://bit.ly/2lrPuS1
[10:34] See NASA's whitepaper 'Automated Antenna Design with Evolutionary Algorithms': http://go.nasa.gov/2llsiYO
[11:32] MIT Technology Review has an interesting article proposing solutions to this problem: 'Algorithms That Learn with Less Data Could Expand AI's Power': http://bit.ly/2ksG1M2
[14:23] In 2016, Google's AlphaGo made headlines around the world when it beat Lee Sedol at 'Go', a game incomparably more complex than chess. And AlphaGo keeps on winning – just see here: http://bit.ly/2k4t7jK
[19:57] Apple is busy applying the principle of 'unsupervised learning' to the development of self-driving cars: http://bit.ly/2lplN3e
[21:23] 'Do you think computers have minds?' Sebastian Risi is not the only one who thinks that is a good question. The philosophy of mind has sought serious answers to it ever since Alan Turing formulated his famous test in 1950. It has become the focal point of a philosophical tradition whose most interesting and fully developed answers are elegantly described in The Internet Encyclopedia of Philosophy: http://bit.ly/2k4Bqw5