Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It beat the famous Strassen algorithm for matrix multiplication, a record that had stood for 56 years. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work.

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

Authors: Alexander Novikov*, Ngân Vũ*, Marvin Eisenberger*, Emilien Dupont*, Po-Sen Huang*, Adam Zsolt Wagner*, Sergey Shirobokov*, Borislav Kozlovskii*, Francisco J. R. Ruiz, Abbas Mehrabian, M. Pawan Kumar, Abigail See, Swarat Chaudhuri, George Holland, Alex Davies, Sebastian Nowozin, Pushmeet Kohli, Matej Balog* (* indicates equal contribution)

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers, and host events in Zurich. Go to https://tufalabs.ai/
***

AlphaEvolve works like a very smart, tireless programmer. It uses powerful AI language models (such as Gemini) to generate ideas for computer code, then applies an "evolutionary" process, a kind of survival of the fittest for programs: it tries out many different program ideas, automatically tests how well each one solves the problem, and uses the best ones to inspire new, even better programs.

Beyond this mathematical breakthrough, AlphaEvolve has already been used to improve real-world systems at Google, such as making their massive data centers run more efficiently and even speeding up the training of the AI models that power AlphaEvolve itself.
The discussion also covers how humans work with AlphaEvolve, the challenges of getting AI to make discoveries, and the exciting future of AI helping scientists make new discoveries. In short, AlphaEvolve is a powerful new AI tool that can invent new algorithms and solve complex problems, showing how AI can be a creative partner in science and engineering.

Guests:
Matej Balog: https://x.com/matejbalog
Alexander Novikov: https://x.com/SashaVNovikov

REFS:
MAP-Elites [Jean-Baptiste Mouret, Jeff Clune]
https://arxiv.org/abs/1504.04909
FunSearch [Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi]
https://www.nature.com/articles/s41586-023-06924-6

TOC:
[00:00:00] Introduction: AlphaEvolve's Breakthroughs, DeepMind's Lineage, and Real-World Impact
[00:12:06] Introducing AlphaEvolve: Concept, Evolutionary Algorithms, and Architecture
[00:16:56] Search Challenges: The Halting Problem and Enabling Creative Leaps
[00:23:20] Knowledge Augmentation: Self-Generated Data, Meta-Prompting, and Library Learning
[00:29:08] Matrix Multiplication Breakthrough: From Strassen to AlphaEvolve's 48 Multiplications
[00:39:11] Problem Representation: Direct Solutions, Constructors, and Search Algorithms
[00:46:06] Developer Reflections: Surprising Outcomes and Superiority over Simple LLM Sampling
[00:51:42] Algorithmic Improvement: Hill Climbing, Program Synthesis, and Intelligibility
[01:00:24] Real-World Application: Complex Evaluations and Robotics
[01:05:39] Role of LLMs & Future: Advanced Models, Recursive Self-Improvement, and Human-AI Collaboration
[01:11:22] Resource Considerations: Compute Costs of AlphaEvolve

This is a trial of posting videos on Spotify. Thoughts? Email me or chat in our Discord.
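The generate-evaluate-select loop described above can be sketched in a few lines. This is only an illustrative skeleton, not AlphaEvolve's actual implementation: the `propose` function stands in for the Gemini model suggesting code edits, and a "program" is reduced to a list of numbers scored against a known target.

```python
import random

def evolve(seed, evaluate, propose, generations=500, population_size=8, rng=None):
    """Generate-evaluate-select: keep a small population of candidate
    'programs', ask `propose` (the stand-in for the LLM) for a variant of
    the current best, score it, and retain only the fittest."""
    rng = rng or random.Random(0)
    population = [seed]
    for _ in range(generations):
        parent = max(population, key=evaluate)       # exploit the best so far
        population.append(propose(parent, rng))      # LLM-suggested variation
        population.sort(key=evaluate, reverse=True)  # survival of the fittest
        population = population[:population_size]
    return max(population, key=evaluate)

# Toy stand-ins: a 'program' is a list of integers, and fitness is closeness
# to a target; a real system would execute and benchmark actual code.
TARGET = [3, 1, 4, 1, 5]

def evaluate(program):
    return -sum(abs(a - b) for a, b in zip(program, TARGET))

def propose(parent, rng):
    child = list(parent)
    child[rng.randrange(len(child))] += rng.choice([-1, 1])  # one small edit
    return child

best = evolve([0, 0, 0, 0, 0], evaluate, propose)
```

The crucial difference in the real system is the evaluator: candidate programs are actually run and benchmarked, so only measurably better code survives.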
As soon as the last ice age glaciers melted, Indigenous people occupied this site
A recently discovered archaeological site in Saskatchewan, dated to just under 11,000 years ago, is the oldest settlement in the region by about 1,500 years. It is also evidence that Indigenous people settled there as soon as the environment could support them after the glaciers disappeared. Glenn Stuart, from the University of Saskatchewan, is one of the archaeologists working along with local Indigenous community members to preserve and study the site.

Just the right magnetic field will make sea turtles do a 'happy dance'
Researchers investigating how sea turtles navigate the vast and trackless ocean have discovered just how sensitive the reptiles' magnetic sense is: they can even use it to identify the location of food resources. While feeding loggerhead turtles in the lab, Kayla Goforth, a postdoctoral researcher at Texas A&M University, noticed that the turtles would perform a 'happy dance' when they recognized the right magnetic signature. She led this research, which was published in the journal Nature.

Intense exercise causes our bodies to belch out DNA that may reduce inflammation
Scientists were surprised to discover that the more intensely you exercise, the more certain immune cells release fragments of DNA that can form webs to trap pathogens, leading to fewer pro-inflammatory immune cells circulating in our blood. Canadian researcher Stephen Montgomery, a professor of pathology at Stanford University, said their findings suggest that circulating cell-free DNA may play a role in how exercise lowers inflammation in the body. The study was published in the journal PNAS.

An ancient Antarctic duck lived at the time of T. rex
Birds are the last surviving lineage of dinosaurs, but modern birds are surprisingly ancient, dating to before the extinction of the rest of their family.
An extremely rare, nearly intact bird skull found in Antarctica and dated to about 69 million years ago confirms this. This waterfowl had similarities to ducks and loons. Chris Torres is an assistant professor at the University of the Pacific in Stockton, California, and was part of the team that analyzed this fossil. Their research was published in the journal Nature.

Science is being transformed by the AI revolution
The stunning advances in artificial intelligence that we see with internet AI apps are just the tip of the iceberg when it comes to science. Researchers from almost every field are experimenting with this powerful new tool to diagnose disease, understand climate change, develop strategies for conservation, and discover new kinds of materials. And AI is on the threshold of being able to make discoveries all by itself. Will it put scientists out of a job?

Producer Amanda Buckiewicz spoke with:
Jeff Clune, a professor of computer science at the University of British Columbia, a Canada CIFAR AI Chair at the Vector Institute, a senior research advisor to DeepMind, and a co-author of The AI Scientist.
Allison Noble, a professor of biomedical engineering at the University of Oxford, a Foreign Secretary of the Royal Society, and chair of the Science in the Age of AI working group.
Elissa Strome, executive director of the Pan-Canadian Artificial Intelligence Strategy at CIFAR.
Cong Lu, a postdoctoral research and teaching fellow at the University of British Columbia and the Vector Institute, and a co-author of The AI Scientist.
Fred Morstatter, a research assistant professor at the University of Southern California and a principal scientist at USC's Information Sciences Institute.
AI professor Jeff Clune ruminates on open-ended evolutionary algorithms: systems designed to generate novel and interesting outcomes forever. Drawing inspiration from nature's boundless creativity, Clune and his collaborators aim to build "Darwin Complete" search spaces, in which any computable environment can be simulated. By harnessing the power of large language models and reinforcement learning, these AI agents continuously develop new skills, explore uncharted domains, and even cooperate with one another on complex tasks.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? They are hosting an event in Zurich on January 9th with the ARChitects; join if you can. Go to https://tufalabs.ai/
***

A central theme throughout Clune's work is "interestingness": an elusive quality that nudges AI agents toward genuinely original discoveries. Rather than rely on narrowly defined metrics, which often fail due to Goodhart's Law, Clune employs language models to serve as proxies for human judgment. In doing so, he ensures that "interesting" always reflects authentic novelty, opening the door to unending innovation.

Yet with these extraordinary possibilities come equally significant risks. Clune says we need AI safety measures, particularly as the technology matures into powerful, open-ended forms. Potential pitfalls include agents inadvertently causing harm, or malicious actors subverting AI's capabilities for destructive ends. To mitigate this, Clune advocates prudent governance involving democratic coalitions, regulation of cutting-edge models, and global alignment protocols.
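A classic hand-coded precursor to the LLM-as-judge idea described above is novelty search (Lehman & Stanley): instead of asking a language model whether a behaviour is interesting, score it by its distance to everything already seen, and archive only sufficiently novel candidates. The sketch below is illustrative only; behaviours are single numbers, and the distance function is a crude stand-in for the judgment Clune delegates to a language model.

```python
def novelty(behavior, archive):
    """Distance to the nearest behaviour already in the archive -- a simple
    hand-coded proxy for the 'is this new and interesting?' judgment."""
    if not archive:
        return float("inf")
    return min(abs(behavior - b) for b in archive)

def filter_interesting(candidates, threshold=1.0):
    """Novelty-search-style archiving: keep a candidate only if its behaviour
    differs enough from everything seen before, so the archive keeps
    expanding into genuinely new territory."""
    archive = []
    for candidate in candidates:
        if novelty(candidate, archive) > threshold:
            archive.append(candidate)
    return archive

# Behaviours here are single numbers purely for illustration: near-duplicates
# of archived behaviours are rejected, genuinely new ones are kept.
archive = filter_interesting([0.0, 0.1, 5.0, 5.2, 9.9, 0.05])
```

Hand-coded distance metrics like this are exactly the kind of narrow proxy that Goodhart's Law punishes, which is the motivation for replacing them with a language model's broader sense of what counts as new.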
Jeff Clune:
https://x.com/jeffclune
http://jeffclune.com/

(Interviewer: Tim Scarfe)

TOC:
1. Introduction
[00:00:00] 1.1 Overview and Opening Thoughts
2. Sponsorship
[00:03:00] 2.1 TufaAI Labs and CentML
3. Evolutionary AI Foundations
[00:04:12] 3.1 Open-Ended Algorithm Development and Abstraction Approaches
[00:07:56] 3.2 Novel Intelligence Forms and Serendipitous Discovery
[00:11:46] 3.3 Frontier Models and the 'Interestingness' Problem
[00:30:36] 3.4 Darwin Complete Systems and Evolutionary Search Spaces
4. System Architecture and Learning
[00:37:35] 4.1 Code Generation vs Neural Networks Comparison
[00:41:04] 4.2 Thought Cloning and Behavioral Learning Systems
[00:47:00] 4.3 Language Emergence in AI Systems
[00:50:23] 4.4 AI Interpretability and Safety Monitoring Techniques
5. AI Safety and Governance
[00:53:56] 5.1 Language Model Consistency and Belief Systems
[00:57:00] 5.2 AI Safety Challenges and Alignment Limitations
[01:02:07] 5.3 Open Source AI Development and Value Alignment
[01:08:19] 5.4 Global AI Governance and Development Control
6. Advanced AI Systems and Evolution
[01:16:55] 6.1 Agent Systems and Performance Evaluation
[01:22:45] 6.2 Continuous Learning Challenges and In-Context Solutions
[01:26:46] 6.3 Evolution Algorithms and Environment Generation
[01:35:36] 6.4 Evolutionary Biology Insights and Experiments
[01:48:08] 6.5 Personal Journey from Philosophy to AI Research

Shownotes: We craft detailed show notes for each episode, with a high-quality transcript, references, and the best parts bolded.
https://www.dropbox.com/scl/fi/fz43pdoc5wq5jh7vsnujl/JEFFCLUNE.pdf?rlkey=uu0e70ix9zo6g5xn6amykffpm&st=k2scxteu&dl=0
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper in Science: Managing extreme AI risks amid rapid progress, published by JanB on May 23, 2024 on The AI Alignment Forum.
https://www.science.org/doi/10.1126/science.adn0117

Authors: Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner*, Sören Mindermann*

Abstract: Artificial intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI's impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI, there is a lack of consensus about how to manage them. Society's response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems. Drawing on lessons learned from other safety-critical technologies, we outline a comprehensive plan that combines technical research and development with proactive, adaptive governance mechanisms for a more commensurate preparation.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
AI Generating Algorithms, learning to play Minecraft with Video PreTraining (VPT), Go-Explore for hard exploration, POET and open-endedness, AI-GAs and ChatGPT, AGI predictions, and lots more! Professor Jeff Clune is Associate Professor of Computer Science at the University of British Columbia, a Canada CIFAR AI Chair and Faculty Member at the Vector Institute, and Senior Research Advisor at DeepMind.

Featured References:
Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos [Blog Post]
Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, Jeff Clune
Robots that can adapt like animals
Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret
Illuminating search spaces by mapping elites
Jean-Baptiste Mouret, Jeff Clune
Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions
Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley
Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions
Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley
First return, then explore
Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune
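One of the referenced papers, "Illuminating search spaces by mapping elites," introduces MAP-Elites, whose core loop is small enough to sketch. The version below is a minimal illustration on a made-up one-dimensional toy domain, not the paper's experimental setup: keep the best solution found in each behavioural niche, then mutate randomly chosen elites to fill and improve the whole map.

```python
import random

def map_elites(evaluate, describe, mutate, seed, iterations=2000, rng=None):
    """MAP-Elites: keep the single best ('elite') solution found in each
    behavioural niche, then repeatedly mutate a randomly chosen elite,
    illuminating the whole search space rather than chasing one optimum."""
    rng = rng or random.Random(0)
    elites = {describe(seed): seed}
    for _ in range(iterations):
        parent = rng.choice(list(elites.values()))
        child = mutate(parent, rng)
        niche = describe(child)
        if niche not in elites or evaluate(child) > evaluate(elites[niche]):
            elites[niche] = child        # new niche reached, or a better elite
    return elites

# Toy domain (invented for illustration): a 'solution' is a float in [0, 10);
# its niche is the integer part, and fitness rewards the centre of the niche.
def describe(x):
    return int(x)

def evaluate(x):
    return -abs((x % 1.0) - 0.5)

def mutate(x, rng):
    return min(max(x + rng.uniform(-2.0, 2.0), 0.0), 9.99)

elites = map_elites(evaluate, describe, mutate, seed=5.0)
```

The payoff is a map of diverse, locally optimal solutions (here, one per niche), which is what lets the damaged robots in "Robots that can adapt like animals" fall back on a pre-computed repertoire of alternative gaits.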
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Are AI-generating algorithms the path to artificial general intelligence (AGI)? Today we're joined by Jeff Clune, an associate professor of computer science at the University of British Columbia and a faculty member at the Vector Institute. In our conversation with Jeff, we discuss the broad, ambitious goal of the AI field, artificial general intelligence; where we are on the path to achieving it; and his opinion on what we should be doing to get there, specifically focusing on AI-generating algorithms. With the goal of creating open-ended algorithms that can learn forever, Jeff shares his three pillars of an AI-GA: meta-learning architectures, meta-learning algorithms, and auto-generated learning environments. Finally, we discuss the inherent safety issues with these learning algorithms, Jeff's thoughts on how to combat them, and what the not-so-distant future holds for this area of research. The complete show notes for this episode can be found at twimlai.com/go/602.
In episode 41 of The Gradient Podcast, Andrey Kurenkov speaks to Professor Jeff Clune.

Jeff is an Associate Professor of Computer Science at the University of British Columbia and a Faculty Member of the Vector Institute. Previously, he was a Research Team Leader at OpenAI; before that, a Senior Research Manager and founding member of Uber AI Labs; and prior to that, an Associate Professor of Computer Science at the University of Wyoming.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

The Gradient is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Outline:
(00:00) Intro
(01:05) Path into AI
(08:05) Studying biology with simulations
(10:30) Overview of genetic algorithms
(14:00) Evolving gaits with genetic algorithms
(20:00) Quality-Diversity Algorithms
(27:00) Evolving Soft Robots
(32:15) Genetic algorithms for studying Evolution
(39:30) Modularity for Catastrophic Forgetting
(45:15) Curiosity for Learning Diverse Skills
(51:15) Evolving Environments
(58:03) The Surprising Creativity of Digital Evolution
(1:04:28) Hobbies Outside of Research
(1:07:25) Outro

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Timestamps:
(02:14) Alberto briefly shared his upbringing and education at the Bayes Business School in London.
(04:01) Alberto shared key learnings from his first entrepreneurial stint at 19 by developing a 3D printing product for ed-tech.
(07:48) Alberto described his overall experience participating in Singularity University's Graduate Studies Program at the NASA Ames Research Park under a Google-funded scholarship in 2015.
(12:52) Alberto helped develop the Aipoly product to aid the blind and visually impaired.
(17:38) Alberto showed his enthusiasm for federated learning applications within mobile devices.
(19:53) Alberto talked about the dichotomy between capitalism and social good in entrepreneurship.
(22:29) Alberto shared the backstory behind the founding of V7 Labs.
(26:40) Alberto discussed the comparison between biological and artificial neural networks.
(28:02) Alberto emphasized the importance of having a good co-founder.
(30:27) Alberto dissected the notable features developed within V7's Annotation capability.
(33:37) Alberto went over things to look for in a video labeling tool, citing his blog post.
(37:21) Alberto unpacked key principles behind V7's robust Dataset Management tool.
(40:53) Alberto walked through the powerful capabilities of V7 Neurons that power its Model Automation tool.
(43:33) Alberto shared fundraising advice for founders seeking the right investors for their startups.
(46:07) Alberto shared valuable hiring and culture-setting lessons learned at V7.
(50:12) Alberto emphasized the importance of not losing sight of the 'ideal customer' for young founders in the AI space.
(53:01) Alberto shared the hurdles his team has to go through while finding new customers in new industries.
(55:10) Alberto walked through labeling challenges dealing with medical imaging datasets.
(57:35) Alberto discussed outreach initiatives that helped drive V7's organic growth.
(59:49) Alberto mentioned the importance of collaboration between companies within the MLOps ecosystem.
(01:02:01) Alberto touched on the scientific hunger of Europe regarding the adoption of AI technologies.
(01:03:49) Alberto briefly mentioned what public recognition means to him in the pursuit of democratizing AI for the world.
(01:06:07) Closing segment.

Alberto's Contact Info:
Website
LinkedIn
Twitter
Medium

V7's Resources:
Website
Software 2.0 Blog
Academy Tutorials
Documentation
LinkedIn | Twitter

Mentioned Content:
Articles:
"7 Things We Looked for in a Video Labeling Tool" (Aug 2020)
"The Biggest Mistake I've Ever Made: Losing Sight of the Ideal Customer" (March 2021)
Talks:
"An AI Narrator for the Blind" (TEDx Geneva 2016)
"If The Blind Could See" (TEDx Melbourne 2018)
People:
Geoff Hinton (for rethinking the ML field fundamentally)
Chelsea Finn (for her work on meta-learning)
Jeff Clune (for making agents that work at scale in the real world)
Book:
"Start With Why" (by Simon Sinek)
Notes:
V7 is hiring across all departments. Take a look at their careers page for the openings!

About the show:
Datacast features long-form, in-depth conversations with practitioners and researchers in the data community to walk through their professional journeys and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths, from scientists and analysts to founders and investors, to analyze the case for using data in the real world and extract their mental models ("the WHY and the HOW") behind their pursuits. Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe.

Datacast is produced and edited by James Le. Get in touch with feedback or guest suggestions by emailing khanhle.1013@gmail.com.

Subscribe by searching for Datacast wherever you get podcasts, or click one of the links below:
Listen on Spotify
Listen on Apple Podcasts
Listen on Google Podcasts

If you're new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.
Show notes:
- On the Measure of Intelligence by François Chollet - Part 1: Foundations (Paper Explained) [YouTube]: https://www.youtube.com/watch?v=3_qGr...
- [2108.07258] On the Opportunities and Risks of Foundation Models: https://arxiv.org/abs/2108.07258
- [2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks: https://arxiv.org/abs/2005.11401
- Negative Data Augmentation: https://arxiv.org/abs/2102.05113
- [2005.04118] Beyond Accuracy: Behavioral Testing of NLP Models with CheckList: https://arxiv.org/abs/2005.04118
- Symbolic AI vs Deep Learning battle: https://www.technologyreview.com/2020...
- Dense Passage Retrieval for Open-Domain Question Answering: https://arxiv.org/abs/2004.04906
- Data Augmentation Can Improve Robustness: https://arxiv.org/abs/2111.05328
- Contrastive Loss Explained (by Brian Williams, Towards Data Science): https://towardsdatascience.com/contra...
- Keras code examples: https://keras.io/examples/
- https://you.com/ -- new web search engine by Richard Socher
- The Book of Why: The New Science of Cause and Effect (by Judea Pearl and Dana Mackenzie): https://www.amazon.com/Book-Why-Scien...
- Chelsea Finn: https://twitter.com/chelseabfinn
- Jeff Clune: https://twitter.com/jeffclune
- Michael Bronstein (Geometric Deep Learning): https://twitter.com/mmbronstein, https://arxiv.org/abs/2104.13478
- Connor's Twitter: https://twitter.com/CShorten30
- Dmitry's Twitter: https://twitter.com/DmitryKan
Evolutionary algorithms can generate surprising, effective solutions to our problems. Evolutionary algorithms are often let loose within a simulated environment: the algorithm is given a function to optimize, and the engineers expect it to evolve a solution that optimizes the objective function within the constraints of the simulated environment. But sometimes these algorithms produce solutions their designers never anticipated. The post Digital Evolution with Joel Lehman, Dusan Misevic, and Jeff Clune appeared first on Software Engineering Daily.
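The setup described above, an objective function plus selection, crossover, and mutation, can be sketched as a minimal genetic algorithm on the classic OneMax toy problem (maximise the number of 1-bits in a bitstring). Everything here is illustrative, not code from the episode:

```python
import random

def onemax_ga(bits=20, pop_size=30, generations=80, rng=None):
    """A minimal genetic algorithm on the OneMax toy objective, using
    tournament selection, one-point crossover, and per-bit mutation."""
    rng = rng or random.Random(0)
    fitness = sum                                    # the objective function
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)    # tournament selection
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, bits)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(bits):                        # per-bit mutation
                if rng.random() < 1.0 / bits:
                    child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = onemax_ga()
```

On OneMax the evolved answer is unsurprising; the surprises the episode discusses arise when the objective or the simulated environment leaves room for solutions the engineers never imagined.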
We're becoming more reliant on robots to assist in hostile zones, from extinguishing forest fires to bomb disposal to decontaminating nuclear facilities. But whereas humans can quickly adapt to injuries, current robots cannot 'think outside the box' to find a new behaviour when they get damaged. Tracey Logan speaks to computer scientist Jeff Clune, who's developed a new way to allow robots to adapt to damage in less than two minutes. It will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury.
This month sees the end of NASA's MESSENGER mission to Mercury. It's been the first mission to the sun's closest planet since Mariner 10 flew by in the mid-1970s. Lucie Green speaks to geologist Professor Pete Schultz of Brown University about the orbiter's four-year surveillance and how new observations of this underexplored world are shedding light on the planet's mysterious dark cratered surface.

Virtual experiences are coming closer and closer to reality as both sound and vision, and even smell, become convincing. But without the sense of touch you'll never have the full experience. A team at Bristol University has now managed to generate the feeling of pressure projected directly onto your bare, empty hands. Its system enables you to feel invisible interfaces, textures and virtual objects through the use of ultrasound. Roland Pease gets a hands-on experience.

One of the biggest challenges in artificial intelligence is conquering a computer's so-called "catastrophic forgetting": as soon as a new skill is learned, others get crowded out, which makes artificial computer brains one-trick ponies. Jeff Clune of the University of Wyoming directs the Evolving Artificial Intelligence Lab and has tested the idea that computer brains could evolve to work in the same way as human brains, in a modular fashion. He shows how, by doing so, it's possible to learn more and forget less.

And there's a visit to the Ion Beam Centre at the University of Surrey where, in conjunction with a project to restore the Rosslyn Chapel near Edinburgh, scientists have provided a new development in stained glass conservation: scrutinising the glass contents at the subatomic level using a narrow beam of accelerated charged particles, to literally decode the exquisite features lost to the naked eye. Lucie Green caught up with the Centre's director, Roger Webb.

Producer: Adrian Washbourne
My guest this week is Anh Nguyen, a PhD student at the University of Wyoming working in the Evolving AI Lab. The episode discusses the paper Deep Neural Networks are Easily Fooled [pdf] by Anh Nguyen, Jason Yosinski, and Jeff Clune. It describes a process for creating images that a trained deep neural network will misclassify. If you have a deep neural network that has been trained to recognize certain types of objects in images, these "fooling" images can be constructed in a way that causes the network to misclassify them. To a human observer, these fooling images often bear no resemblance whatsoever to the assigned label. Previous work had shown that some images which appear to us as unrecognizable white noise can fool a deep neural network. This paper extends that result, showing that abstract images of shapes and colors, many of which have form (just not the one the network thinks), can also trick the network.
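The core construction is easy to sketch: treat the trained network as a black box and search for an input that maximizes its confidence in a chosen label, with nothing forcing the input to resemble that label. The toy below is purely illustrative, a made-up two-class linear "classifier" and a hill climber standing in for the deep network and the evolutionary algorithms used in the paper:

```python
import math
import random

def confidence(image, weights):
    """Softmax confidence of a toy linear 'classifier' -- a stand-in for the
    trained deep network attacked in the paper."""
    scores = [sum(w * p for w, p in zip(ws, image)) for ws in weights]
    total = sum(math.exp(s) for s in scores)
    return [math.exp(s) / total for s in scores]

def evolve_fooling_image(weights, target, pixels=24, steps=3000, rng=None):
    """Hill-climb a random image until the classifier is highly confident it
    shows the `target` class. Nothing pushes the image to *look* like that
    class, which is why fooling images appear as noise or abstract patterns."""
    rng = rng or random.Random(0)
    image = [rng.random() for _ in range(pixels)]
    best = confidence(image, weights)[target]
    for _ in range(steps):
        candidate = list(image)
        i = rng.randrange(pixels)
        candidate[i] = min(max(candidate[i] + rng.uniform(-0.3, 0.3), 0.0), 1.0)
        score = confidence(candidate, weights)[target]
        if score > best:              # keep any mutation that raises confidence
            image, best = candidate, score
    return image, best

# Hypothetical fixed two-class linear classifier over 24 "pixels".
weight_rng = random.Random(1)
weights = [[weight_rng.uniform(-1, 1) for _ in range(24)] for _ in range(2)]
image, conf = evolve_fooling_image(weights, target=0)
```

The paper's actual search used evolutionary algorithms (including indirect, pattern-producing encodings) against large image classifiers, but the logic is the same: optimizing confidence alone finds inputs the network loves and humans cannot recognize.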
Tom Barbalet and Jeff Clune discuss NEAT, Avida, altruism, and MSU's Digital Evolution Laboratory. This is the live internet radio format for the podcast, airing at 8pm Pacific every other Friday. For more information: http://www.biotacast.org/
Tom Barbalet and Jeff Clune discuss NEAT, Avida, altruism, and MSU's Digital Evolution Laboratory. This is the live internet radio format for the podcast, airing at 8pm Pacific every other Friday. For more information: http://www.biota.org/podcast/