Podcasts about machine intelligence

Intelligence demonstrated by machines

  • 201 podcasts
  • 324 episodes
  • 41m average duration
  • 1 new episode per month
  • Latest episode: May 2, 2025
[Popularity chart for "machine intelligence" episodes, 2017–2024]


Best podcasts about machine intelligence

Latest podcast episodes about machine intelligence

The Defense Tech Underground
011: Dr. Craig Martell - The Pentagon's Chief Digital and Artificial Intelligence Office

The Defense Tech Underground

May 2, 2025 · 67:26


Dr. Craig Martell served as the Chief Digital and Artificial Intelligence Officer for the DOD from June 2022 through April 2024. While at the Pentagon, he helped the Department of Defense modernize their approach to employing software. He now works as the Chief AI Officer for Cohesity, a cybersecurity startup that helps companies secure, analyze, and manage their data. In this episode of the Defense Tech Underground, we discuss Dr. Martell's path from teaching computer science to leading a major Pentagon office, his early career in big tech at the dawn of AI, his concerns about the use of generative AI in warfare, and how tech startups can be effective by innovating alongside warfighters. This episode is hosted by Jeff Phaneuf and Andrew Paulmeno.   Full Bio:  Dr. Craig Martell is the former Chief Digital and Artificial Intelligence Officer for the United States Department of Defense.  As Chief AI Officer of Cohesity, Craig shapes Cohesity's technical vision—and defines and executes a strategic roadmap for the company's future. Craig brings extensive industry and public sector experience and expertise in artificial intelligence (AI) and machine learning to his role. Most recently, as the first Chief Digital and Artificial Intelligence Officer (CDAO) for the U.S. Department of Defense, Craig accelerated the adoption of data, analytics, digital solutions, and AI functions. Prior to the DoD, he held senior roles at several leading technology companies. He served as Head of Machine Learning at Lyft, Head of Machine Intelligence at Dropbox, and was a leader of numerous AI teams and initiatives at LinkedIn. Craig was also a tenured computer science professor at the Naval Postgraduate School specializing in natural language processing. He holds a Ph.D. in Computer Science from the University of Pennsylvania.  

Chain Reaction
Travis Good: Machine Intelligence as a new world currency: facing down OpenAI with Ambient, a hyperscaled decentralized PoW-powered alternative

Chain Reaction

Apr 7, 2025 · 91:23


Join Tom Shaughnessy as he hosts Travis Good, CEO and co-founder of Ambient, for a deep dive into the world's first useful proof-of-work blockchain powered by AI. Fresh out of stealth, Ambient reimagines the intersection of crypto and AI by creating a decentralized network where mining secures the chain through verified AI inference on a 600B+ parameter model.

No Such Thing: K12 Education in the Digital Age
The Learner's Apprentice: AI and the Amplification of Human Creativity

No Such Thing: K12 Education in the Digital Age

Apr 3, 2025 · 61:23


Sylvia Martinez was an aerospace engineer before becoming an educational software producer and vice president of a video game company. She spent a decade as the President of Generation YES, the groundbreaking non-profit that provides educators with the tools necessary to place students in leadership roles in their schools and communities. In addition to leading workshops, Sylvia delights and challenges audiences as a keynote speaker at major conferences around the world. She brings her real-world experience in highly innovative work environments to learning organizations that wish to change STEM education to be more inclusive, effective, and engaging. Sylvia is co-author of Invent To Learn: Making, Tinkering, and Engineering in the Classroom, often called the “bible” of the classroom maker movement. She runs the book publishing arm of CMK Futures, Constructing Modern Knowledge Press, to continue to publish books about creative education by educators.

Ken Kahn has been interested in AI and education for 50 years. His 1977 paper "Three interactions between AI and education," in E. Elcock and D. Michie, editors, Machine Intelligence 8: Machine Representations of Knowledge, may be among the first publications on the topic. He received his doctorate from the MIT AI Lab in 1979. He designed and implemented ToonTalk, a programming language for children that looks and feels like a video game. He has developed a large collection of AI programming resources for school students (https://ecraft2learn.github.io/ai/). He recently retired as a senior researcher from the University of Oxford.

Links:
https://constructingmodernknowledge.com/about-the-cmk-hosts/
https://sylviamartinez.com
https://www.linkedin.com/posts/garystager_ken-kahn-speaks-with-sylvia-martinez-about-activity-7303865110035341313-BcUl
https://uk.linkedin.com/in/ken-kahn-997a225

Hosted on Acast. See acast.com/privacy for more information.

The MAD Podcast with Matt Turck
Beyond Brute Force: Chollet & Knoop on ARC AGI 2, the Benchmark Breaking LLMs and the Search for True Machine Intelligence

The MAD Podcast with Matt Turck

Apr 3, 2025 · 60:45


In this fascinating episode, we dive deep into the race towards true AI intelligence, AGI benchmarks, test-time adaptation, and program synthesis with star AI researcher (and philosopher) Francois Chollet, creator of Keras and the ARC AGI benchmark, and Mike Knoop, co-founder of Zapier and now co-founder with Francois of both the ARC Prize and the research lab Ndea. With the launch of ARC Prize 2025 and ARC-AGI 2, they explain why existing LLMs fall short on true intelligence tests, how new models like O3 mark a step change in capabilities, and what it will really take to reach AGI. We cover everything from the technical evolution of ARC 1 to ARC 2, the shift toward test-time reasoning, and the role of program synthesis as a foundation for more general intelligence. The conversation also explores the philosophical underpinnings of intelligence, the structure of the ARC Prize, and the motivation behind launching Ndea, a new AGI research lab that aims to build a "factory for rapid scientific advancement." Whether you're deep in the AI research trenches or just fascinated by where this is all headed, this episode offers clarity and inspiration.

Ndea: Website - https://ndea.com | X/Twitter - https://x.com/ndea
ARC Prize: Website - https://arcprize.org | X/Twitter - https://x.com/arcprize
François Chollet: LinkedIn - https://www.linkedin.com/in/fchollet | X/Twitter - https://x.com/fchollet
Mike Knoop: X/Twitter - https://x.com/mikeknoop
FIRSTMARK: Website - https://firstmark.com | X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director): LinkedIn - https://www.linkedin.com/in/turck/ | X/Twitter - https://twitter.com/mattturck

(00:00) Intro
(01:05) Introduction to ARC Prize 2025 and ARC-AGI 2
(02:07) What is ARC and how it differs from other AI benchmarks
(02:54) Why current models struggle with fluid intelligence
(03:52) Shift from static LLMs to test-time adaptation
(04:19) What ARC measures vs. traditional benchmarks
(07:52) Limitations of brute-force scaling in LLMs
(13:31) Defining intelligence: adaptation and efficiency
(16:19) How O3 achieved a massive leap in ARC performance
(20:35) Speculation on O3's architecture and test-time search
(22:48) Program synthesis: what it is and why it matters
(28:28) Combining LLMs with search and synthesis techniques
(34:57) The ARC Prize structure: efficiency track, private vs. public
(42:03) Open source as a requirement for progress
(44:59) What's new in ARC-AGI 2 and human benchmark testing
(48:14) Capabilities ARC-AGI 2 is designed to test
(49:21) When will ARC-AGI 2 be saturated? AGI timelines
(52:25) Founding of NDEA and why now
(54:19) Vision beyond AGI: a factory for scientific advancement
(56:40) What NDEA is building and why it's different from LLM labs
(58:32) Hiring and remote-first culture at NDEA
(59:52) Closing thoughts and the future of AI research

Crazy Wisdom
Episode #438: What If AI Is Just the Next Political Revolution?

Crazy Wisdom

Feb 24, 2025 · 55:00


On this episode of Crazy Wisdom, host Stewart Alsop speaks with Ivan Vendrov for a deep and thought-provoking conversation covering AI, intelligence, societal shifts, and the future of human-machine interaction. They explore the "bitter lesson" of AI—that scale and compute ultimately win—while discussing whether progress is stalling and what bottlenecks remain. The conversation expands into technology's impact on democracy, the centralization of power, the shifting role of the state, and even the mythology needed to make sense of our accelerating world. You can find more of Ivan's work at nothinghuman.substack.com or follow him on Twitter at @IvanVendrov. Check out this GPT we trained on the conversation!

Timestamps:
00:00 Introduction and Setting
00:21 The Bitter Lesson in AI
02:03 Challenges in AI Data and Infrastructure
04:03 The Role of User Experience in AI Adoption
08:47 Evaluating Intelligence and Divergent Thinking
10:09 The Future of AI and Society
18:01 The Role of Big Tech in AI Development
24:59 Humanism and the Future of Intelligence
29:27 Exploring Kafka and Tolkien's Relevance
29:50 Tolkien's Insights on Machine Intelligence
30:06 Samuel Butler and Machine Sovereignty
31:03 Historical Fascism and Machine Intelligence
31:44 The Future of AI and Biotech
32:56 Voice as the Ultimate Human-Computer Interface
36:39 Social Interfaces and Language Models
39:53 Javier Milei and Political Shifts in Argentina
50:16 The State of Society in the U.S.
52:10 Concluding Thoughts on Future Prospects

Key Insights:

The Bitter Lesson Still Holds, but AI Faces Bottlenecks – Ivan Vendrov reinforces Rich Sutton's "bitter lesson" that AI progress is primarily driven by scaling compute and data rather than human-designed structures. While this principle still applies, AI progress has slowed due to bottlenecks in high-quality language data and GPU availability. This suggests that while AI remains on an exponential trajectory, the next major leaps may come from new forms of data, such as video and images, or advancements in hardware infrastructure.

The Future of AI Is Centralization and Fragmentation at the Same Time – The conversation highlights how AI development is pulling in two opposing directions. On one hand, large-scale AI models require immense computational resources and vast amounts of data, leading to greater centralization in the hands of Big Tech and governments. On the other hand, open-source AI, encryption, and decentralized computing are creating new opportunities for individuals and small communities to harness AI for their own purposes. The long-term outcome is likely to be a complex blend of both centralized and decentralized AI ecosystems.

User Interfaces Are a Major Limiting Factor for AI Adoption – Despite the power of AI models like GPT-4, their real-world impact is constrained by poor user experience and integration. Vendrov suggests that AI has created a "UX overhang," where the intelligence exists but is not yet effectively integrated into daily workflows. Historically, technological revolutions take time to diffuse, as seen with the dot-com boom, and the current AI moment may be similar—where the intelligence exists but society has yet to adapt to using it effectively.

Machine Intelligence Will Radically Reshape Cities and Social Structures – Vendrov speculates that the future will see the rise of highly concentrated AI-powered hubs—akin to "mile by mile by mile" cubes of data centers—where the majority of economic activity and decision-making takes place. This could create a stark divide between AI-driven cities and rural or off-grid communities that choose to opt out. He draws a parallel to Robin Hanson's Age of Em and suggests that those who best serve AI systems will hold power, while others may be marginalized or reduced to mere spectators in an AI-driven world.

The Enlightenment's Individualism Is Being Challenged by AI and Collective Intelligence – The discussion touches on how Western civilization's emphasis on the individual may no longer align with the realities of intelligence and decision-making in an AI-driven era. Vendrov argues that intelligence is inherently collective—what matters is not individual brilliance but the ability to recognize and leverage diverse perspectives. This contradicts the traditional idea of intelligence as a singular, personal trait and suggests a need for new frameworks that incorporate AI into human networks in more effective ways.

Javier Milei's Libertarian Populism Reflects a Global Trend Toward Radical Experimentation – The rise of Argentina's President Javier Milei exemplifies how economic desperation can drive societies toward bold, unconventional leaders. Vendrov and Alsop discuss how Milei's appeal comes not just from his radical libertarianism but also from his blunt honesty and willingness to challenge entrenched power structures. His movement, however, raises deeper questions about whether libertarianism alone can provide a stable social foundation, or if voluntary cooperation and civil society must be explicitly cultivated to prevent libertarian ideals from collapsing into chaos.

AI, Mythology, and the Need for New Narratives – The conversation closes with a reflection on the power of mythology in shaping human understanding of technological change. Vendrov suggests that as AI reshapes the world, new myths will be needed to make sense of it—perhaps similar to Tolkien's elves fading as the age of men begins. He sees AI as part of an inevitable progression, where human intelligence gives way to something greater, but argues that this transition must be handled with care. The stories we tell about AI will shape whether we resist, collaborate, or simply fade into irrelevance in the face of machine intelligence.

unSILOed with Greg LaBlanc
506. From Human Logic to Machine Intelligence: Rethinking Decision-Making with Kartik Hosanagar

unSILOed with Greg LaBlanc

Jan 29, 2025 · 54:57


The world of decision-making is now dominated by algorithms and automation. But how much has the AI really changed? Haven't, on some level, humans always thought in algorithmic terms? Kartik Hosanagar is a professor of technology at The Wharton School at The University of Pennsylvania. His book, A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control explores how algorithms and AI are increasingly influencing our daily decisions and society, and proposes ways for individuals and organizations to maintain control in this algorithmic world.Kartik and Greg discuss the integration of AI in decision-making, the differences and similarities of human based algorithmic thinking and AI based algorithmic thinking, the significance of AI literacy, and the future of creativity with AI. *unSILOed Podcast is produced by University FM.*Show Links:Recommended Resources:Herbert A. SimonPedro Domingos“At UPS, the Algorithm Is the Driver” | The Wall Street Journal“(Re)Introducing the AI Bill of Rights: An AI Governance Proposal” by Kartik HosanagarGuest Profile:Faculty Profile at The Wharton School of the University of PennsylvaniaKartik Hosanagar's SubstackProfessional Profile on LinkedInHis Work:A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in ControlEpisode Quotes:What's a good system design for AI?43:02: A good system design for AI systems, would be when there's deviation from the recommended decision to have some feedback loop. It's like in a music recommendation system, and Spotify Discover Weekly or any of these other systems where a recommendation comes in; ideally, you want some feedback on did this person like the song or not. And if there's a way to get that feedback, whether you know one way is it's an explicit feedback thumbs up, thumbs down, sometimes it's implicit; they just skipped it, or they didn't finish the song, they just left it halfway through, or something like that. But you need some way to get that feedback, and that helps the system get better over time.At the end of the day, humans shape the future of AI12:43: This view that it's all automation and we'll have mass human replacement by AI, I think, at the end of the day, we shape that outcome. We need to be actively involved in shaping that future where AI is empowering us and augmenting our work. And we design these human-AI systems in a more deliberate manner.On driving trust in algorithmic systems36:08: What drives trust in an algorithmic system shows that transparency and user control are two extremely important variables. Of course, you care about things like how accurate or good that system is. Those things, of course, matter. But transparency and trust are interesting. So, in transparency, the idea that you have a system making decisions for you or about you, but you have no clue about how the system works, is disturbing for people. And we've seen ample evidence that people reject that system.

StarTalk Radio
Is Consciousness Everywhere? With Anil Seth

StarTalk Radio

Jan 24, 2025 · 50:50


Are we on the brink of merging with machines? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly dive into the mysteries of consciousness versus intelligence, panpsychism, and AI with neuroscientist and author Anil Seth.NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here:https://startalkmedia.com/show/is-consciousness-everywhere-with-anil-seth/Thanks to our Patrons James Boothe, Vicken Serpakian, John Webb, Doctor Pants, Greg Gralenski, Lost_AI, Bob Lester, kim christensen, Micheal Gannon, Aaron Rosenberg, Shai Kr, Kyle Bullock, JyinxTV, James Myers, victor recabarren, David Pederson, Ted McSheehy, Terena, Tracy Sheckells, Groovemaster24, Sheedrealmusic, David Amicucci, Brian Ridge, M Ranger, Peter Ackerman, Mars Colony AI, DonAlan, Harry Sørensen, G Anthony, Muhammad Umer, and Joshua MacDonald for supporting us this week. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.

Engines of Our Ingenuity
The Engines of our Ingenuity 2544: Face Recognition

Engines of Our Ingenuity

Jan 14, 2025 · 3:48


Episode: 2544 How humans and computers recognize faces.  Today, UH math professor Krešo Josić recognizes your face.

RadicalxChange(s)
Joe Edelman: Co-Founder of Meaning Alignment Institute

RadicalxChange(s)

Dec 6, 2024 · 81:45


What happens when artificial intelligence starts weighing in on our moral decisions? Matt Prewitt is joined by Meaning Alignment Institute co-founder Joe Edelman to explore this thought-provoking territory in examining how AI is already shaping our daily experiences and values through social media algorithms. They explore the tools developed to help individuals negotiate their values and the implications of AI in moral reasoning – venturing into compelling questions about human-AI symbiosis, the nature of meaningful experiences, and whether machines can truly understand what matters to us. For anyone intrigued by the future of human consciousness and decision-making in an AI-integrated world, this discussion opens up fascinating possibilities – and potential pitfalls – we may not have considered.Links & References: References:CouchSurfing - Wikipedia | CouchSurfing.org | WebsiteTristan Harris: How a handful of tech companies control billions of minds every day | TED TalkCenter for Humane Technology | WebsiteMEANING ALIGNMENT INSTITUTE | WebsiteReplika - AI Girlfriend/BoyfriendWill AI Improve Exponentially At Value Judgments? - by Matt Prewitt | RadicalxChangeMoral Realism (Stanford Encyclopedia of Philosophy)Summa Theologica - WikipediaWhen Generative AI Refuses To Answer Questions, AI Ethics And AI Law Get Deeply Worried | AI RefusalsAmanda Askell: The 100 Most Influential People in AI 2024 | TIME | Amanda Askells' work at AnthropicOvercoming Epistemology by Charles TaylorGod, Beauty, and Symmetry in Science - Catholic Stand | Thomas Aquinas on symmetryFriedrich Hayek - Wikipedia | “Hayekian”Eliezer Yudkowsky - Wikipedia | “AI policy people, especially in this kind Yudkowskyian scene”Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources | Resource rational (cognitive science term)Papers & posts mentioned[2404.10636] What are human values, and how do we align AI to them? | Paper by Oliver Klingefjord, Ryan Lowe, Joe EdelmanModel Integrity - by Joe Edelman and Oliver Klingefjord | Meaning Alignment Institute SubstackBios:Joe Edelman is a philosopher, sociologist, and entrepreneur whose work spans from theoretical philosophy to practical applications in technology and governance. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures.Joe's Social Links:Meaning Alignment Institute | WebsiteMeaning Alignment Institute (@meaningaligned) / XJoe Edelman (@edelwax) / XMatt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.Matt's Social Links:ᴍᴀᴛᴛ ᴘʀᴇᴡɪᴛᴛ (@m_t_prewitt) / X Connect with RadicalxChange Foundation:RadicalxChange Website@RadxChange | TwitterRxC | YouTubeRxC | InstagramRxC | LinkedInJoin the conversation on Discord.Credits:Produced by G. Angela Corpus.Co-Produced, Edited, Narrated, and Audio Engineered by Aaron Benavides.Executive Produced by G. Angela Corpus and Matt Prewitt.Intro/Outro music by MagnusMoone, “Wind in the Willows,” is licensed under an Attribution-NonCommercial-ShareAlike 3.0 International License (CC BY-NC-SA 3.0)

Demystifying Science
Machine Intelligence and the End of History - Jeffrey Ladish, Palisades Research - DS Pod #301

Demystifying Science

Nov 22, 2024 · 149:52


Jeffrey Ladish is the director of Palisades Research, an AI safety organization based in the San Francisco Bay Area. Our previous conversations about the dangers of AI left us insufficiently concerned. Ladish takes up the mantle of trying to convince us that there's something worth worrying about by detailing the various projects and experiments that Palisades has been undertaking with the goal of demonstrating that AI agents let loose on the world are capable of wreaking far more havoc than we expect. We leave the conversation more wary of the machines than ever - less because we think hyper-intelligent machines are just around the corner, and more because Ladish paints a visceral picture of the cage we're building ourselves into.

PATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB
MERCH: Rock some DemystifySci gear: https://demystifysci.myspreadshop.com/
AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98

(00:00) Go!
(00:07:36) Risks from Nuclear Wars and Emerging Technologies
(00:15:01) Experiments with AI Agents
(00:25:11) Enhanced AI as Tools vs. Intelligent Agents
(00:34:39) AI Learning Through Games
(00:44:04) AI Goal Accomplishment
(00:55:01) Intelligence and Reasoning
(01:07:11) Technological Arms Race and AI
(01:17:16) The Rise of AI in Corporate Roles
(01:25:20) Inception and Incentivization Issues in AI
(01:35:12) AI Threats and Comparisons to Bioterrorism
(01:45:13) Constitutional Analogies and Regulatory Challenges
(01:55:11) AI as a Threat to Human Control
(02:07:02) Challenges in Managing Technological Advancements
(02:16:49) Advancements and Risks in AI Development
(02:25:01) Current AI Research and Public Awareness

#FutureOfAI, #AlgorithmicControl, #Cybersecurity, #AI, #AISafety, #ArtificialIntelligence, #TechnologyEthics, #FutureTech, #AIRegulation, #AIThreats, #Innovation, #TechRisks, #SyntheticBiology, #TechGovernance, #HumanControl, #AIAlignment, #AIAdvancement, #TechTalk, #Podcast, #TechEthics, #sciencepodcast, #longformpodcast

Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience AND our material science investigations of atomics, @MaterialAtomics https://www.youtube.com/@MaterialAtomics
Join our mailing list https://bit.ly/3v3kz2S

PODCAST INFO: Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog
- RSS: https://anchor.fm/s/2be66934/podcast/rss
- Donate: https://bit.ly/3wkPqaD
- Swag: https://bit.ly/2PXdC2y
SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci
MUSIC:
- Shilo Delay: https://g.co/kgs/oty671

Brain in a Vat
Will Machines Rule the World? | Barry Smith

Brain in a Vat

Nov 3, 2024 · 60:45


Does Searle's Chinese Room show that AI is not intelligent or creative? Does AI learn the way humans do? And could AI ever be capable of truly creative thought? [00:00] Introduction to the Chinese Room Argument [03:55] The Evolution of Human Language [05:58] ChatGPT's Capabilities and Limitations [12:09] Human Understanding vs. AI Responses [15:33] The Complexity of Human Desires [20:24] The Nature of Human and Machine Intelligence [30:58] AI and Creativity: A Writer's Perspective [33:08] The Limits of AI Creativity [35:01] The Future of AI and AGI [36:05] Thermodynamics and Human Creativity [39:13] Live Experiment: AI Poetry [42:40] AI's Impressive Achievements [49:42] The Debate on AGI [59:53] Final Thoughts

Zero to Unicorn
Enhancing Machine Intelligence

Zero to Unicorn

Aug 2, 2024 · 76:26


SummaryGuido Meardi, CEO of V-Nova, discusses the importance of data compression in digital technology and its impact on various industries. He explains how V-Nova's compression technology enables more efficient use of data, leading to better quality and cost savings. Meardi also shares the exciting future of volumetric movies and immersive entertainment, where viewers can be inside the movie and experience it firsthand. He highlights the role of AI in data compression and its potential to enhance machine intelligence. Meardi's journey from being a management consultant at McKinsey to becoming an entrepreneur is also explored. Guido Meardi discusses the importance of mentors and self-confidence in entrepreneurship. He shares his experience of transitioning from a successful career at McKinsey to starting his own company, V-Nova. He emphasizes the challenges of securing funding for innovative ventures and the need to balance self-confidence with listening to feedback. Meardi also highlights the importance of surrounding oneself with mentors and a supportive team. He discusses the ongoing challenges of driving adoption and expanding into new markets. His dream is to fulfill the promise of V-Nova's technology and make a significant impact in the field of video compression.TakeawaysData compression plays a crucial role in digital technology, enabling more efficient use of data and leading to better quality and cost savings.V-Nova's compression technology, such as MPEG-5 LCVC, allows for significant reductions in data transmission and processing power while maintaining quality.The future of entertainment includes volumetric movies, where viewers can be inside the movie and experience it firsthand.AI can enhance data compression and improve machine intelligence, leading to more efficient processing and analysis of data.Guido Meardi's journey from being a management consultant at McKinsey to becoming an entrepreneur highlights the importance of driving positive change and making a difference in the world. Choosing mentors and having multiple mentors is crucial for personal growth and success in entrepreneurship.Balancing self-confidence with the ability to listen to feedback is essential for making informed decisions.Securing funding for innovative ventures can be challenging, especially when the idea is unconventional.Surrounding oneself with a supportive team and mentors is important for navigating the entrepreneurial journey.Driving adoption and expanding into new markets are ongoing challenges for technology companies.The dream is to fulfill the promise of innovative technology and make a significant impact in the industry.Linkswww.norhart.com Beat the banks! Interested in earning up to 8.5% interest? Visit Norhart Invest to learn more!

Redefining AI - Artificial Intelligence with Squirro
Spotlight Eleven: How to Save Human Ability in an Age of Intelligent Machines

Redefining AI - Artificial Intelligence with Squirro

Jun 20, 2024 · 1:22


Season Three - Spotlight Eleven. Our eleventh spotlight of this season is a snippet of our upcoming episode: Matt Beane - How to Save Human Ability in an Age of Machine Intelligence. Join host Lauren Hawker Zafer as she engages with Matt Beane. Don't miss this unique episode redefining what it means to develop skill in the age of machine intelligence.

Who is Matt Beane? Matt Beane does field research on work involving robots and AI to uncover systematic positive exceptions that we can use across the broader world of work. His award-winning research has been published in top management journals such as Administrative Science Quarterly and Harvard Business Review, and he has graced the TED stage. He also took a two-year hiatus from his PhD at MIT's Sloan School of Management to help found and fund Humatics, a full-stack IoT startup. In 2012 he was selected as a Human-Robot Interaction Pioneer, and in 2021 he was named to the Thinkers50 Radar list. Beane is an assistant professor in the Technology Management department at the University of California, Santa Barbara, and a Digital Fellow with Stanford's Digital Economy Lab and MIT's Initiative on the Digital Economy. When he's not studying intelligent technologies and learning, he enjoys playing guitar; his morning coffee ritual with his wife, Kristen; and reading science fiction.

“If you're worried about your skills becoming obsolete, this book may be your saving grace. Matt Beane has spent his career studying how to gain and maintain expertise as technology evolves, and his analysis is both engrossing and edifying.” —Adam Grant, #1 New York Times bestselling author of Hidden Potential and Think Again, and host of the TED podcast WorkLife

“Beane shows us the true human-centered approach to AI advancements and how we must act now to achieve the next generation of human skills coupled with the productivity gains from AI.” —Fei-Fei Li, Sequoia Professor of Computer Science and Founding Director of the Stanford Institute for Human-centered AI (HAI), Stanford University

#ai #redefiningai #techpodcast #squirro

Design Better Podcast
Bonus: AI and the Creative Process

Design Better Podcast

Jun 4, 2024 · 61:04


Generative AI is finding its way into the tools and processes that power creative work. Exciting? Terrifying? Maybe a little of both. Adobe has been not only shipping impressive generative AI tools and features, but thinking about the implications this new technology could have on creative careers.  Adobe invited us to their offices in San Francisco for a conversation with a panel of leaders including Rachana Rele, Samantha Warren, Danielle Morimoto, and Laura Herman who shared how they and their teams are building and training AI models ethically while bringing innovation to the creative process. Find the transcript, show notes and more on our Substack: https://designbetterpodcast.com/p/bonus-ai-and-the-creative-process Panelists Rachana Rele, VP of Design, Generative AI, Emerging Products, & Adobe Ventures Rachana is at the forefront of shaping the future of design and technology. In her role, she leads the charge in harnessing the power of generative AI, and Adobe Firefly, to unlock creativity for creatives, communicators, and marketers. She serves as a product leader, shepherding incubations from zero to one and guiding emerging businesses like Adobe Stock to achieve scale. With a deep-seated passion for fostering world-class design teams, Rachana thrives on crafting experiences that resonate with customers and drive tangible value for businesses. Rachana holds both bachelor's and master's degrees in industrial engineering with a specialized focus on human-computer interaction. Her student-always mindset has led her to pursue an Executive MBA at Haas School of Business, UC Berkeley (class of 2025). Samantha Warren, Sr Design Director, Machine Intelligence and New Technologies Samantha is the Senior Design Director for MINT (Machine Intelligence and New Technologies), where we focus on Emerging projects, Adobe Firefly, and Artificial Intelligence across Adobe software. Samantha specializes in product strategy and user experience design. Her superpower is leading teams with vision while managing practical execution. Danielle Morimoto, Sr Design Manager, Adobe Firefly Danielle Morimoto a Sr. Design Manager for Generative AI with the Machine Intelligence and New Technologies team at Adobe. I've worked on a range of projects from initiatives supporting emerging artists ages 13 to 24 that are using creativity as a force for positive impact, to the next evolution of Creative Cloud on the web. I've helped define the most compelling experiences for development over the next 1–3 years by uncovering untapped potential and ultimately identifying how people could be using Adobe in the future. I'm an avid road cyclist, NBA Golden State Warriors fan and lover of ice cream. Laura Herman, Sr Research Manager, Adobe Firefly Laura Herman is the Head of AI Research at Adobe and a doctoral researcher at the University of Oxford's Internet Institute. Laura's academic research examines the impact of algorithmic curation on global visual cultures, taking an inclusive and international approach with a particular focus on the Global South. At Adobe, Laura leads the team that researches Generative AI for Creative Cloud. Previous technologies that she has worked on have been acknowledged as Apple's “App of the Day” and as a Webby People's Choice Award winner. Laura has previously held research positions at Intel, Harvard University, and Princeton University, from which she graduated with honors in Neuroscience & Psychology. Learn more about your ad choices. Visit megaphone.fm/adchoices

FoodNavigator-USA Podcast
Soup-To-Nuts Podcast: Spate leverages machine intelligence to identify, separate emerging trends from fads

FoodNavigator-USA Podcast

May 30, 2024 · 26:18


Today's niche diets, social media blips and internet search spikes could be tomorrow's must-have functional ingredients, breakout snacks or new kitchen staples -- the trick is spotting them early and knowing whether they represent long-term trends or short-term fads and how best to position a launch or brand to capitalize on consumer demand.

Infinite Machine Learning
How Symbolic AI is Transforming Critical Infrastructure

Infinite Machine Learning

May 28, 2024 · 38:08


Eric Daimler is the cofounder and CEO of Conexus AI, a data management platform that provides composable and machine-verifiable data integration. He was previously an assistant dean and assistant professor at Carnegie Mellon University. He was the founding partner of Hg Analytics and managing director at Skilled Science. He was also the White House Presidential Innovation Fellow for Machine Intelligence and Robotics. Eric's favorite book: ReCulturing (Author: Melissa Daimler) (00:00) Understanding Symbolic AI(02:42) Symbolic AI mirrors biological intelligence(06:01) Category Theory(08:42) Comparing Symbolic AI and Probabilistic AI(11:22) Symbolic Generative AI(14:19) Implementing Symbolic AI(18:25) Symbolic Reasoning(21:24) Explainability(24:39) Neuro Symbolic AI(26:41) The Future of Symbolic AI(30:43) Rapid Fire Round--------Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: https://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi 

The Gradient Podcast
Seth Lazar: Normative Philosophy of Computing

The Gradient Podcast

May 23, 2024 · 110:17


Episode 124. You may think you're doing a priori reasoning, but actually you're just over-generalizing from your current experience of technology.

I spoke with Professor Seth Lazar about:
* Why managing near-term and long-term risks isn't always zero-sum
* How to think through axioms and systems in political philosophy
* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI

Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.

Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.

Outline:
* (00:00) Intro
* (00:54) Ad read — MLOps conference
* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation
* (03:53) Attention allocation as an independent good (or bad)
* (08:22) Axioms in political philosophy
* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust
* (15:05) AI safety / catastrophic risk concerns
* (22:10) Superintelligence arguments, reasoning about technology
* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?
* (35:55) GPT-2, model weights, related debates
* (39:11) Power and economics — coordination problems, company incentives
* (50:42) Morality tales, relationship between safety and capabilities
* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy
* (1:02:28) What is a feasibility horizon?
* (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter
* (1:14:25) Sociotechnical lenses, narrowly technical solutions
* (1:19:47) Experiments for responsibly integrating AI systems into society
* (1:26:53) Helpful/honest/harmless and antagonistic AI systems
* (1:33:35) Managing incentives conducive to developing technology in the public interest
* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia
* (1:46:54) How we can help legitimize and support interdisciplinary work
* (1:50:07) Outro

Links:
* Seth's Linktree and Twitter
* Resources
* Attention, moral skill, and algorithmic recommendation
* Catastrophic AI Risk slides

Get full access to The Gradient at thegradientpub.substack.com/subscribe

CERIAS Security Seminar Podcast
David Stracuzzi, Defining Trusted Artificial Intelligence for the National Security Space

CERIAS Security Seminar Podcast

Apr 24, 2024 · 51:26


For the past four years, Sandia National Laboratories has been conducting a focused research effort on Trusted AI for national security problems. The goal is to develop the fundamental insights required to use AI methods in high-consequence national security applications while also improving the practical deployment of AI. This talk looks at key properties of many national security problems along with Sandia's ongoing effort to develop a certification process for AI-based solutions. Along the way, we will examine several recent and ongoing research projects, including how they contribute to the larger goals of Trusted AI.  The talk concludes with a forward-looking discussion of remaining research gaps. About the speaker: David manages the Machine Intelligence and Visualization department, which conducts cutting-edge research in machine learning and artificial intelligence for national security applications, including the advanced visualization of data and results.  David has been studying machine learning in the broader context of artificial intelligence for over 15 years.  His research focuses on applying machine learning methods to a wide variety of domains with an emphasis on estimating the uncertainty in model predictions to support decision making.  He also leads the Trusted AI Strategic Initiative at Sandia, which seeks to develop fundamental insights into AI algorithms, their performance and reliability, and how people use them in national security contexts.  Prior to joining Sandia, David spent three years as research faculty at Arizona State University and one year as a postdoc at Stanford University developing intelligent agent architectures. He received his doctorate in 2006 and MS in 2002 from the University of Massachusetts at Amherst for his work in machine learning.  David earned his Bachelor of Science from Clarkson University in 1998.Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

Town Hall Seattle Arts & Culture Series
251. Robots Who Paint: What's Next with AI and Art?

Town Hall Seattle Arts & Culture Series

Apr 1, 2024 · 80:50


Three expert guests discuss the implications of AI and the fine arts in a conversation moderated by Steve Scher.  Scientist and founder of the Artists and Machine Intelligence program at Google, Blaise Agüera y Arcas, will offer his “news from the front” about the latest developments in AI capabilities, and what he foresees ahead.  Alex Alben, technology executive, author, and law professor, will review the implications of AI to the artist from the point of view of intellectual property: is anything on the internet up for grabs, or is compensation for image “borrowing” a possibility? Finally, painter Jason Puccinelli, who uses AI as one of his tools in image creation, will talk about what he finds exciting and useful, and what he finds problematic, about this new resource. Presented by Town Hall Seattle and Gage Academy of Art.

Discover Daily by Perplexity
Anthropic's Claude 3, Apple's M3 MacBook Air, and Google's AI Program for Publishers

Discover Daily by Perplexity

Mar 5, 2024 · 7:35 · Transcription available


This episode takes a closer look at Anthropic's Claude 3 AI models, known for their advanced cognitive capabilities and safety features, Apple's introduction of the M3 MacBook Air with its enhanced performance and sustainability, and Google's AI program aimed at supporting publishers.

For more on these stories:
Anthropic launches Claude 3
Apple unveils M3 MacBook Air
Google pays publishers to use AI

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn

Quarantine Sessions with Jake Kobrin
Cybernetic Conjurations: The Nexus of Magic and Machine Intelligence

Quarantine Sessions with Jake Kobrin

Jan 10, 2024 · 8:52


“Cybernetic Conjurations: The Nexus of Magic and Machine Intelligence.” This piece goes into the history of magic, introducing key figures such as Aleister Crowley and John Dee – both famous for their mystical practices – and relates their ideas to our contemporary development of AI. It delves into how the ancient ideas of summoning and servitors correlate with contemporary digital creations, presenting them as a never-ending thread of human intention to shape reality. The article also discusses the ethical considerations and philosophical issues raised by AI's exponential rise, drawing parallels with ancient magicians' goals. It invites readers to ponder the place of AI in our lives and its potential for either elevating or undermining them.

Tough Tech Today with Meyen and Miller
Tough Tech Today Season 4 Preview: Exploring Oceans, Quantum Realms, and Machine Intelligence

Tough Tech Today with Meyen and Miller

Jan 9, 2024 · 13:32


Tough Tech Today is transitioning seasons, recapping our Season 3 themes exploring the tough tech domains of biology, space, and fusion energy. We are also preparing for the show's fourth season, and we are really excited for it!

Upcoming themes:
Blue tech – The advanced technologies of the maritime industry. Incredible machines, sustainable oceans, and mysteries to solve in our planet's seas.
Quantum sciences – A wild world where physics gets weird; we are super curious about opportunities in computing, sensing, and communication.
Artificial Intelligence – While it was only fairly recently that "A.I." debuted in the pop-culture zeitgeist, for years tech trailblazers have been developing incredible applications of machine intelligence to solve our world's toughest challenges.

Thank you, our Season 3 guests!
BioTech: New Equilibrium Biosciences - Virginia Burger; Elevian - Mark Allen; Concerto Biosciences - Cheri Ackerman
Space: Space Capital - Chad Anderson; Mithril Technologies - Scarlett Koller; Arkenstone Ventures/USAF USSF - Preston Dunlap
Fusion: Proxima Fusion - Francesco Sciortino; TDK Ventures / Type One Energy - Tina Tosukhowong; Focused Energy - Thomas Forner and Pravesh Patel

P.S. Thank you to our tough tech champions. We really appreciate your support! If you'd like to level-up your support of our work, take a look at our pay-if-you-can membership options so you can help us bring Tough Tech Today to more folks!

TechStuff
The Fallout from OpenAI Firing CEO Sam Altman

TechStuff

Nov 20, 2023 · 51:27 · Transcription available


In a dramatic turn of events, OpenAI's board of directors fired CEO and co-founder Sam Altman. Then they tried to hire him back. Then they announced a former Twitch CEO will lead the company. What the what?See omnystudio.com/listener for privacy information.

The Genetics Podcast
EP 111: Combining human and machine intelligence in protein engineering with Dr. James Field, CEO of LabGenius

The Genetics Podcast

Nov 9, 2023 · 35:34


In this episode of The Genetics Podcast, we welcome Dr. James Field, founder and CEO of LabGenius. Join us as we delve into LabGenius' cutting-edge approach that utilises machine learning, artificial intelligence, and sophisticated robotics to advance antibody discovery and drug development. As a bonus, learn about James' path from scientist to CEO, and how he created LabGenius.

In Pursuit of Development
Humanity's Enduring Quest for Power and Prosperity – Daron Acemoglu

In Pursuit of Development

Oct 25, 2023 · 48:10


We engage in a discussion centered around Daron Acemoglu's latest book, co-authored with Simon Johnson, titled Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. The choices we make regarding technology can either advance the interests of a select elite or serve as the foundation for widespread prosperity. But technology's trajectory can be, and should be, controlled and directed for the benefit of all. The remarkable advances in computing over the past fifty years have the potential to be tools of empowerment and democratization, but only if decision-making power is dispersed rather than concentrated in the hands of a few overconfident tech leaders.

Daron Acemoglu is Professor of Economics at the Massachusetts Institute of Technology, MIT. @DAcemogluMIT (Cover photo of Daron Acemoglu by Cody O'Loughlin)

Key highlights:
Introduction - 00:24
Understanding “progress” - 04:06
Optimism in an era of doom and gloom - 12:00
The power of persuasion - 16:10
Shared prosperity, welfare, and whether technology is always useful - 25:08
Machine intelligence vs. machine usefulness - 30:12
How technology (e.g., AI) can help promote development in low-income countries - 36:50

Host: Professor Dan Banik (@danbanik @GlobalDevPod)
Apple | Google | Spotify | YouTube
Subscribe: https://globaldevpod.substack.com/

Economist Podcasts
Babbage: How AI promises to revolutionise science

Economist Podcasts

Sep 20, 2023 · 46:27


Discussions about artificial intelligence tend to focus on its risks, but there is also excitement on the horizon. AI tools, like the models beneath ChatGPT, are being increasingly used by scientists for everything from finding new drugs and materials to predicting the shapes of proteins. Self-driving lab robots could take things even further towards making new discoveries. As it gets ever more useful, could AI change the scientific process altogether?Jane Dyson, structural biologist at the Scripps Research Institute in La Jolla, California, explains why Google DeepMind's AlphaFold tool is useful, but scientists should be aware of its limitations. This week, Google DeepMind released a new tool to unpick the link between genes and disease, as Pushmeet Kohli, head of the company's “AI for Science” team, explains. Also, Kunal Patel, one of our producers, meets Erik Bjurström, a researcher at Chalmers University of Technology and Ross King, a professor of Machine Intelligence at Chalmers University of Technology and at the University of Cambridge. They explain why self-driving lab robots could make research more efficient. Alok Jha, The Economist's science and technology editor hosts, with Abby Bertics, our science correspondent and Tom Standage, deputy editor. Sign up for Economist Podcasts+ now and get 50% off your subscription with our limited time offer: economist.com/podcastsplus-babbage. You will not be charged until Economist Podcasts+ launches.If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription.For more information about Economist Podcasts+, including how to get access, please visit our FAQs page. Hosted on Acast. See acast.com/privacy for more information.

Babbage from Economist Radio
Babbage: How AI promises to revolutionise science

Babbage from Economist Radio

Sep 20, 2023 · 46:27


Discussions about artificial intelligence tend to focus on its risks, but there is also excitement on the horizon. AI tools, like the models beneath ChatGPT, are being increasingly used by scientists for everything from finding new drugs and materials to predicting the shapes of proteins. Self-driving lab robots could take things even further towards making new discoveries. As it gets ever more useful, could AI change the scientific process altogether?Jane Dyson, structural biologist at the Scripps Research Institute in La Jolla, California, explains why Google DeepMind's AlphaFold tool is useful, but scientists should be aware of its limitations. This week, Google DeepMind released a new tool to unpick the link between genes and disease, as Pushmeet Kohli, head of the company's “AI for Science” team, explains. Also, Kunal Patel, one of our producers, meets Erik Bjurström, a researcher at Chalmers University of Technology and Ross King, a professor of Machine Intelligence at Chalmers University of Technology and at the University of Cambridge. They explain why self-driving lab robots could make research more efficient. Alok Jha, The Economist's science and technology editor hosts, with Abby Bertics, our science correspondent and Tom Standage, deputy editor. Sign up for Economist Podcasts+ now and get 50% off your subscription with our limited time offer: economist.com/podcastsplus-babbage. You will not be charged until Economist Podcasts+ launches.If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription.For more information about Economist Podcasts+, including how to get access, please visit our FAQs page. Hosted on Acast. See acast.com/privacy for more information.

Digital Optimist
Episode 34: Machine Intelligence: When Digital Minds Wake Up

Digital Optimist

Aug 11, 2023 · 28:53


What is the human mind in relationship to our brain? What is a machine mind going to look like? How do humans decide on their next best word or action and how does an LLM do the same? At what point might we believe that a machine mind is waking up? In this episode, Scott dives into the philosophy of mind by asking you the listener a handful of intriguing questions. Along the way you might come to believe that we overstate the sophistication of the human mind and under predict the possibility for a machine mind to be awake. If nothing else, this episode will help you think about analogies and ideas you might not have heard before and that will help inspire your mind!

Ground Truths
Melanie Mitchell: Straight Talk on A.I. Large Language Models

Ground Truths

Aug 4, 2023 · 39:17


Transcript with Links

Eric Topol (00:00): This is Eric Topol, and I'm so excited to have the chance to speak with Melanie Mitchell. Melanie is the Davis Professor of Complexity at the Santa Fe Institute in New Mexico, and I look to her as one of the real leaders, and one with balance and thoughtfulness, in the high-velocity AI world of large language models that we live in. By way of introduction, the way I first got to meet Professor Mitchell was through her book, Artificial Intelligence: A Guide for Thinking Humans, and it sure got me thinking back about four years ago. So welcome, Melanie.

Melanie Mitchell (00:41): Thanks, Eric. It's great to be here.

The Lead Up to ChatGPT via Transformer Models

Eric Topol (00:43): Yeah. There's so much to talk about, and you've been right in the middle of many of these things, so that's what makes it especially fun. I thought we'd start off with a little bit of history, because we both were writing books about AI back in 2019, and the world has kind of changed since. In November, when ChatGPT got out there, it signaled there was this big thing called a transformer model. I don't think many people really know the difference between a transformer model, which had been around for a while but maybe hadn't come to the surface, and the deep neural networks that ushered in deep learning, which you had so systematically addressed in your book.

Melanie Mitchell (01:29): Right. Transformers were kind of a new thing. I can't remember exactly when they came out, maybe 2018, something like that, from Google. They were an architecture that showed you didn't really need a recurrent neural network in order to deal with language. Earlier, in Google Translate and other language processing systems, people were using recurrent neural networks, networks that had feedback from one time step to the next. But now we have the transformers, which instead use what they call an attention mechanism, where the entire text that the system is dealing with is available all at once. The name of the paper, in fact, was "Attention Is All You Need," and by that they meant this particular attention mechanism in the neural network. That was really a revolution and enabled this new era of large language models.

Eric Topol (02:34): Yeah. And as you aptly pointed out, that was five years ago. And then it took, oh, five years for it to reach the public domain in the form of ChatGPT. So what was going on in the background?

Melanie Mitchell (02:49): Well, the idea of language models (LLMs), that is, neural network language models that learn by trying to predict the next word in a text, had been around for a long time. We now have GPT-4, which is what's underlying at least some of ChatGPT, but there were GPT-1 and GPT-2, as you probably remember. All of this was going on over those many years, and those of us in the field have seen more of a progression, with the increase in abilities of these increasingly large language models; it has really been an evolution. But the general public didn't have access to them, and ChatGPT was the first one that was generally available. That's why it sort of seemed to appear out of nothing.

SPARKS OF ARTIFICIAL GENERAL INTELLIGENCE

Sentience vs Intelligence

Eric Topol (03:50): Alright. So the inside world of computer science saw a more natural progression, but most people didn't know that LLMs were on the move. They were kind of stunned: oh, look at these conversations I can have, and how humanoid it seemed. And you'll recall there was a fairly well-publicized event where a Google employee, back I think last fall, was put on suspension and ultimately left Google because he felt that the AI was sentient. Maybe you'd want to comment on that, because it's kind of a precursor to some of the other things we're going to discuss.

Melanie Mitchell (04:35): Right. So one of the engineers who was working with their version of ChatGPT, which I think at the time was called LaMDA, was having conversations with it and came to the conclusion that it was sentient, whatever that means: that it was aware, that it had feelings, that it experienced emotions, and all of that. He was so worried about this that he made it public by releasing some transcripts of his conversations with it. I don't think he was allowed to do that under his Google contract, and that was the issue. That made a lot of news, and Google pushed back and said, no, no, of course it's not sentient. And then there was a lot of debate in the philosophy sphere about what sentient actually means and how you would know if something is sentient. And it's kind of gone from there.

Eric Topol (05:43): Yeah. And then what was interesting is that in March, based upon GPT-4, the Microsoft Research group published this "sparks" paper where they said it seems to have some artificial general intelligence, AGI, qualities, kind of making the same claim to some extent. Right?

Melanie Mitchell (06:05): Well, that's a good question. Intelligence is one thing, sentience is another. There's a question of how they're related, or if they're related at all, and what they actually mean. These terms are not well-defined, and that's one of the problems. But I think most people in AI would say that intelligence and sentience are different: something can be intelligent, or act intelligently, without having any sort of awareness or sense of self or feelings or whatever sentience might mean. So I think the sparks-of-AGI paper from Microsoft was more about saying that they thought GPT-4, the system they were experimenting with, showed some kind of generality in its ability to deal with different kinds of tasks. This contrasts with older-fashioned AI, which typically was narrow and could only do one task: it could play chess, play Go, do speech recognition, or generate translations, but it couldn't do all of those things. Now we have these language models, which seem to have some degree of generality.

The Persistent Gap Between Humans and LLMs

Eric Topol (07:33): Now that gets us perfectly to an important Nature feature last week, which was called the "Easy Intelligence Test that AI chatbots fail." It made reference to an important study you did. First, I guess the term ARC, the Abstraction and Reasoning Corpus, was introduced a few years back by Francois Chollet, and then you did a ConceptARC test. So maybe you can tell us about this, because it seemed to show a pretty substantial gap between humans and GPT-4.

Melanie Mitchell (08:16): Right. So Francois Chollet is a researcher at Google who put together this set of intelligence-test-like puzzles, visual reasoning puzzles, that test for abstraction abilities or analogy abilities. He put it out there as a challenge, and a whole bunch of people participated in a competition to get AI programs to solve the problems, and none of them were very successful. What our group did was this: we thought the original challenge was fantastic, but one of the problems was that it was too hard; it was even hard for people. And it didn't really systematically explore concepts, whether a system understood a particular concept. As an example, think about the concept of two things being the same, or two things being different.

(09:25): So I can show you two things and say, are these the same or are they different? Well, it turns out that's actually a very subtle question, because when we say "the same" we can mean the same size, the same shape, the same color; there are all kinds of attributes in which things can be the same. So what our benchmark did was take concepts like same versus different and try to create lots of different challenges, puzzles that required understanding of that concept. These are very basic spatial and semantic concepts, similar to the ones that Chollet had proposed, but much more systematic, because one of the big issues in evaluating AI systems is that people evaluate them on particular problems.

(10:24): For example, I think a lot of people know that ChatGPT was able to answer many questions from the bar exam. But if you take a single question from the bar exam and think about what concept it's testing, it may be that ChatGPT could answer that particular question but can't answer variations that involve the same concept. So we tried to take this ARC domain, the Abstraction and Reasoning Corpus domain, look at particular concepts, and ask systematically whether the system can understand different variations of the same concept. Then we tested these problems on humans, we tested them on the programs that were designed to solve the ARC challenges, and we tested them on GPT-4, and we found that humans way outperformed all the machines. But there's a caveat: these are visual puzzles, and we're giving them to GPT-4, which is a language model, a text system. Now, GPT-4 has been trained on images, but we're not using the version that can deal with images, because that hasn't been released yet. So we're giving the system our problems in a text-based format, rather than the pictures the humans actually get to see. This can make a difference. I would say our results are preliminary.

Eric Topol (11:57): Well, what do you think will happen when you can use inputs with images? Do you think it will equilibrate, that there'll be parity, or will there still be a gap in that particular measure of intelligence?

Melanie Mitchell (12:11): I would predict there will still be a big gap, but I guess we'll see.

The Biggest Question: Stochastic Parrot or LLM Real Advance in Machine Intelligence?

Eric Topol (12:17): Well, that's what we want to get into more. We want to drill down on the biggest question of large language models,
and that is: what is their level of intelligence, really? Is it something beyond the so-called stochastic parrot, the statistical ability to adjudicate language and words? There was a paper this week in Nature Human Behaviour, not a journal that normally publishes these kinds of papers. As you know, it was by Taylor Webb and colleagues at UCLA, and it was basically saying that for analogical reasoning, making analogies, which would be more of a language task, I guess, but also some image capabilities, it could do as well as or better than humans. And these were college students. So, just to qualify, they're maybe not fully representative of the species, but they're at least some learned folks. So what did you think of that study?

Melanie Mitchell (13:20): Yeah, I found it really fascinating, and kind of provocative. It goes along with many studies that have been applying tests that were designed for humans, psychological tests, to large language models. This one was applying analogy tests that psychologists have done on humans to large language models. But there's always an issue of interpreting the results, because we know these large language models most likely do not think like we do. So one question is, how are they performing these analogies? How are they making them? This brings up some issues with evaluation when we try to evaluate large language models using tests that were designed for humans. One question is, were these tests actually in the training data of the large language model? These language models are trained on enormous amounts of text that humans have produced, and some of the tests that the paper was using were things that had been published in the psychology literature.

(14:41): So one question is, to what extent were those in the training data? It's hard to tell, because we don't know exactly what the training data is. That's one question. Another question is, are these systems actually using analogical reasoning the way that we humans use it, or are they using some other way of solving the problems? That's also hard to tell, because these systems are black boxes, but it might actually matter, because it might affect how well they're able to generalize. If I can make an analogy, you would usually assume that I could use that analogy to understand some new situation by analogy to some old situation. But it's not totally clear that these systems are able to do that in any general way. So I do think these results, these analogy results, are really provocative and interesting.

(15:48): But they will require a lot of further study to really make sense of what they mean. When ChatGPT passes a bar exam, and let's say it does better than most humans, can you say, well, can it now be a lawyer? Can it go out and replace human lawyers? A human who passed the bar exam can do that, but I don't know if you can make the same assumption for a language model, because the way it's answering the questions, its reasoning, might be quite different and might not imply the same kinds of more general abilities.

Eric Topol (16:32): Yeah. That's really vital. And something else that you just brought up, in multiple dimensions, is the problem of transparency. We don't even know the specs, the actual training, so many of the components that led to the model, and by not knowing this we're kind of stuck trying to interpret it. So, if you could comment: transparency seems to be a really big issue, and how are we ever going to understand when there are certain aspects or components of intelligence where there does appear to be something surprising, something you wouldn't have anticipated, and how could that be? Or, on the other hand, why is it failing? Is transparency the key to this, or is there something more to be unraveled?

Melanie Mitchell (17:29): I think transparency is a big part of it. Transparency, meaning knowing what data the system was trained on, what the architecture of the system is, and what other aspects go into designing the system. Those are important for us to understand how these systems actually work and to assess them. There are some methods people are using to try to tease out the extent to which these systems have actually developed the kind of intelligence that people have. There was a paper that came out also last week, I think from a group at MIT, where they looked at several tasks that GPT-4 did very well on, things like certain computer programming, code generation, and mathematics tasks.

(18:42): And they said, well, if a human was able to do these kinds of tasks, some small change in the task probably shouldn't matter; the human would still be able to do it. As an example in programming, generating code, there's this notion that an array is indexed from zero: the first element is indexed as zero, the second as one, and so on. But some programming languages start at one instead of zero. So what if you just said, now change to starting at one? Probably a human programmer could adapt to that very quickly, but they found that GPT-4 was not able to adapt very well.

Melanie Mitchell (19:33): So the question was, is it able to write the program mostly by picking things it has already seen in its training data, or is it actually developing some kind of human-like understanding of the program? And they were finding that, to some extent, it was more the former than the latter.

Eric Topol (19:57): So when you process all this, do you lean more towards the pre-training and the stochastic parrot side, or do you think there is enhanced understanding, that we're seeing a level of machine intelligence, not broad intelligence, but at least some parts of what we would consider intelligence, that we've never seen before? Where do you find yourself?

Melanie Mitchell (20:23): Yeah, I think I'm sort of in the center.

Eric Topol (20:27): Okay. That's good.

Melanie Mitchell (20:28): Everybody has to describe themselves as a centrist, right? I don't think these systems are stochastic parrots. They're not just parroting the data they've been trained on, although they do that sometimes. I do think there is some reasoning ability there, some of what you might call intelligence. But the question is how you characterize it and, for me the most important thing, how you decide that these systems have a general enough understanding to trust them.

Eric Topol (21:15): Right. Right.

Melanie Mitchell (21:18): In your field, in medicine, I think that's a super important question. Maybe they can outperform radiologists on some kind of diagnostic task, but the question is: is that because they understand the data like radiologists do, or even better, and will therefore in the future be much more trustworthy? Or are they doing something completely different, which means they're going to make some very unhuman-like mistakes? And I think we just don't know.

End of the Turing Test

Eric Topol (21:50): Well, that's an important admission, if you will: we don't know. And, really zooming in on medical applications, some of them, of course, are not so critical for accuracy. For example, if you have a conversation in a clinic that's made into a note and all the other downstream tasks, you can still go right to the transcript and see exactly whether there was a potential miscue. But if you're talking about making a diagnosis in a complex patient, then if we see hallucination, confabulation, or whatever your favorite word is to characterize the false outputs, that's a big issue. I actually really love your Professor of Complexity title, because if there's anything complex, this would fulfill it. And also, would you say it's time to stop talking about the Turing test, to retire it? Is the Turing test over, because this is so much more complex than that?

Melanie Mitchell (22:55): Yeah. One problem with the Turing test is that there never was a Turing test. Turing never really gave the details of how the test should work. And so we've had Turing tests with chatbots since the two thousands where people have been fooled; it's not that hard to fool people into thinking they're talking to a human. So I do think the Turing test is not adequate for the question of: are these things thinking? Are they robustly intelligent?

Eric Topol (23:33): Yeah. One of my favorite stories you told in your book was about Clever Hans, and people basically being faked out into believing there was intelligence there. And I think this is so apropos. A term that is used a lot, and that I don't think a lot of people fully understand, is zero shot or one shot. Can you help explain that to the non-computer-science community?

Melanie Mitchell (24:01): Yeah. In the context of large language models, zero shot means I just ask you a question and expect you to answer it. One shot means I give you an example of a question and an answer, and now I ask you a new question that you should answer, but you already had an example. Two shot is when I give you two examples. So it's just a matter of how many examples I'm going to give you in order for you to get the idea of what I'm asking.

Eric Topol (24:41): Well, and in a sense, if you were pre-trained unknowingly, it might not be zero shot. That is, if the model was pre-trained with all the stuff that was really loaded into that first question or prompt, it might not really qualify as zero shot in a way. Right?

Melanie Mitchell (24:59): Yeah, right. If it's already seen that in its training data, it's learned it.

The Great LLM (Doomsday?) Debate: An Existential Threat

Eric Topol (25:06): Right. Exactly. Now, another topic related to all this is that you participated in what I would say is a historic debate: you and Yann LeCun, whom I would not have necessarily put together. I don't know that Yann is a centrist; I would say he's more on one end of the spectrum, versus Max Tegmark and Yoshua Bengio.

Eric Topol (25:37): Yoshua Bengio, who was one of the three notables for a Turing Award with Geoffrey Hinton. So you were in this debate, I think called a Musk debate.

Melanie Mitchell (25:52): Munk debate. Munk.

Eric Topol (25:54): Munk, I was going to say, not Musk, right. Munk debate. Yeah, the Munk Debates, which is a classic debate series out of, I think, the University of Toronto.

Melanie Mitchell (26:03): That's right.

Eric Topol (26:03): And it was debating, you know, is it all over? Is AI going to... and obviously there's been a lot of this in recent weeks and months since ChatGPT surfaced. I tried to access that debate, but since I'm not a member or subscriber, I couldn't watch it, and I'd love to. Can you give us the skinny of what was discussed and your position there?

Melanie Mitchell (26:29): Yeah. So actually you can, you can access it on YouTube.

Eric Topol (26:32): Oh, good. Okay. Good. I'll put the link in for this. Okay, great.

Melanie Mitchell (26:37): Yeah. So the resolution was: is AI an existential threat? By existential, they meant human extinction. So pretty dramatic, right? This debate has actually been going on for a long time, since the beginning of the talk about the "singularity," and there are many people in the AI world who fear that AI, once it becomes quote-unquote smarter than people, will get away from us, that we'll lose control of it.

(27:33): We'll give it some task, like solving the problem of carbon emissions, and it will then misinterpret, or simply not care about, the consequences; it will just maniacally try to achieve that goal and, in the process, accidentally kill us all. So that's one of the scenarios, and there are many different scenarios for this. A debate is kind of an artificial, weird, structured discussion where you have rebuttals and so on. But I think the debate really was about whether we should, right now, be focusing our attention on what's called existential risk, that is, that some future AI is going to become smarter than humans and then somehow destroy us, or whether we should be more focused on the immediate risks, the ones we have right now, like AI creating disinformation, fooling people into thinking it's human, magnifying biases in society, all the risks that people are experiencing immediately or will be very soon,
and that the debate was more about what should be the focus.

Eric Topol (29:12): Hmm.

Melanie Mitchell (29:13): And whether we can focus on the shorter-term, immediate risks while also focusing on the very long-term, speculative risks, and what the likelihood of those speculative risks is and how we would even estimate that. So that was kind of the topic of the debate.

Eric Topol (29:35): Did you all wind up agreeing, then, that...

Melanie Mitchell (29:38): No.

Eric Topol (29:38): Were you scared, or where did it land?

Melanie Mitchell (29:41): Well, I don't know. Interestingly, what they do is take a vote of the audience at the beginning and ask how many people agree with the resolution, and 67 percent of people agreed that AI was an existential threat. So it was two-thirds. Then at the end they take another vote and ask what percent of minds were changed, and that's the side that wins. But ironically, the voting mechanism broke at the end. So, technology for the win.

Eric Topol (30:18): Because it wasn't a post-debate vote?

Melanie Mitchell (30:21): But they did do an email survey, which is, I think, not very...

Eric Topol (30:26): No, not very good. No, you can't compare that. No.

Melanie Mitchell (30:28): Yeah. So technically our side won, but I don't take it as a win, actually.

Are You Afraid? Are You Scared?

Eric Topol (30:38): Well, I guess another way to put it: are you afraid? Are you scared?

Melanie Mitchell (30:44): So I'm not scared of super-intelligent AI getting out of control and destroying humanity. I think there are a lot of reasons why that's extremely unlikely.

Eric Topol (31:00): Right.

Melanie Mitchell (31:01): But I do fear a lot of things about AI. Some of the things I mentioned, yes, I think are real threats, real dire threats to democracy.

Eric Topol (31:15): Absolutely.

Melanie Mitchell (31:15): And to our information ecosystem, how much we can trust the information that we have. And also to people losing jobs to AI; I've already seen that happening. And the sort of disruption to our whole economic system. So I am worried about those things.

What About Open-Source LLMs, Like Meta's Llama 2?

Eric Topol (31:37): Yeah. No, I think the inability to determine whether something's true or fake in so many different spheres is putting us in a lot of jeopardy, leaving us highly vulnerable, though perhaps not facing the broad existential threat to the species. Still, serious stuff for sure. Now, another thing that's been of interest of late is the willingness of at least one of these companies, Meta, to put out their model, Llama 2, as open, to make it open for everyone so that they can do whatever specialized fine-tuning and whatnot. Is that a good thing? Is that a game changer for the field? Because obviously the computer resources, the GPUs [graphics processing units] used, over 25,000 for GPT-4, mean that not many groups or entities have that many GPUs on hand to build the base models. But is having an open model like Meta's available a good thing, or is that potentially going to be a problem?

Melanie Mitchell (32:55): Yeah, I think I would probably say yes to both.

Eric Topol (32:59): Okay. Okay.

Melanie Mitchell (33:01): No, 'cause it is a mixed bag. I think ultimately, you know, we talked about transparency, and open-source models are transparent. I mean, I don't think they have actually released information on the data they used to train it, so it lacks that transparency. But at least, if you are doing research and trying to understand how this model works, you have access to a lot of the model. It would be nice to know more about the data it was trained on, but there are a lot of big positives there. It also means that the data you then use to continue training it or fine-tuning it is not being given to a big company; you're not doing it through some closed API, like you do for OpenAI.

(33:58): On the other hand, as we just talked about, these models can be used for a lot of negative things, like spreading disinformation and so on, and making them generally available and tunable by anyone presents that risk. So I think there's an analogy with genetics, for example, or disease research. Scientists had sequenced the genome of the smallpox virus, and there was a big debate over whether they should publish it, because it could be used to create a new smallpox; but on the other hand, it also could be used to develop better vaccines and better treatments and so on. With any technology like that, there's always a balance between transparency, making it open, and keeping it closed. And then the question is, who gets to control it?

The Next Phase of LLMs and the Plateau of Human-Derived Input Content

Eric Topol (35:11): Yeah. Who gets to control it, and to understand the potential for nefarious use cases, the worst-case scenario. Sure. Well, you know, I look to you, Melanie, as a leading light because you are so balanced; that's the thing about you for which I have the highest level of respect, and that's why I like to read anything you write or wherever you're commenting on other people's work. Are you going to write another book?

Melanie Mitchell (35:44): Yeah, I'm thinking about it now. I think kind of a follow-up to my book, which, as you mentioned, like your book, came out before large language models came on the scene and before transformers and all of that stuff. And I think there really is a need for some non-technical explanation of all of this. But of course, every time you write a book about AI, it becomes obsolete by the time it's published.

Eric Topol (36:11): That's what I worry about. And that was actually going to be my last question to you: where are we headed? With GPT-5 and onward, the velocity is so high. Where can you find a steady state to write about and try to pull it all together? Or are we just going to be in some crazed zone here for some time, where things are moving too fast to get your arms around them?

Melanie Mitchell (36:43): Yeah, I don't know. I think there's a question of whether AI can keep moving so fast. Obviously it's moved extremely fast in the last few years, but the way it's moved fast is by having huge amounts of training data and scaling up these models. The problem now is that it's almost as if the field has run out of training data generated by people. And if people start using language models all the time for generating text, the internet is going to be full of generated text, right? Human...

Eric Topol (37:24): Written...

Melanie Mitchell (37:24): Text. And it's been shown that if these models keep being trained on the text they generate themselves, they start behaving very poorly. So that's a question: where's the new data going to come from?

Eric Topol (37:39): And there's lots of upset among people whose data are being used.

Melanie Mitchell (37:44): Oh, sure.

Eric Topol (37:45): Understandably. And, as you get at, is there a limit? There are only so many Wikipedias and internets and hundreds of thousands of books and whatnot to put in that are of human-sourced content. So do we reach a plateau of human-derived inputs? That's a really fascinating question. Perhaps things will not continue at such a crazed pace. The way you put together A Guide for Thinking Humans was so prototypic, because it was so thoughtful and it brought along those of us who were not trained in computer science to really understand where the state of the field was and where deep neural networks were. We need another one of those, and I nominate you to give us the right perspective. So, Melanie, Professor Mitchell, I'm so grateful to you, and all of us who follow your work remain indebted to you for keeping it straight. You don't ever get carried away, and we learn from that, all of us. It's really important, because there are so many people at one end of the spectrum or the other here, whether it's doomsday, or this is just a stochastic parrot, or open source, and whatnot. It's really good to have you as a reference anchor to help us along.

Melanie Mitchell (39:13): Well, thanks so much, Eric. That's really kind of you.

Get full access to Ground Truths at erictopol.substack.com/subscribe
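To make the attention idea from the interview above a bit more concrete, here is a minimal, illustrative sketch of scaled dot-product self-attention in Python with NumPy. It is a toy reconstruction of the general mechanism named in "Attention Is All You Need," not the full production transformer (no multi-head attention, masking, or positional encodings), and the shapes and random weights are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a whole sequence at once.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # every token compared with every other token
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted mix of value vectors

# Toy usage with random weights: 5 tokens, 16-dim embeddings, 8-dim projections.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
```

The point Mitchell makes, that the whole sequence is visible at once rather than fed through a recurrent loop, shows up in the scores matrix: every token's query is compared against every token's key in a single matrix product, with no step-by-step recurrence.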
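The zero-shot / one-shot / few-shot distinction Mitchell explains in the interview can also be shown as plain prompt strings. The translation task and wording below are invented for illustration; the only point is how many worked examples precede the new question.

```python
# Zero shot: just the question, no worked examples.
zero_shot = "Translate to French: 'The cat sleeps.' ->"

# One shot: one worked example, then the new question.
one_shot = (
    "Translate to French: 'Good morning.' -> 'Bonjour.'\n"
    "Translate to French: 'The cat sleeps.' ->"
)

# Two shot: two worked examples before the new question.
two_shot = (
    "Translate to French: 'Good morning.' -> 'Bonjour.'\n"
    "Translate to French: 'Thank you.' -> 'Merci.'\n"
    "Translate to French: 'The cat sleeps.' ->"
)
```

As Topol notes, if the model has already seen the task or the test items in its training data, even the "zero-shot" case is not really starting from nothing.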

English Academic Vocabulary Booster
3394. 202 Academic Words Reference from "Zeynep Tufekci: Machine intelligence makes human morals more important | TED Talk"

English Academic Vocabulary Booster

Play Episode Listen Later Aug 2, 2023 181:33


This podcast is a commentary and does not contain any copyrighted material from the reference source. We strongly recommend accessing or buying the reference source as well. ■Reference Source: https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important ■Post on this topic (you can get free learning materials!): https://englist.me/202-academic-words-reference-from-zeynep-tufekci-machine-intelligence-makes-human-morals-more-important-ted-talk/ ■YouTube Video: https://youtu.be/zRPg38xXceA (All Words), https://youtu.be/Rs_Ry1o58lY (Advanced Words), https://youtu.be/7f7NNHzJkeM (Quick Look) ■Top Page for Further Materials: https://englist.me/ ■SNS (Please follow!)

A Call to Actions
[Pt.3] Ethics Review | Biological research and self-driving labs in deep space supported by artificial intelligence

A Call to Actions

Play Episode Listen Later Jul 9, 2023 42:08


Lifelong best friend and Investigator, "Breem," joins me on part three of our Ethics Review of the officially released Machine Intelligence project document: Biological research and self-driving labs in deep space supported by artificial intelligence.

Made in Germany: Das Wirtschaftsmagazin
Emergency in the retirement home: the care robots are coming

Made in Germany: Das Wirtschaftsmagazin

Play Episode Listen Later May 23, 2023 4:22


Robots are supposed to help ease the shortage of care workers in retirement homes and hospitals. But do the people affected even want that? In Garmisch-Partenkirchen, TU München has a research center for Robotics and Machine Intelligence.

English Academic Vocabulary Booster
1317. 153 Academic Words Reference from "Refik Anadol: Art in the age of machine intelligence | TED Talk"

English Academic Vocabulary Booster

Play Episode Listen Later May 21, 2023 139:04


This podcast is a commentary and does not contain any copyrighted material from the reference source. We strongly recommend accessing or buying the reference source as well. ■Reference Source: https://www.ted.com/talks/refik_anadol_art_in_the_age_of_machine_intelligence ■Post on this topic (you can get free learning materials!): https://englist.me/153-academic-words-reference-from-refik-anadol-art-in-the-age-of-machine-intelligence--ted-talk/ ■YouTube Video: https://youtu.be/dHMes8aVN4k (All Words), https://youtu.be/BANldHGp3PA (Advanced Words), https://youtu.be/McvdjzGkOm4 (Quick Look) ■Top Page for Further Materials: https://englist.me/ ■SNS (Please follow!)

The Nonlinear Library
AF - LeCun's “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem by Steve Byrnes

The Nonlinear Library

Play Episode Listen Later May 8, 2023 25:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LeCun's “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem, published by Steve Byrnes on May 8, 2023 on The AI Alignment Forum. Summary This post is about the paper A Path Towards Autonomous Machine Intelligence (APTAMI) by Yann LeCun. It's a high-level sketch of an AI architecture inspired by the brain. APTAMI is mostly concerned with arguing that this architecture is a path towards more-capable AI. However, it is also claimed (both in the paper itself and in associated public communication) that this architecture is a path towards AI that is “controllable and steerable”, kind, empathetic, and so on. I argue that APTAMI is in fact, at least possibly, a path towards that latter destination, but only if we can solve a hard and currently-unsolved technical problem. This problem centers around the Intrinsic Cost module, which performs a role loosely analogous to “innate drives” in humans—e.g. pain being bad, sweet food being good, a curiosity drive, and so on. APTAMI does not spell out explicitly (e.g. with pseudocode) how to create the Intrinsic Cost module. It offers some brief, vague ideas of what might go into the Intrinsic Cost module, but does not provide any detailed technical argument that an AI with such an Intrinsic Cost would be controllable / steerable, kind, empathetic, etc. I will argue that, quite to the contrary, if we follow the vague ideas in the paper for building the Intrinsic Cost module, then there are good reasons to expect the resulting AI to be not only unmotivated by human welfare, but in fact motivated to escape human control, seek power, self-reproduce, etc., including by deceit and manipulation. Indeed, it is an open technical problem to write down any Intrinsic Cost function (along with training environment and other design choices) for which there is a strong reason to believe that the resulting AI would be controllable and/or motivated by human welfare, while also being sufficiently competent to do the hard intellectual tasks that we're hoping for (e.g. human-level scientific R&D). I close by encouraging LeCun himself, his colleagues, and anyone else to try to solve this open problem. It's technically interesting, very important, and we have all the information we need to start making progress now. I've been working on that problem myself for years, and I think I'm making more than zero progress, and if anyone reaches out to me I'd be happy to discuss the current state of the field in full detail. .And then there's an epilogue, which steps away from the technical discussion of the Intrinsic Cost module, and instead touches on bigger-picture questions of research strategy & prioritization. I will argue that the question of AI motivations merits much more than the cursory treatment that it got in APTAMI—even given the fact that APTAMI was a high-level early-stage R&D vision paper in which every other aspect of the AI is given an equally cursory treatment. (Note: Anyone who has read my Intro to Brain-Like AGI Safety series will notice that much of this post is awfully redundant with it—basically an abbreviated subset with various terminology changes to match the APTAMI nomenclature. And that's no coincidence! As mentioned, the APTAMI architecture was explicitly inspired by the brain.) 1. 
Background: the paper's descriptions of the “Intrinsic Cost module” For the reader's convenience, I'll copy everything specific that APTAMI says about the Intrinsic Cost module. (Emphasis in original.) PAGES 7-8: The Intrinsic Cost module is hard-wired (immutable, non trainable) and computes a single scalar, the intrinsic energy that measures the instantaneous “discomfort” of the agent – think pain (high intrinsic energy), pleasure (low or negative intrinsic energy), hunger, etc. The input to the module is the curren...
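APTAMI gives no pseudocode for the Intrinsic Cost module, so the following is only a toy sketch of the interface the quoted passage describes: a fixed, non-trainable function that maps the agent's current state to a single scalar "intrinsic energy," where higher values mean more "discomfort." The state fields, weights, and formula are invented for illustration and are not from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentState:
    # Hypothetical, hand-picked signals; APTAMI does not specify these.
    damage: float          # proxy for "pain"
    energy_deficit: float  # proxy for "hunger"
    novelty: float         # proxy for a curiosity-style drive

def intrinsic_cost(state: AgentState) -> float:
    """Hard-wired scalar 'intrinsic energy': higher means more 'discomfort'.

    The weights are fixed constants (nothing here is trainable), mirroring the
    paper's description of an immutable module; the particular formula is made up.
    """
    return 2.0 * state.damage + 1.0 * state.energy_deficit - 0.5 * state.novelty

# Toy usage: an agent in mild discomfort.
print(intrinsic_cost(AgentState(damage=0.1, energy_deficit=0.3, novelty=0.2)))  # 0.4
```

The post's open problem can be read directly off this sketch: the hard part is not computing such a scalar, but choosing a function (plus training environment and other design choices) for which one can argue the resulting agent would remain controllable or motivated by human welfare.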

The Nonlinear Library
LW - LeCun's “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem by Steven Byrnes

The Nonlinear Library

Play Episode Listen Later May 8, 2023 25:20


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LeCun's “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem, published by Steven Byrnes on May 8, 2023 on LessWrong. Summary This post is about the paper A Path Towards Autonomous Machine Intelligence (APTAMI) by Yann LeCun. It's a high-level sketch of an AI architecture inspired by the brain. APTAMI is mostly concerned with arguing that this architecture is a path towards more-capable AI. However, it is also claimed (both in the paper itself and in associated public communication) that this architecture is a path towards AI that is “controllable and steerable”, kind, empathetic, and so on. I argue that APTAMI is in fact, at least possibly, a path towards that latter destination, but only if we can solve a hard and currently-unsolved technical problem. This problem centers around the Intrinsic Cost module, which performs a role loosely analogous to “innate drives” in humans—e.g. pain being bad, sweet food being good, a curiosity drive, and so on. APTAMI does not spell out explicitly (e.g. with pseudocode) how to create the Intrinsic Cost module. It offers some brief, vague ideas of what might go into the Intrinsic Cost module, but does not provide any detailed technical argument that an AI with such an Intrinsic Cost would be controllable / steerable, kind, empathetic, etc. I will argue that, quite to the contrary, if we follow the vague ideas in the paper for building the Intrinsic Cost module, then there are good reasons to expect the resulting AI to be not only unmotivated by human welfare, but in fact motivated to escape human control, seek power, self-reproduce, etc., including by deceit and manipulation. Indeed, it is an open technical problem to write down any Intrinsic Cost function (along with training environment and other design choices) for which there is a strong reason to believe that the resulting AI would be controllable and/or motivated by human welfare, while also being sufficiently competent to do the hard intellectual tasks that we're hoping for (e.g. human-level scientific R&D). I close by encouraging LeCun himself, his colleagues, and anyone else to try to solve this open problem. It's technically interesting, very important, and we have all the information we need to start making progress now. I've been working on that problem myself for years, and I think I'm making more than zero progress, and if anyone reaches out to me I'd be happy to discuss the current state of the field in full detail. .And then there's an epilogue, which steps away from the technical discussion of the Intrinsic Cost module, and instead touches on bigger-picture questions of research strategy & prioritization. I will argue that the question of AI motivations merits much more than the cursory treatment that it got in APTAMI—even given the fact that APTAMI was a high-level early-stage R&D vision paper in which every other aspect of the AI is given an equally cursory treatment. (Note: Anyone who has read my Intro to Brain-Like AGI Safety series will notice that much of this post is awfully redundant with it—basically an abbreviated subset with various terminology changes to match the APTAMI nomenclature. And that's no coincidence! As mentioned, the APTAMI architecture was explicitly inspired by the brain.) 1. 
Background: the paper's descriptions of the “Intrinsic Cost module” For the reader's convenience, I'll copy everything specific that APTAMI says about the Intrinsic Cost module. (Emphasis in original.) PAGES 7-8: The Intrinsic Cost module is hard-wired (immutable, non trainable) and computes a single scalar, the intrinsic energy that measures the instantaneous “discomfort” of the agent – think pain (high intrinsic energy), pleasure (low or negative intrinsic energy), hunger, etc. The input to the module is the current state of t...

The Nonlinear Library
LW - [Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2 by Olivia Jimenez

The Nonlinear Library

Play Episode Listen Later Apr 28, 2023 13:24


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2, published by Olivia Jimenez on April 28, 2023 on LessWrong. I'm often surprised more people haven't read Open AI CEO Sam Altman's 2015 blog posts Machine Intelligence Part 1 & Part 2. In my opinion, they contain some of the most strong, direct, and clear articulations of why AGI is dangerous from a person at an AGI company. (Note that the posts were published before OpenAI was founded. There's a helpful wiki of OpenAI history here.) Hence: a linkpost. I've copied both posts directly below for convenience. I've also bolded a few of the lines I found especially noteworthy. Machine intelligence, part 1 This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it. If you're already afraid of machine intelligence, you can skip this one and read the second post tomorrow—I was planning to only write part 2, but when I asked a few people to read drafts it became clear I needed part 1. WHY YOU SHOULD FEAR MACHINE INTELLIGENCE Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared. It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it's still many, many years away. But it's also extremely hard to believe that it isn't very likely that it will happen at some point. SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn't care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don't want them to get in the way of our plans. (Incidentally, Nick Bostrom's excellent book “Superintelligence” is the best thing I've seen on this topic. It is well worth a read.) Most machine intelligence development involves a “fitness function”—something the program tries to optimize. At some point, someone will probably try to give a program the fitness function of “survive and reproduce”. Even if not, it will likely be a useful subgoal of many other fitness functions. It worked well for biological life. Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways. Evolution will continue forward, and if humans are no longer the most-fit species, we may go away. In some sense, this is the system working as designed. But as a human programmed to survive and reproduce, I feel we should fight it. How can we survive the development of SMI? It may not be possible. 
One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable. It's very hard to know how close we are to machine intelligence surpassing human intelligence. Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve i...

The Next Byte
113. The Secret Weapon In The Fight Against Rare Diseases

The Next Byte

Play Episode Listen Later Mar 14, 2023 24:52


(2:33) - Using Machine Learning to Detect Rare Diseases
This episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Click HERE to learn about how a) AI is being leveraged in healthcare and b) the tools available from vendors to empower development in this area.

Currents in Religion
Should We Abandon Inclusion? Erin Raffety on Disability and the Church

Currents in Religion

Play Episode Listen Later Jan 13, 2023 50:46


For many, the term “inclusion” is the end all for social justice efforts. But, in her new book, Erin Raffety suggests that “inclusion” doesn't work, at least in churches with disabled people. Listen to this quote: “The church is called apart from the world to repent of its ableism, disown its power, abandon inclusion, and pursue justice alongside disabled people.” Throughout her book she clarifies why inclusions falters and what justice might look like. She does this by interpreting scripture, drawing from her ethnographic research with congregations in Northeastern America, and engaging with disability activists and scholars. So, you'll get to hear about some of those things in our conversation. I'm excited for you to hear it. Erin Raffety is Associate Research Scholar at Princeton Theological Seminary and Research Fellow in Machine Intelligence & Pastoral Care at the Center for Theological Inquiry in Princeton, New Jersey. She is the author of From Inclusion to Justice, the book we're discussing today, which is out now through Baylor University Press. And I'm grateful also that Dr. Devan Stahl joined us for this conversation as a cohost. Devan is Assistant Professor of Religion here at Baylor University and author of a new book called Disability's Challenge to Theology (UND Press). You can listen to us discuss Devan's book in our episode "An Era of Soft Eugenics?" Resources for Further Education on Disability Visit Erin's curated list of resources on her website. Browse Baylor University Press's books on the topic.

Other Life
Writing in the Age of Machine Intelligence

Other Life

Play Episode Listen Later Jan 12, 2023 22:39


I spent a lot of time reading and thinking about AI this week. I'm especially interested in the implications it will have for writers and the so-called creator economy. Obviously things will change if it becomes nearly free to generate decently intelligent content with machines. But how will things change exactly, and how should writers spend their time now to position themselves for these changes? I think the implications are not obvious. Specifically, AI will increase the economic power of deep classical education, and truly unique artistic personality. In this podcast, I explain why. I wrote more about this at otherlife.co/writing-machines
Other Life
✦ Subscribe to the coolest newsletter in the world https://OtherLife.co
✦ Get a free Urbit ship at https://imperceptible.computer
✦ Join the community https://imperceptible.country
IndieThinkers.org
✦ If you're working on independent projects, join the next cohort of https://IndieThinkers.org

Accidental Gods
Data is the New Plastic! Ethics, Accuracy and AI with Dr John Collins of Machine Intelligence Garage

Accidental Gods

Play Episode Listen Later Nov 30, 2022 59:05


Dr John Collins worked for the UK's Central Electricity Generating Board in the days when such things were nationalised industries. His PhD involved creating a real-time dosimeter for workers in nuclear plants so they didn't have to wait two weeks to learn the results of the film-based dosimeters that were in use. In doing so, he saved the CEGB considerable amounts of money - and, more importantly, saved the lives and health of the men and women who worked there. Thus began a lifetime working at the leading edge of business, where innovation meets ethics and morality, so that now he is the Ethics and Responsible Innovation Advisor at Machine Intelligence Garage and on the Ethics Advisory Board at Digital Catapult. He's writing a book called 'A History of the Future in Seven Words.' With all this, he's an ideal person to open up the worlds of business, innovation and technology. In a wide-ranging, sparky, fun conversation, we explore what might make AI safe, how a future might look with sustainable business, whether 1.5 is 'still alive' and if that's even a useful metric - and how much power it takes to post an Instagram picture compared to making a plastic bottle (spoiler alert: it's the same power and the same CO2 generated - assuming both use the same power source and *if* the image is stored for 100 years... which, the way we're going, might not happen. But still...).
John on LinkedIn: https://www.linkedin.com/in/drjohnlcollins/
Digital Catapult: https://www.digicatapult.org.uk/

Engines of Our Ingenuity
Engines of Our Ingenuity 2544: Face Recognition

Engines of Our Ingenuity

Play Episode Listen Later Oct 26, 2022 3:48


Episode: 2544 How humans and computers recognize faces.  Today, UH math professor Krešo Josić recognizes your face.

HumAIn
Steven Banerjee: How Machine Intelligence, NLP and AI is changing Health Care

HumAIn

Play Episode Listen Later Sep 21, 2022 30:39


Steven Banerjee: How Machine Intelligence, NLP and AI is changing Health Care  [Audio] Podcast: Play in new window | DownloadSubscribe: Google Podcasts | Spotify | Stitcher | TuneIn | RSSSteven Banerjee is the CEO of NExTNet Inc. NExTNet is a Silicon Valley based technology startup pioneering natural language based Explainable AI platform to accelerate drug discovery and development. Steven is also the founder of Mekonos, a Silicon Valley based biotechnology company backed by world-class Institutional investors (pre-Series B) — pioneering proprietary cell and gene-engineering platforms to advance personalized medicine. He also advises Lumen Energy, a company that uses a radically simplified approach to deploy commercial solar. Lumen Energy makes it easy for building owners to get clean energy.  Please support this podcast by checking out our sponsors:Episode Links:  Steven Banerjee LinkedIn: https://www.linkedin.com/in/steven-banerjee/ Steven Banerjee Website: https://www.nextnetinc.com/ Podcast Details: Podcast website: https://www.humainpodcast.com Apple Podcasts: https://podcasts.apple.com/us/podcast/humain-podcast-artificial-intelligence-data-science/id1452117009 Spotify: https://open.spotify.com/show/6tXysq5TzHXvttWtJhmRpS RSS: https://feeds.redcircle.com/99113f24-2bd1-4332-8cd0-32e0556c8bc9 YouTube Full Episodes: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag YouTube Clips: https://www.youtube.com/channel/UCxvclFvpPvFM9_RxcNg1rag/videos Support and Social Media:  – Check out the sponsors above, it's the best way to support this podcast– Support on Patreon: https://www.patreon.com/humain/creators – Twitter: https://twitter.com/dyakobovitch – Instagram: https://www.instagram.com/humainpodcast/ – LinkedIn: https://www.linkedin.com/in/davidyakobovitch/ – Facebook: https://www.facebook.com/HumainPodcast/ – HumAIn Website Articles: https://www.humainpodcast.com/blog/ Outline: Here's the timestamps for the episode: (05:20)- So I am a mechanical engineer by training. And I started my graduate research in semiconductor technologies with applications in biotech almost more than a decade ago, in the early 2010s. I was a Doctoral Fellow at IBM labs here in San Jose, California. And then I also ended up writing some successful federal grants with a gene sequencing pioneer at Stanford, and Ron Davis, before I went, ended up going to UC Berkeley for grad school research, and then I became a visiting researcher.  (09:28)- An average cost of bringing a drug to market is around $2.6 billion. It takes around 10 to 15 years, like from the earliest days of discovery, to launching into the market. And unfortunately, more than 96% of all drug R&D actually fails . This is a really bad social model. This creates this enormous burden on our society and our healthcare spending as well. One of the reasons I started NextNet was when I was running Mekonos, I kept on seeing a lot of our customers had this tremendous pain point of, where you go, there's all this demand and subject matter experts, as scientists, they're actually working with very little of the available biomedical evidence out there. And a lot of the times that actually leads to false discoveries. (13:40)- And so there are tools, they're all this plethora of bioinformatics tools and software and databases out there that are plagued with program bugs. They mostly lack documentation or have very complicated documentation and best, very technical UI's. 
And for an average scientist or an average person in this industry, you really need to have a fairly deep grasp or a sophisticated understanding of database schemas and SQL querying and statistical modeling and coding and data science.  (22:36)- So, a transformer is potentially one of the greatest breakthroughs that has happened in NLP recently. It's basically a neural net architecture that was incorporated into NLP models by Google Brain researchers that came along in 2017 and 2018. And before transformers, your state of the art models and NLP basically were like, LSTM, like long term memories are the widely used architecture. (27:24)- So Sapiens is, our goal here is to really make biomedical data accessible and useful for scientific inquiry, using this platform, so that, your average person and industry, let's say a wet lab or dry lab scientist, or a VP of R&D or CSO, or let's say a director of research can ask and answer complex biological questions. And a better frame hypothesis to understand is very complex, multifactorial diseases. And a lot of the insights that Sapiens is extracting from all this, with publicly available data sources are proprietary to the company. And then you can map and upload your own internal data, and begin to really contextualize all that information, by uploading onto the Sapiens.  (31:34)- We are definitely looking for early adopters. This includes biotech companies, pharma, academic research labs, that would like to test out Sapiens and like this to be a part of their journey of their biomedical R&D. We're definitely, as I said, looking for investors who would like to partner with us, as we continue on this journey of building this probably one of the most sophisticated natural language based platforms, or as we call it, an excellent AI platform.  Advertising Inquiries: https://redcircle.com/brandsPrivacy & Opt-Out: https://redcircle.com/privacy

Hidden Forces
Machine Intelligence, Humanity, & the Will to Power | Demetri Kofinas

Hidden Forces

Play Episode Listen Later Aug 29, 2022 31:36


In Episode 268 of Hidden Forces, Demetri Kofinas replays a monologue that he wrote and first published nearly five years ago about the future of humanity, machine intelligence, and the power process. When he began Hidden Forces, Demetri was primarily concerned with exploring some of the recent advancements in technology and in humanity's scientific understanding of both the external world and the internal one that gives rise to the human experience. He believed (and still does) that many of the socio-cultural and political changes we were experiencing in the world were being strongly influenced and shaped by these advancements, which is why so many of our early episodes were philosophical in nature, dealing with questions of epistemology, identity, and meaning. In preparing for his recent conversation with director Alex Lee Moyer, Demetri was inspired to go back and listen to a monologue that we published nearly five years ago and which dealt directly with some of these more philosophical questions. The monologue was rooted in a set of observations about the role of digital technology and machine intelligence in shaping our experience of the world, our sense of agency, and our very notions of what it means to be a human being. The monologue is even more relevant today than it was in 2017 when he first wrote it. We are living through an extraordinary moment in human history. Nothing short of our humanity is at stake, whether from the growing possibility of total war, the creeping security state and surveillance capitalism, or even just the abject apathy and nihilism that seem to be infecting more and more of society. If you are a listener of this show, then these issues concern you directly, and while you may not be the CEO of Google or the head of the Federal Communications Commission, your input, activism, and engagement on these issues will be a decisive factor in determining not only the quality of our future, but the quality of the world that we leave for our children and grandchildren. You can access the full episode, transcript, and intelligence report for this week's conversation by going directly to the episode page at HiddenForces.io and clicking on "premium extras." All subscribers gain access to our premium feed, which can be easily added to your favorite podcast application. If you enjoyed listening to today's episode of Hidden Forces, you can help support the show by doing the following: Subscribe on Apple Podcasts | YouTube | Spotify | Stitcher | SoundCloud | CastBox | RSS Feed. Write us a review on Apple Podcasts & Spotify. Subscribe to our mailing list at https://hiddenforces.io/newsletter/ Producer & Host: Demetri Kofinas. Editor & Engineer: Stylianos Nicolaou. Subscribe & Support the Podcast at https://hiddenforces.io Join the conversation on Facebook, Instagram, and Twitter at @hiddenforcespod. Follow Demetri on Twitter at @Kofinas. Episode Recorded on 12/15/2017

Town Hall Seattle Science Series
186. Blaise Aguera y Arcas and Melanie Mitchell with Lili Cheng: How Close Are We to AI?

Town Hall Seattle Science Series

Play Episode Listen Later Jul 29, 2022 84:57


Thu 7/14, 2022, 7:30pm. Blaise Agüera y Arcas and Melanie Mitchell with Lili Cheng: How Close Are We to AI? BUY THE BOOKS: Ubi Sunt by Blaise Agüera y Arcas; Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell. Artificial Intelligence (AI), a term first coined at a Dartmouth workshop in 1956, has seen several boom-and-bust cycles over the last 66 years. Is the current boom different? The most exciting advance in the field since 2017 has been the development of "Large Language Models," giant neural networks trained on massive databases of text on the web. Still highly experimental, Large Language Models haven't yet been deployed at scale in any consumer product; smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches. Large Language Models do far better at routine tasks involving language processing than their predecessors. Although not always reliable, they can give a strong impression of really understanding us and holding up their end of an open-ended dialog. Unlike previous forms of AI, which could only perform specific jobs involving rote perception, classification, or judgment, Large Language Models seem to be capable of a lot more, including possibly passing the Turing Test, named after computing pioneer Alan Turing's thought experiment, which posits that when an AI in a chat can't reliably be distinguished from a human, it will have achieved general intelligence. But can Large Language Models really understand anything, or are they just mimicking the superficial "form" of language? What can we say about our progress toward creating real intelligence in a machine? What do "intelligence" and "understanding" even mean? Blaise Agüera y Arcas, a Fellow at Google Research, and Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, take on these thorny questions in a wide-ranging presentation and discussion. The discussion will be moderated by Lili Cheng, Corporate Vice President of the Microsoft AI and Research division. Blaise Agüera y Arcas is a VP and Fellow at Google Research, where he leads an organization working on basic research and new products in Artificial Intelligence. His team focuses on the intersection of machine learning and devices, developing AI that augments humanity while preserving privacy. One of the team's technical contributions is Federated Learning, an approach to training neural networks in a distributed setting that avoids sending user data off-device. Blaise also founded Google's Artists and Machine Intelligence program and has been an active participant in cross-disciplinary dialogs about AI and ethics, fairness and bias, policy, and risk. He has given TED talks on Seadragon and Photosynth (2007, 2012), Bing Maps (2010), and machine creativity (2016). In 2008, he was awarded MIT's TR35 prize. Melanie Mitchell is the Davis Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems.
Her book Complexity: A Guided Tour won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans. Lili Cheng is a Corporate Vice President of the Microsoft AI and Research division, responsible for the AI developer platform, which includes Cognitive Services and Bot Framework. Prior to Microsoft, Lili worked in Apple Computer's Advanced Technology Group on the user interface research team, where she focused on QuickTime Conferencing and QuickTime VR. Lili is also a registered architect, having worked in Tokyo and Los Angeles for Nihon Sekkei and Skidmore Owings and Merrill on commercial urban design and large-scale building projects. She has also taught at New York University and Harvard University. Presented by Town Hall Seattle. To become a member or make a donation, click here.
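The speaker bio above mentions Federated Learning, training models without sending user data off-device. The snippet below is a deliberately simplified sketch of the federated averaging idea, in which clients fit a model on their own local data and only the resulting weights are averaged centrally; it is not Google's implementation, and the linear model, client datasets, and hyperparameters are invented for the example.

```python
# Minimal federated averaging sketch (illustrative, not a production system).
# Each "client" keeps its data local, trains briefly, and only the resulting
# weights are averaged by the server.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Invented local datasets: raw data never leaves the client in this scheme.
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, steps=5):
    """Plain gradient descent on the client's own data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):
    # Server broadcasts w_global; clients return locally updated weights.
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)   # federated averaging step

print("estimated weights:", w_global)           # should approach [2, -1]
```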

Philosophy for our times
Human justice and machine intelligence | Joanna Bryson

Philosophy for our times

Play Episode Listen Later Jul 12, 2022 18:46


Should we be scared of AI? Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes Joanna Bryson discusses how she became interested in the ways different species use intelligence, how the typical tropes in science fiction misunderstand AI, and the problem of anthropomorphism. In this interview, Bryson discusses the most pressing ethical challenges concerning the future of artificial intelligence and whether or not we can stabilize democracy when we have so much information about each other. She also touches on how the problems that arise with AI aren't always to do with the technology itself but with the social conditions that often produce it. Joanna Bryson is a professor at the Hertie School in Berlin. She works on Artificial Intelligence, ethics, and collaborative cognition. She advises governments, corporations, and other agencies globally, particularly on AI policy. There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=human-justice-and-machine-intelligence See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Mind Matters
Why Computers Will Never Understand What They are Doing

Mind Matters

Play Episode Listen Later Jul 7, 2022 19:54


Can computers be sentient? Are there things which humans can do that computers can’t? Is artificial intelligence really creative? Robert J. Marks talks about his new book Non-Computable You: What You Do That Artificial Intelligence Never Will with talk show host Bill Meyer. Additional Resources Hear Bill's podcasts at www.BillMeyerShow.com (broadcast from KMED / KCMD, Medford, OR). Purchase Robert J. Marks’… Source

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Studying Machine Intelligence with Been Kim - #571

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later May 9, 2022 52:13


Today we continue our ICLR coverage joined by Been Kim, a staff research scientist at Google Brain and an ICLR 2022 Invited Speaker. Been, whose research has historically been focused on interpretability in machine learning, delivered the keynote "Beyond interpretability: developing a language to shape our relationships with AI," which explores the need to study AI machines as scientific objects, both in isolation and with humans; doing so will provide principles for building tools, and it is also necessary to take our working relationship with AI to the next level. Before we dig into Been's talk, she characterizes where we are as an industry and community with interpretability, and what the current state of the art is for interpretability techniques. We explore how the Gestalt principles appear in neural networks, Been's choice to characterize communication with machines as a language as opposed to a set of principles or foundational understanding, and much, much more. The complete show notes for this episode can be found at twimlai.com/go/571
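Interpretability, Been Kim's field, is broadly about probing what a trained model's predictions depend on. As one small, generic illustration, not a technique taken from the episode, the sketch below computes a gradient-based saliency map for a toy classifier: the gradient of the top class score with respect to the input highlights the features the prediction is most sensitive to. The network and input are invented for the example.

```python
# Illustrative gradient-based saliency for a toy classifier (PyTorch).
# Not from the episode; the network and input are made up for the example.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # one toy input with 10 features
scores = model(x)                            # (1, 3) class scores
scores.max().backward()                      # gradient of the top class score

# Large gradient magnitudes mark the input features the prediction is most
# sensitive to.
saliency = x.grad.abs().squeeze()
print(saliency)
```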

Thoth-Hermes Podcast
Season 8-Episode 4 – Machine Intelligence-Luke Lafitte

Thoth-Hermes Podcast

Play Episode Listen Later Mar 20, 2022 109:16


Luke Lafitte, J.D., Ph.D., is a trial attorney, an American history teacher, a Christian mystic, and co-founder of Dead White Zombies, an award-winning theater group in Dallas, Texas. Partner in a leading law firm in Dallas, he is the author of the three-volume series Chronicles of a Curious Mind. From the book cover: Humans invented and constructed machines to aid them as far back as the Stone Age. As the machines became more complex, they became extensions of the body and mind, and we naturally began projecting consciousness onto them. As Luke Lafitte shows in detail, although machines complicate the already complicated issue of identity, because they are "ours" and "of us," they are part of our spiritual development. In this sweeping exploration of the history of the machine as a tool, as a transpersonal object to assist human activity, and as a transitional artifact between spirits and the humans who interact with them, Lafitte examines the role that machines play in the struggle between "spiritual man" and "mechanical-man" throughout history. He interprets the messages, archetypes, and language of the unconscious in the first popular stories related to mechanical-men, and he demonstrates a direct connection between consciousness and the history of machines in American history, specifically between the inventors of these machines and the awakening of our imaginations and our powers of manifestation. He examines the influence of Philip K. Dick, Nikola Tesla, Thomas Edison, Grace Hopper, Richard Feynman, Elon Musk, David Bohm, and others and shows how the Nag Hammadi gospels explain how we can take back our myth and spirit from the machine. Although the term mechanical-man is a catch-all phrase, Lafitte shows that the term is also a meeting ground where extra-dimensional communications between different forms of matter occur. Every machine, android, robot, and cyborg arose from consciousness, and these mechanical-men, whether real or fictive, offer us an opportunity to free ourselves from enslavement to materialism and awaken our imaginations to create our own realities. Luke Lafitte's author page with Inner Traditions, and all of his books on Amazon.com. The Dallas theatre group "Dead White Zombies", co-founded by Luke Lafitte. Music played in this episode: This episode brings the musical return of FRATER F, whose music we have already presented on several previous episodes since 2019! 1) THE DRAGON INVOCATION from the album "The Caves of Qliphoth" (2019) (Track starts at 8:05) 2) FADING - WITHER - PERISH, three shorter single tracks from 2021 (Tracks start at 56:47) 3) THRICE, single track from 2018 (Track starts at 1:39:45) Intro and Outro Music especially written and recorded for the Thoth-Hermes Podcast by Chris Roberts

DeepMind: The Podcast
The road to AGI

DeepMind: The Podcast

Play Episode Listen Later Feb 15, 2022 32:58


Hannah meets DeepMind co-founder and chief scientist Shane Legg, the man who coined the phrase 'artificial general intelligence', and explores how it might be built. Why does Shane think AGI is possible? When will it be realised? And what could it look like? Hannah also explores a simple theory of using trial and error to reach AGI and takes a deep dive into MuZero, an AI system which mastered complex board games from chess to Go, and is now generalising to solve a range of important tasks in the real world. For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com. Interviewees: DeepMind's Shane Legg, Doina Precup, Dave Silver & Jackson Broshear. Credits: Presenter: Hannah Fry; Series Producer: Dan Hardoon; Production support: Jill Achineku; Sound design: Emma Barnaby; Music composition: Eleni Shaw; Sound Engineer: Nigel Appleton; Editor: David Prest; Commissioned by DeepMind. Thank you to everyone who made this season possible!
Further reading:
Real-world challenges for AGI, DeepMind: https://deepmind.com/blog/article/real-world-challenges-for-agi
An executive primer on artificial general intelligence, McKinsey: https://www.mckinsey.com/business-functions/operations/our-insights/an-executive-primer-on-artificial-general-intelligence
Mastering Go, chess, shogi and Atari without rules, DeepMind: https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules
What is AGI?, Medium: https://medium.com/intuitionmachine/what-is-agi-99cdb671c88e
A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329
Reward is enough by David Silver, ScienceDirect: https://www.sciencedirect.com/science/article/pii/S0004370221000862
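The episode describes trial and error as a route toward more general agents, with MuZero as the flagship example. The snippet below is only a bare-bones illustration of trial-and-error learning, tabular Q-learning on a tiny invented chain environment; it is not MuZero, whose learned model and tree search are far more sophisticated.

```python
# Bare-bones trial-and-error learning: tabular Q-learning on a toy 5-state
# chain. Purely illustrative; unrelated to MuZero's actual algorithm.
import random

N_STATES, ACTIONS = 5, (0, 1)          # action 0 = left, 1 = right
alpha, gamma, epsilon, episodes = 0.1, 0.9, 0.2, 500
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Move along the chain; reaching the right end yields reward 1."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(episodes):
    state, done = 0, False
    while not done:
        # Explore sometimes, otherwise exploit the current value estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

print(Q)   # the "go right" action should end up with the higher values
```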

DeepMind: The Podcast
Speaking of intelligence

DeepMind: The Podcast

Play Episode Listen Later Jan 25, 2022 38:12


Hannah explores the potential of language models, the questions they raise, and whether teaching a computer about language is enough to create artificial general intelligence (AGI). Beyond helping us communicate ideas, language plays a crucial role in memory, cooperation, and thinking, which is why AI researchers have long aimed to communicate with computers using natural language. Recently, there has been extraordinary progress using large language models (LLMs), which learn how to speak by processing huge amounts of data from the internet. The results can be very convincing, but they pose significant ethical challenges. For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com. Interviewees: DeepMind's Geoffrey Irving, Chris Dyer, Angeliki Lazaridou, Lisa-Anne Hendriks & Laura Weidinger. Credits: Presenter: Hannah Fry; Series Producer: Dan Hardoon; Production support: Jill Achineku; Sound design: Emma Barnaby; Music composition: Eleni Shaw; Sound Engineer: Nigel Appleton; Editor: David Prest; Commissioned by DeepMind. Thank you to everyone who made this season possible!
Further reading:
GPT-3 Powers the Next Generation of Apps, OpenAI: https://openai.com/blog/gpt-3-apps/
https://web.stanford.edu/class/linguist238/p36-weizenabaum.pdf
Never Mind the Computer (1983), about the ELIZA program, BBC: https://www.bbc.co.uk/programmes/p023kpf8
How Large Language Models Will Transform Science, Society, and AI, Stanford University: https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai
Challenges in Detoxifying Language Models, DeepMind: https://deepmind.com/research/publications/2021/Challenges-in-Detoxifying-Language-Models
Extending Machine Language Models toward Human-Level Language Understanding, DeepMind: https://deepmind.com/research/publications/2020/Extending-Machine-Language-Models-toward-Human-Level-Language-Understanding
Language modelling at scale, DeepMind: https://deepmind.com/blog/article/language-modelling-at-scale
Artificial general intelligence, Technology Review: https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/
A Definition of Machine Intelligence by Shane Legg, arXiv: https://arxiv.org/abs/0712.3329
Stuart Russell - Living With Artificial Intelligence, BBC: https://www.bbc.co.uk/programmes/m001216k/episodes/player
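The description notes that large language models "learn how to speak" by processing huge amounts of text. At a vastly smaller scale, the sketch below shows the core idea of next-token prediction using bigram counts over a tiny invented corpus; real LLMs learn the same conditional distribution with large neural networks trained on web-scale data.

```python
# Toy next-token prediction with bigram counts. The corpus is invented and
# tiny; large language models learn the same conditional distribution
# p(next token | context) with neural networks over web-scale text.
from collections import Counter, defaultdict

corpus = "language plays a crucial role in memory cooperation and thinking".split()

# Count how often each token follows each other token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next token after `word`, if any."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("language"))   # -> "plays"
```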