Intelligence demonstrated by machines
Today, Ceri speaks to the extraordinary Memo Akten — artist, researcher, computer scientist. For more than a decade, he has worked with emerging technologies, AI, Big Data, and our collective consciousness as scraped and shaped by the internet, to explore consciousness, perception, ecology and the politics of our techno-lifestyles. He won the Golden Nica at Prix Ars Electronica, became Google's first artist-in-residence in their Artists & Machine Intelligence programme, and has exhibited at the Venice Biennale, Tribeca, the Barbican, ACMI, the Mori Art Museum, and the Academy Museum in LA. His collaborations span U2, Lenny Kravitz, Depeche Mode, Max Cooper, Richard Dawkins, Google, Apple and McLaren.

KEY TAKEAWAYS
Technology is never neutral. It shapes us as much as we shape it. Memo reminds us that behind every dataset is a culture, behind every model is a worldview, and behind every technological leap is a chain of ecological, political and emotional consequences.
The world can only meet your ideas if you let them out of hiding. Memo's story is a masterclass in releasing the work before you feel ready.
If you are wrestling your way through a project, remember: the destination is just the documentary still. Gathering the threads that eventually become something whole is where the real art is.

BEST MOMENTS
"We can use technology to understand ourselves more deeply, to pay attention to the world more carefully, and to ask bigger, braver questions."
"I very rarely begin a project with an end goal of 'this is what it should look like' in mind. I usually begin with 'this is how I want it to behave.'"

AN UNMISSABLE OFFER
If the art world feels confusing, you're not imagining it. Most artists are guessing their way through it and staying stuck far longer than they need to. Inside the Ceri Hand Coaching Membership, you get straight answers and real support. Each week, I run live sessions where you can bring any problem and I'll help you cut through it fast — creative blocks, pitches, pricing, all of it. You'll get coaching with me, the chance to host or attend a virtual studio visit, portfolio reviews, monthly art world experts, and a community who genuinely get it. It's the kind of guidance most artists wish they'd had years ago. Right now, you can join or gift a full year for £99, our only discount of the year, available until the first of January. Join the Membership, or gift it to someone who needs it. We'll get there faster together. Just click here: cerihand.com/membership.

EPISODE RESOURCES
https://www.memo.tv
https://www.instagram.com/memo_akten

HOST BIO
With over 35 years in the art world, Ceri has worked closely with leading artists and arts professionals, managed public and private galleries and charities, and curated more than 250 exhibitions and events. She has sold artworks to major museums and private collectors and commissioned thousands of works across diverse media, from renowned artists such as John Akomfrah, Pipilotti Rist, Rafael Lozano-Hemmer and Vito Acconci. Now, she wants to share her extensive knowledge with you, so you can excel and achieve your goals.

**
Artworld Network Self-Study Course
Our self-study video course, 'Unlock Your Artworld Network', offers a straightforward 5-step framework to help you build valuable relationships effortlessly. Gain the tools and confidence you need to create new opportunities and thrive in the art world today.
https://cerihand.com/courses/unlock_your_artworld_network/

**
Book a Discovery Call
To schedule a personalised 1-2-1 coaching session with Ceri or explore our group coaching options, simply email us at hello@cerihand.com

**
This podcast has been brought to you by Disruptive Media. https://disruptivemedia.co.uk/
AI & Ecommerce: The Infinite Loop—How Machine Intelligence and Shopping Are Rewiring Each Other's Future
Cisco's Vijoy Pandey - SVP & GM of Outshift by Cisco - explains how AI agents and quantum networks could completely redefine how software, infrastructure, and security function in the next decade.

You'll learn:
→ What "Agentic AI" and the "Internet of Agents" actually are
→ How Cisco open-sourced the Internet of Agents framework and why decentralization matters
→ The security threat of "store-now, decrypt-later" attacks — and how post-quantum cryptography will defend against them
→ How Outshift's "freedom to fail" model fuels real innovation inside a Fortune-500 company
→ Why the next generation of software will blur the line between humans, AI agents, and machines
→ The vision behind Cisco's Quantum Internet — and two real-world use cases you can see today: Quantum Sync and Quantum Alert

About Today's Guest:
Meet Vijoy Pandey, the mind behind Cisco's Outshift — a team pushing the boundaries of what's next in AI, quantum computing, and the future internet. With 80+ patents to his name and a career spent redefining how systems connect and think, he's one of the few leaders truly building the next era of computing before the rest of us even see it coming.

Key Moments:
00:00 Meet Vijoy Pandey & Outshift's mission
04:30 The two hardest problems in computer science: superintelligence & quantum computing
06:30 Why "freedom to fail" is Cisco's innovation superpower
10:20 Inside the Outshift model: incubating like a startup inside Cisco
21:00 What is Agentic AI? The rise of the Internet of Agents
27:00 AGNTCY.org and open-sourcing the Internet of Agents
32:00 What would an Internet of Agents actually look like?
38:19 Responsible AI & governance: putting guardrails in early
49:40 What is quantum computing? What is quantum networking?
55:27 The vision for a global Quantum Internet

Watch Next: https://youtu.be/-Jb2tWsAVwI?si=l79rdEGxB-i-Wrrn

--

This episode of IT Visionaries is brought to you by Meter - the company building better networks. Businesses today are frustrated with outdated providers, rigid pricing, and fragmented tools. Meter changes that with a single integrated solution that covers everything wired, wireless, and even cellular networking. They design the hardware, write the firmware, build the software, and manage it all so your team doesn't have to. That means you get fast, secure, and scalable connectivity without the complexity of juggling multiple providers. Thanks to Meter for sponsoring. Go to meter.com/itv to book a demo.

---

IT Visionaries is made by the team at Mission.org. Learn more about our media studio and network of podcasts at mission.org. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
On this episode of Mind Matters News, host Robert J. Marks speaks with Dr. Georgios Mappouras about his proposal for a more rigorous test for measuring artificial intelligence. Mappouras argues that the original Turing Test is not enough to determine true intelligence in AI systems, as it focuses too much on simulating human-like conversation rather than demonstrating genuine understanding and problem-solving.
- Infowars Drama and Owen Shroyer's Departure
- Infowars' Struggles and Radical Transparency
- Alex Jones' Accuracy and Criticism of Corporate Media
- Owen Shroyer's Role and Mike Adams' Future Plans
- Economic Impact of Nord Stream Pipeline Destruction
- Power of Siberia Pipeline and China's Energy Future
- US-China Economic and Military Strategy
- Impact of Machine Intelligence on the US Economy
- Michael Yon's Analysis of Global Trade Routes
- Human Terraforming and Demographic Shifts
- Global Demographic Changes and Terraforming
- Historical Context and Modern Implications
- Economic and Political Instability
- Geopolitical Rivalries and Strategic Moves
- The Role of Data Centers and AI in Global Competition
- The Impact of US Policies on Global Alliances
- The Future of Global Trade and Economic Stability

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you; as always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
Dr. Michael Timothy Bennett is a computer scientist who's deeply interested in understanding artificial intelligence, consciousness, and what it means to be alive. He's known for his provocative paper "What the F*** is Artificial Intelligence", which challenges conventional thinking about AI and intelligence.

*** SPONSOR MESSAGES ***
Prolific: Quality data. From real people. For faster breakthroughs.
https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=mb

Michael takes us on a journey through some of the biggest questions in AI and consciousness. He starts by exploring what intelligence actually is, settling on the idea that it's about "adaptation with limited resources" (a definition from researcher Pei Wang that he particularly likes). The discussion ranges from technical AI concepts to philosophical questions about consciousness, with Michael offering fresh perspectives that challenge Silicon Valley's "just scale it up" approach to AI. He argues that true intelligence isn't just about having more parameters or data; it's about being able to adapt efficiently, like biological systems do.

TOC:
1. Introduction & Paper Overview [00:01:34]
2. Definitions of Intelligence [00:02:54]
3. Formal Models (AIXI, Active Inference) [00:07:06]
4. Causality, Abstraction & Embodiment [00:10:45]
5. Computational Dualism & Mortal Computation [00:25:51]
6. Modern AI, AGI Progress & Benchmarks [00:31:30]
7. Hybrid AI Approaches [00:35:00]
8. Consciousness & The Hard Problem [00:39:35]
9. The Diverse Intelligences Summer Institute (DISI) [00:53:20]
10. Living Systems & Self-Organization [00:54:17]
11. Closing Thoughts [01:04:24]

Michael's socials:
https://michaeltimothybennett.com/
https://x.com/MiTiBennett

Transcript:
https://app.rescript.info/public/share/4jSKbcM77Sf6Zn-Ms4hda7C4krRrMcQt0qwYqiqPTPI

References:
Bennett, M.T. "What the F*** is Artificial Intelligence" https://arxiv.org/abs/2503.23923
Bennett, M.T. "Are Biological Systems More Intelligent Than Artificial Intelligence?" https://arxiv.org/abs/2405.02325
Bennett, M.T. PhD thesis, "How To Build Conscious Machines" https://osf.io/preprints/thesiscommons/wehmg_v1
Legg, S. & Hutter, M. (2007). "Universal Intelligence: A Definition of Machine Intelligence"
Wang, P. "Defining Artificial Intelligence" - on non-axiomatic reasoning systems (NARS)
Chollet, F. (2019). "On the Measure of Intelligence" - introduces the ARC benchmark and developer-aware generalization
Hutter, M. (2005). "Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability"
Chalmers, D. "The Hard Problem of Consciousness"
Descartes, R. - Cartesian dualism and the pineal gland theory (historical context)
Friston, K. - Free Energy Principle and Active Inference framework
Levin, M. - work on collective intelligence, cancer as information isolation, and "mind blindness"
Hinton, G. (2022). "The Forward-Forward Algorithm" - introduces the mortal computation concept
Alexander Ororbia & Friston - formal treatment of mortal computation
Sutton, R. "The Bitter Lesson" - on search and learning in AI
Pearl, J. "The Book of Why" - causal inference and reasoning

Alternative AGI approaches:
Wang, P. - NARS (Non-Axiomatic Reasoning System)
Goertzel, B. - Hyperon system and modular AGI architectures

Benchmarks & evaluation:
Hendrycks, D. - Humanity's Last Exam benchmark (mentioned re: saturation)

Filmed at:
Diverse Intelligences Summer Institute (DISI) https://disi.org/
WarRoom Battleground EP 834: Machine Intelligence, Artificial Idiocracy, And A World On Edge
Dave Hatz, VP of Machine Intelligence at CTI, a major AV integrator and events company, shares his journey from AV programming into AI leadership.

- How AI is reshaping the AV integrator's role, from hardware-centric deployments to software-driven, experience-focused solutions
- The shift from complex, bespoke AV setups to scalable, software-upgradable Microsoft Teams Rooms, and how this enables sustainability and innovation
- The growing importance of AI features like intelligent cameras, transcription, and meeting recap, and how AV professionals can support these capabilities
- From personal productivity tools to organisational workflows, Dave discusses how CTI helps clients integrate AI

Thanks to Neat for sponsoring this podcast episode and for their ongoing support.
Raphaël Raux's 2025 Harvard Horizon project, "Human Learning about AI," conducted in collaboration with fellow PhD student Bnaya Dreyfuss, explores how people often assume AI thinks like a human, which can lead to confusion about what these systems can and can't do. As a PhD candidate in economics at Harvard, Raux studies the complex relationship between how humans think and how artificial intelligence works. His research challenges common assumptions about AI and encourages a clearer, more realistic understanding of the technology. In his April 2025 talk at the annual Harvard Horizons Symposium, Raux shared insights from his work, which he hopes will support smarter decisions about how we use AI and help guide its development in ways that benefit both the economy and society.
As AI tools become more advanced, what does effective project leadership look like? In this episode, Rich Maltzman, co-author of AI-Powered Leadership, discusses what effective project leadership looks like in an AI-driven world. He emphasizes the need for a collaborative approach between human judgment and machine intelligence, advocating for a “both/and” mindset that blends human insight with technological precision.
Can true Machine Intelligence be created? In this story, several computer experts ask that question with surprising results.
Dr. Craig Martell served as the Chief Digital and Artificial Intelligence Officer for the DOD from June 2022 through April 2024. While at the Pentagon, he helped the Department of Defense modernize their approach to employing software. He now works as the Chief AI Officer for Cohesity, a cybersecurity startup that helps companies secure, analyze, and manage their data. In this episode of the Defense Tech Underground, we discuss Dr. Martell's path from teaching computer science to leading a major Pentagon office, his early career in big tech at the dawn of AI, his concerns about the use of generative AI in warfare, and how tech startups can be effective by innovating alongside warfighters. This episode is hosted by Jeff Phaneuf and Andrew Paulmeno. Full Bio: Dr. Craig Martell is the former Chief Digital and Artificial Intelligence Officer for the United States Department of Defense. As Chief AI Officer of Cohesity, Craig shapes Cohesity's technical vision—and defines and executes a strategic roadmap for the company's future. Craig brings extensive industry and public sector experience and expertise in artificial intelligence (AI) and machine learning to his role. Most recently, as the first Chief Digital and Artificial Intelligence Officer (CDAO) for the U.S. Department of Defense, Craig accelerated the adoption of data, analytics, digital solutions, and AI functions. Prior to the DoD, he held senior roles at several leading technology companies. He served as Head of Machine Learning at Lyft, Head of Machine Intelligence at Dropbox, and was a leader of numerous AI teams and initiatives at LinkedIn. Craig was also a tenured computer science professor at the Naval Postgraduate School specializing in natural language processing. He holds a Ph.D. in Computer Science from the University of Pennsylvania.
Join Tom Shaughnessy as he hosts Travis Good, CEO and co-founder of Ambient, for a deep dive into the world's first useful proof-of-work blockchain powered by AI. Fresh out of stealth, Ambient reimagines the intersection of crypto and AI by creating a decentralized network where mining secures the chain through verified AI inference on a 600B+ parameter model.
Sylvia Martinez was an aerospace engineer before becoming an educational software producer and vice president of a video game company. She spent a decade as the President of Generation YES, the groundbreaking non-profit that provides educators with the tools necessary to place students in leadership roles in their schools and communities. In addition to leading workshops, Sylvia delights and challenges audiences as a keynote speaker at major conferences around the world. She brings her real-world experience in highly innovative work environments to learning organizations that wish to change STEM education to be more inclusive, effective, and engaging.

Sylvia is co-author of Invent To Learn: Making, Tinkering, and Engineering in the Classroom, often called the "bible" of the classroom maker movement. She runs the book publishing arm of CMK Futures, Constructing Modern Knowledge Press, to continue to publish books about creative education by educators.

Ken Kahn has been interested in AI and education for 50 years. His 1977 paper "Three interactions between AI and education" (in E. Elcock and D. Michie, editors, Machine Intelligence 8: Machine Representations of Knowledge) may be among the first publications on the topic. He received his doctorate from the MIT AI Lab in 1979. He designed and implemented ToonTalk, a programming language for children that looks and feels like a video game. He has developed a large collection of AI programming resources for school students (https://ecraft2learn.github.io/ai/). He recently retired as a senior researcher from the University of Oxford.

Links:
https://constructingmodernknowledge.com/about-the-cmk-hosts/
https://sylviamartinez.com
https://www.linkedin.com/posts/garystager_ken-kahn-speaks-with-sylvia-martinez-about-activity-7303865110035341313-BcUl
https://uk.linkedin.com/in/ken-kahn-997a225

Hosted on Acast. See acast.com/privacy for more information.
In this fascinating episode, we dive deep into the race towards true AI intelligence, AGI benchmarks, test-time adaptation, and program synthesis with star AI researcher (and philosopher) Francois Chollet, creator of Keras and the ARC AGI benchmark, and Mike Knoop, co-founder of Zapier and now co-founder with Francois of both the ARC Prize and the research lab Ndea. With the launch of ARC Prize 2025 and ARC-AGI 2, they explain why existing LLMs fall short on true intelligence tests, how new models like O3 mark a step change in capabilities, and what it will really take to reach AGI.

We cover everything from the technical evolution of ARC 1 to ARC 2, the shift toward test-time reasoning, and the role of program synthesis as a foundation for more general intelligence. The conversation also explores the philosophical underpinnings of intelligence, the structure of the ARC Prize, and the motivation behind launching Ndea — a new AGI research lab that aims to build a "factory for rapid scientific advancement." Whether you're deep in the AI research trenches or just fascinated by where this is all headed, this episode offers clarity and inspiration.

Ndea
Website - https://ndea.com
X/Twitter - https://x.com/ndea

ARC Prize
Website - https://arcprize.org
X/Twitter - https://x.com/arcprize

François Chollet
LinkedIn - https://www.linkedin.com/in/fchollet
X/Twitter - https://x.com/fchollet

Mike Knoop
X/Twitter - https://x.com/mikeknoop

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) Intro
(01:05) Introduction to ARC Prize 2025 and ARC-AGI 2
(02:07) What is ARC and how it differs from other AI benchmarks
(02:54) Why current models struggle with fluid intelligence
(03:52) Shift from static LLMs to test-time adaptation
(04:19) What ARC measures vs. traditional benchmarks
(07:52) Limitations of brute-force scaling in LLMs
(13:31) Defining intelligence: adaptation and efficiency
(16:19) How O3 achieved a massive leap in ARC performance
(20:35) Speculation on O3's architecture and test-time search
(22:48) Program synthesis: what it is and why it matters
(28:28) Combining LLMs with search and synthesis techniques
(34:57) The ARC Prize structure: efficiency track, private vs. public
(42:03) Open source as a requirement for progress
(44:59) What's new in ARC-AGI 2 and human benchmark testing
(48:14) Capabilities ARC-AGI 2 is designed to test
(49:21) When will ARC-AGI 2 be saturated? AGI timelines
(52:25) Founding of Ndea and why now
(54:19) Vision beyond AGI: a factory for scientific advancement
(56:40) What Ndea is building and why it's different from LLM labs
(58:32) Hiring and remote-first culture at Ndea
(59:52) Closing thoughts and the future of AI research
On this episode of Crazy Wisdom, host Stewart Alsop speaks with Ivan Vendrov for a deep and thought-provoking conversation covering AI, intelligence, societal shifts, and the future of human-machine interaction. They explore the "bitter lesson" of AI—that scale and compute ultimately win—while discussing whether progress is stalling and what bottlenecks remain. The conversation expands into technology's impact on democracy, the centralization of power, the shifting role of the state, and even the mythology needed to make sense of our accelerating world. You can find more of Ivan's work at nothinghuman.substack.com or follow him on Twitter at @IvanVendrov.

Check out this GPT we trained on the conversation!

Timestamps
00:00 Introduction and Setting
00:21 The Bitter Lesson in AI
02:03 Challenges in AI Data and Infrastructure
04:03 The Role of User Experience in AI Adoption
08:47 Evaluating Intelligence and Divergent Thinking
10:09 The Future of AI and Society
18:01 The Role of Big Tech in AI Development
24:59 Humanism and the Future of Intelligence
29:27 Exploring Kafka and Tolkien's Relevance
29:50 Tolkien's Insights on Machine Intelligence
30:06 Samuel Butler and Machine Sovereignty
31:03 Historical Fascism and Machine Intelligence
31:44 The Future of AI and Biotech
32:56 Voice as the Ultimate Human-Computer Interface
36:39 Social Interfaces and Language Models
39:53 Javier Milei and Political Shifts in Argentina
50:16 The State of Society in the U.S.
52:10 Concluding Thoughts on Future Prospects

Key Insights

The Bitter Lesson Still Holds, but AI Faces Bottlenecks – Ivan Vendrov reinforces Rich Sutton's "bitter lesson" that AI progress is primarily driven by scaling compute and data rather than human-designed structures. While this principle still applies, AI progress has slowed due to bottlenecks in high-quality language data and GPU availability. This suggests that while AI remains on an exponential trajectory, the next major leaps may come from new forms of data, such as video and images, or advancements in hardware infrastructure.

The Future of AI Is Centralization and Fragmentation at the Same Time – The conversation highlights how AI development is pulling in two opposing directions. On one hand, large-scale AI models require immense computational resources and vast amounts of data, leading to greater centralization in the hands of Big Tech and governments. On the other hand, open-source AI, encryption, and decentralized computing are creating new opportunities for individuals and small communities to harness AI for their own purposes. The long-term outcome is likely to be a complex blend of both centralized and decentralized AI ecosystems.

User Interfaces Are a Major Limiting Factor for AI Adoption – Despite the power of AI models like GPT-4, their real-world impact is constrained by poor user experience and integration. Vendrov suggests that AI has created a "UX overhang", where the intelligence exists but is not yet effectively integrated into daily workflows. Historically, technological revolutions take time to diffuse, as seen with the dot-com boom, and the current AI moment may be similar: the intelligence exists, but society has yet to adapt to using it effectively.

Machine Intelligence Will Radically Reshape Cities and Social Structures – Vendrov speculates that the future will see the rise of highly concentrated AI-powered hubs—akin to "mile by mile by mile" cubes of data centers—where the majority of economic activity and decision-making takes place.
This could create a stark divide between AI-driven cities and rural or off-grid communities that choose to opt out. He draws a parallel to Robin Hanson's Age of Em and suggests that those who best serve AI systems will hold power, while others may be marginalized or reduced to mere spectators in an AI-driven world.

The Enlightenment's Individualism Is Being Challenged by AI and Collective Intelligence – The discussion touches on how Western civilization's emphasis on the individual may no longer align with the realities of intelligence and decision-making in an AI-driven era. Vendrov argues that intelligence is inherently collective: what matters is not individual brilliance but the ability to recognize and leverage diverse perspectives. This contradicts the traditional idea of intelligence as a singular, personal trait and suggests a need for new frameworks that incorporate AI into human networks in more effective ways.

Javier Milei's Libertarian Populism Reflects a Global Trend Toward Radical Experimentation – The rise of Argentina's President Javier Milei exemplifies how economic desperation can drive societies toward bold, unconventional leaders. Vendrov and Alsop discuss how Milei's appeal comes not just from his radical libertarianism but also from his blunt honesty and willingness to challenge entrenched power structures. His movement, however, raises deeper questions about whether libertarianism alone can provide a stable social foundation, or if voluntary cooperation and civil society must be explicitly cultivated to prevent libertarian ideals from collapsing into chaos.

AI, Mythology, and the Need for New Narratives – The conversation closes with a reflection on the power of mythology in shaping human understanding of technological change. Vendrov suggests that as AI reshapes the world, new myths will be needed to make sense of it, perhaps similar to Tolkien's elves fading as the age of men begins. He sees AI as part of an inevitable progression, where human intelligence gives way to something greater, but argues that this transition must be handled with care. The stories we tell about AI will shape whether we resist, collaborate, or simply fade into irrelevance in the face of machine intelligence.
The world of decision-making is now dominated by algorithms and automation. But how much has AI really changed? Haven't, on some level, humans always thought in algorithmic terms? Kartik Hosanagar is a professor of technology at The Wharton School at the University of Pennsylvania. His book, A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, explores how algorithms and AI are increasingly influencing our daily decisions and society, and proposes ways for individuals and organizations to maintain control in this algorithmic world.

Kartik and Greg discuss the integration of AI in decision-making, the differences and similarities of human-based algorithmic thinking and AI-based algorithmic thinking, the significance of AI literacy, and the future of creativity with AI.

*unSILOed Podcast is produced by University FM.*

Show Links:

Recommended Resources:
Herbert A. Simon
Pedro Domingos
"At UPS, the Algorithm Is the Driver" | The Wall Street Journal
"(Re)Introducing the AI Bill of Rights: An AI Governance Proposal" by Kartik Hosanagar

Guest Profile:
Faculty Profile at The Wharton School of the University of Pennsylvania
Kartik Hosanagar's Substack
Professional Profile on LinkedIn

His Work:
A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control

Episode Quotes:

What's a good system design for AI?
43:02: A good system design for AI systems would be, when there's deviation from the recommended decision, to have some feedback loop. It's like in a music recommendation system, Spotify Discover Weekly or any of these other systems where a recommendation comes in; ideally, you want some feedback on whether this person liked the song or not. And if there's a way to get that feedback, whether it's explicit feedback (a thumbs up or thumbs down) or implicit (they just skipped it, or they didn't finish the song, they left it halfway through, or something like that), you can use it. But you need some way to get that feedback, and that helps the system get better over time.

At the end of the day, humans shape the future of AI
12:43: This view that it's all automation and we'll have mass human replacement by AI: I think, at the end of the day, we shape that outcome. We need to be actively involved in shaping that future where AI is empowering us and augmenting our work. And we design these human-AI systems in a more deliberate manner.

On driving trust in algorithmic systems
36:08: What drives trust in an algorithmic system shows that transparency and user control are two extremely important variables. Of course, you care about things like how accurate or good that system is. Those things, of course, matter. But transparency and trust are interesting. So, in transparency, the idea that you have a system making decisions for you or about you, but you have no clue about how the system works, is disturbing for people. And we've seen ample evidence that people reject that system.
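The feedback loop Hosanagar describes can be sketched in a few lines of code. The snippet below is purely illustrative and not from the book or the episode: the class name, method names, and weighting constants are assumptions chosen to show how explicit signals (thumbs up/down) and implicit signals (skips, partial listens) might feed a per-track preference score that a recommender could learn from over time.

```python
# Minimal sketch of a recommendation feedback loop: fold explicit and
# implicit listener signals into a running per-track preference score.
# All names and weights are illustrative assumptions, not any real system's design.

from dataclasses import dataclass, field

@dataclass
class PreferenceModel:
    scores: dict = field(default_factory=dict)   # track_id -> running score in [-1, 1]
    learning_rate: float = 0.2

    def update(self, track_id: str, signal: float) -> None:
        """Move the track's score toward the observed signal."""
        old = self.scores.get(track_id, 0.0)
        self.scores[track_id] = old + self.learning_rate * (signal - old)

    def record_explicit(self, track_id: str, thumbs_up: bool) -> None:
        # Explicit feedback is the strongest, least ambiguous signal.
        self.update(track_id, 1.0 if thumbs_up else -1.0)

    def record_implicit(self, track_id: str, fraction_played: float) -> None:
        # Implicit feedback: an early skip is negative, a full listen mildly positive.
        if fraction_played < 0.3:
            self.update(track_id, -0.5)
        elif fraction_played > 0.9:
            self.update(track_id, 0.4)

model = PreferenceModel()
model.record_explicit("track_42", thumbs_up=True)        # user hit thumbs up
model.record_implicit("track_17", fraction_played=0.1)   # user skipped almost immediately
print(model.scores)
```

The point of the sketch is simply that both kinds of feedback end up as numbers the system can learn from, which is exactly why Hosanagar argues that designing the feedback channel is part of designing the AI system.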
Are we on the brink of merging with machines? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly dive into the mysteries of consciousness versus intelligence, panpsychism, and AI with neuroscientist and author Anil Seth.

NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/is-consciousness-everywhere-with-anil-seth/

Thanks to our Patrons James Boothe, Vicken Serpakian, John Webb, Doctor Pants, Greg Gralenski, Lost_AI, Bob Lester, kim christensen, Micheal Gannon, Aaron Rosenberg, Shai Kr, Kyle Bullock, JyinxTV, James Myers, victor recabarren, David Pederson, Ted McSheehy, Terena, Tracy Sheckells, Groovemaster24, Sheedrealmusic, David Amicucci, Brian Ridge, M Ranger, Peter Ackerman, Mars Colony AI, DonAlan, Harry Sørensen, G Anthony, Muhammad Umer, and Joshua MacDonald for supporting us this week.

Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Episode: 2544 How humans and computers recognize faces. Today, UH math professor Krešo Josić recognizes your face.
What happens when artificial intelligence starts weighing in on our moral decisions? Matt Prewitt is joined by Meaning Alignment Institute co-founder Joe Edelman to explore this thought-provoking territory, examining how AI is already shaping our daily experiences and values through social media algorithms. They explore the tools developed to help individuals negotiate their values and the implications of AI in moral reasoning – venturing into compelling questions about human-AI symbiosis, the nature of meaningful experiences, and whether machines can truly understand what matters to us. For anyone intrigued by the future of human consciousness and decision-making in an AI-integrated world, this discussion opens up fascinating possibilities – and potential pitfalls – we may not have considered.

Links & References:

References:
CouchSurfing - Wikipedia | CouchSurfing.org | Website
Tristan Harris: How a handful of tech companies control billions of minds every day | TED Talk
Center for Humane Technology | Website
MEANING ALIGNMENT INSTITUTE | Website
Replika - AI Girlfriend/Boyfriend
Will AI Improve Exponentially At Value Judgments? - by Matt Prewitt | RadicalxChange
Moral Realism (Stanford Encyclopedia of Philosophy)
Summa Theologica - Wikipedia
When Generative AI Refuses To Answer Questions, AI Ethics And AI Law Get Deeply Worried | AI Refusals
Amanda Askell: The 100 Most Influential People in AI 2024 | TIME | Amanda Askell's work at Anthropic
Overcoming Epistemology by Charles Taylor
God, Beauty, and Symmetry in Science - Catholic Stand | Thomas Aquinas on symmetry
Friedrich Hayek - Wikipedia | "Hayekian"
Eliezer Yudkowsky - Wikipedia | "AI policy people, especially in this kind of Yudkowskyian scene"
Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources | Resource rational (cognitive science term)

Papers & posts mentioned:
[2404.10636] What are human values, and how do we align AI to them? | Paper by Oliver Klingefjord, Ryan Lowe, Joe Edelman
Model Integrity - by Joe Edelman and Oliver Klingefjord | Meaning Alignment Institute Substack

Bios:
Joe Edelman is a philosopher, sociologist, and entrepreneur whose work spans from theoretical philosophy to practical applications in technology and governance. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures.

Joe's Social Links:
Meaning Alignment Institute | Website
Meaning Alignment Institute (@meaningaligned) / X
Joe Edelman (@edelwax) / X

Matt Prewitt (he/him) is a lawyer, technologist, and writer. He is the President of the RadicalxChange Foundation.

Matt's Social Links:
ᴍᴀᴛᴛ ᴘʀᴇᴡɪᴛᴛ (@m_t_prewitt) / X

Connect with RadicalxChange Foundation:
RadicalxChange Website
@RadxChange | Twitter
RxC | YouTube
RxC | Instagram
RxC | LinkedIn
Join the conversation on Discord.

Credits:
Produced by G. Angela Corpus.
Co-Produced, Edited, Narrated, and Audio Engineered by Aaron Benavides.
Executive Produced by G. Angela Corpus and Matt Prewitt.
Intro/Outro music by MagnusMoone, "Wind in the Willows", licensed under an Attribution-NonCommercial-ShareAlike 3.0 International License (CC BY-NC-SA 3.0).
Jeffrey Ladish is the director of Palisade Research, an AI safety organization based in the San Francisco Bay Area. Our previous conversations about the dangers of AI left us insufficiently concerned. Ladish takes up the mantle of trying to convince us that there's something worth worrying about by detailing the various projects and experiments that Palisade has been undertaking with the goal of demonstrating that AI agents let loose on the world are capable of wreaking far more havoc than we expect. We leave the conversation more wary of the machines than ever - less because we think hyper-intelligent machines are just around the corner, and more because Ladish paints a visceral picture of the cage we're building ourselves into.

PATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasB
MERCH: Rock some DemystifySci gear: https://demystifysci.myspreadshop.com/
AMAZON: Do your shopping through this link: https://amzn.to/3YyoT98

(00:00) Go!
(00:07:36) Risks from Nuclear Wars and Emerging Technologies
(00:15:01) Experiments with AI Agents
(00:25:11) Enhanced AI as Tools vs. Intelligent Agents
(00:34:39) AI Learning Through Games
(00:44:04) AI Goal Accomplishment
(00:55:01) Intelligence and Reasoning
(01:07:11) Technological Arms Race and AI
(01:17:16) The Rise of AI in Corporate Roles
(01:25:20) Inception and Incentivization Issues in AI
(01:35:12) AI Threats and Comparisons to Bioterrorism
(01:45:13) Constitutional Analogies and Regulatory Challenges
(01:55:11) AI as a Threat to Human Control
(02:07:02) Challenges in Managing Technological Advancements
(02:16:49) Advancements and Risks in AI Development
(02:25:01) Current AI Research and Public Awareness

#FutureOfAI, #AlgorithmicControl, #Cybersecurity, #AI, #AISafety, #ArtificialIntelligence, #TechnologyEthics, #FutureTech, #AIRegulation, #AIThreats, #Innovation, #TechRisks, #SyntheticBiology, #TechGovernance, #HumanControl, #AIAlignment, #AIAdvancement, #TechTalk, #Podcast, #TechEthics, #sciencepodcast, #longformpodcast

Check our short-films channel, @DemystifySci: https://www.youtube.com/c/DemystifyingScience
AND our material science investigations of atomics, @MaterialAtomics: https://www.youtube.com/@MaterialAtomics

Join our mailing list https://bit.ly/3v3kz2S

PODCAST INFO:
Anastasia completed her PhD studying bioelectricity at Columbia University. When not talking to brilliant people or making movies, she spends her time painting, reading, and guiding backcountry excursions. Shilo also did his PhD at Columbia studying the elastic properties of molecular water. When he's not in the film studio, he's exploring sound in music. They are both freelance professors at various universities.
- Blog: http://DemystifySci.com/blog
- RSS: https://anchor.fm/s/2be66934/podcast/rss
- Donate: https://bit.ly/3wkPqaD
- Swag: https://bit.ly/2PXdC2y

SOCIAL:
- Discord: https://discord.gg/MJzKT8CQub
- Facebook: https://www.facebook.com/groups/DemystifySci
- Instagram: https://www.instagram.com/DemystifySci/
- Twitter: https://twitter.com/DemystifySci

MUSIC:
- Shilo Delay: https://g.co/kgs/oty671
Does Searle's Chinese Room show that AI is not intelligent or creative? Does AI learn the way humans do? And could AI ever be capable of truly creative thought?

[00:00] Introduction to the Chinese Room Argument
[03:55] The Evolution of Human Language
[05:58] ChatGPT's Capabilities and Limitations
[12:09] Human Understanding vs. AI Responses
[15:33] The Complexity of Human Desires
[20:24] The Nature of Human and Machine Intelligence
[30:58] AI and Creativity: A Writer's Perspective
[33:08] The Limits of AI Creativity
[35:01] The Future of AI and AGI
[36:05] Thermodynamics and Human Creativity
[39:13] Live Experiment: AI Poetry
[42:40] AI's Impressive Achievements
[49:42] The Debate on AGI
[59:53] Final Thoughts
Summary
Guido Meardi, CEO of V-Nova, discusses the importance of data compression in digital technology and its impact on various industries. He explains how V-Nova's compression technology enables more efficient use of data, leading to better quality and cost savings. Meardi also shares the exciting future of volumetric movies and immersive entertainment, where viewers can be inside the movie and experience it firsthand. He highlights the role of AI in data compression and its potential to enhance machine intelligence. Meardi's journey from being a management consultant at McKinsey to becoming an entrepreneur is also explored.

Guido Meardi discusses the importance of mentors and self-confidence in entrepreneurship. He shares his experience of transitioning from a successful career at McKinsey to starting his own company, V-Nova. He emphasizes the challenges of securing funding for innovative ventures and the need to balance self-confidence with listening to feedback. Meardi also highlights the importance of surrounding oneself with mentors and a supportive team. He discusses the ongoing challenges of driving adoption and expanding into new markets. His dream is to fulfill the promise of V-Nova's technology and make a significant impact in the field of video compression.

Takeaways
- Data compression plays a crucial role in digital technology, enabling more efficient use of data and leading to better quality and cost savings.
- V-Nova's compression technology, such as MPEG-5 LCEVC, allows for significant reductions in data transmission and processing power while maintaining quality.
- The future of entertainment includes volumetric movies, where viewers can be inside the movie and experience it firsthand.
- AI can enhance data compression and improve machine intelligence, leading to more efficient processing and analysis of data.
- Guido Meardi's journey from being a management consultant at McKinsey to becoming an entrepreneur highlights the importance of driving positive change and making a difference in the world.
- Choosing mentors and having multiple mentors is crucial for personal growth and success in entrepreneurship.
- Balancing self-confidence with the ability to listen to feedback is essential for making informed decisions.
- Securing funding for innovative ventures can be challenging, especially when the idea is unconventional.
- Surrounding oneself with a supportive team and mentors is important for navigating the entrepreneurial journey.
- Driving adoption and expanding into new markets are ongoing challenges for technology companies.
- The dream is to fulfill the promise of innovative technology and make a significant impact in the industry.

Links
www.norhart.com - Beat the banks! Interested in earning up to 8.5% interest? Visit Norhart Invest to learn more!
Season Three - Spotlight Eleven

Our eleventh spotlight of this season is a snippet of our upcoming episode: Matt Beane - How to Save Human Ability in an Age of Machine Intelligence. Join host Lauren Hawker Zafer as she engages with Matt Beane. Don't miss this unique spotlight, redefining what it means to develop skill in the age of machine intelligence.

Who is Matt Beane?

Matt Beane does field research on work involving robots and AI to uncover systematic positive exceptions that we can use across the broader world of work. His award-winning research has been published in top management journals such as Administrative Science Quarterly and Harvard Business Review, and he has graced the TED stage. He also took a two-year hiatus from his PhD at MIT's Sloan School of Management to help found and fund Humatics, a full-stack IoT startup. In 2012 he was selected as a Human-Robot Interaction Pioneer, and in 2021 he was named to the Thinkers50 Radar list. Beane is an assistant professor in the Technology Management department at the University of California, Santa Barbara, and a Digital Fellow with Stanford's Digital Economy Lab and MIT's Initiative on the Digital Economy. When he's not studying intelligent technologies and learning, he enjoys playing guitar; his morning coffee ritual with his wife, Kristen; and reading science fiction.

"If you're worried about your skills becoming obsolete, this book may be your saving grace. Matt Beane has spent his career studying how to gain and maintain expertise as technology evolves, and his analysis is both engrossing and edifying." —Adam Grant, #1 New York Times bestselling author of Hidden Potential and Think Again, and host of the TED podcast WorkLife

"Beane shows us the true human-centered approach to AI advancements and how we must act now to achieve the next generation of human skills coupled with the productivity gains from AI." —Fei-Fei Li, Sequoia Professor of Computer Science and Founding Director of the Stanford Institute for Human-Centered AI (HAI), Stanford University

#ai #redefiningai #techpodcast #squirro
Generative AI is finding its way into the tools and processes that power creative work. Exciting? Terrifying? Maybe a little of both. Adobe has been not only shipping impressive generative AI tools and features, but thinking about the implications this new technology could have on creative careers. Adobe invited us to their offices in San Francisco for a conversation with a panel of leaders including Rachana Rele, Samantha Warren, Danielle Morimoto, and Laura Herman, who shared how they and their teams are building and training AI models ethically while bringing innovation to the creative process.

Find the transcript, show notes and more on our Substack: https://designbetterpodcast.com/p/bonus-ai-and-the-creative-process

Panelists

Rachana Rele, VP of Design, Generative AI, Emerging Products, & Adobe Ventures
Rachana is at the forefront of shaping the future of design and technology. In her role, she leads the charge in harnessing the power of generative AI, and Adobe Firefly, to unlock creativity for creatives, communicators, and marketers. She serves as a product leader, shepherding incubations from zero to one and guiding emerging businesses like Adobe Stock to achieve scale. With a deep-seated passion for fostering world-class design teams, Rachana thrives on crafting experiences that resonate with customers and drive tangible value for businesses. Rachana holds both bachelor's and master's degrees in industrial engineering with a specialized focus on human-computer interaction. Her student-always mindset has led her to pursue an Executive MBA at Haas School of Business, UC Berkeley (class of 2025).

Samantha Warren, Sr Design Director, Machine Intelligence and New Technologies
Samantha is the Senior Design Director for MINT (Machine Intelligence and New Technologies), where the team focuses on emerging projects, Adobe Firefly, and artificial intelligence across Adobe software. Samantha specializes in product strategy and user experience design. Her superpower is leading teams with vision while managing practical execution.

Danielle Morimoto, Sr Design Manager, Adobe Firefly
Danielle Morimoto is a Sr. Design Manager for Generative AI with the Machine Intelligence and New Technologies team at Adobe. She has worked on a range of projects, from initiatives supporting emerging artists ages 13 to 24 who are using creativity as a force for positive impact, to the next evolution of Creative Cloud on the web. She has helped define the most compelling experiences for development over the next 1–3 years by uncovering untapped potential and ultimately identifying how people could be using Adobe in the future. She's an avid road cyclist, NBA Golden State Warriors fan and lover of ice cream.

Laura Herman, Sr Research Manager, Adobe Firefly
Laura Herman is the Head of AI Research at Adobe and a doctoral researcher at the University of Oxford's Internet Institute. Laura's academic research examines the impact of algorithmic curation on global visual cultures, taking an inclusive and international approach with a particular focus on the Global South. At Adobe, Laura leads the team that researches generative AI for Creative Cloud. Previous technologies that she has worked on have been acknowledged as Apple's "App of the Day" and as a Webby People's Choice Award winner. Laura has previously held research positions at Intel, Harvard University, and Princeton University, from which she graduated with honors in Neuroscience & Psychology.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Today's niche diets, social media blips and internet search spikes could be tomorrow's must-have functional ingredients, breakout snacks or new kitchen staples -- the trick is spotting them early and knowing whether they represent long-term trends or short-term fads and how best to position a launch or brand to capitalize on consumer demand.
Eric Daimler is the cofounder and CEO of Conexus AI, a data management platform that provides composable and machine-verifiable data integration. He was previously an assistant dean and assistant professor at Carnegie Mellon University. He was the founding partner of Hg Analytics and managing director at Skilled Science. He was also the White House Presidential Innovation Fellow for Machine Intelligence and Robotics.

Eric's favorite book: ReCulturing (Author: Melissa Daimler)

(00:00) Understanding Symbolic AI
(02:42) Symbolic AI mirrors biological intelligence
(06:01) Category Theory
(08:42) Comparing Symbolic AI and Probabilistic AI
(11:22) Symbolic Generative AI
(14:19) Implementing Symbolic AI
(18:25) Symbolic Reasoning
(21:24) Explainability
(24:39) Neuro Symbolic AI
(26:41) The Future of Symbolic AI
(30:43) Rapid Fire Round

--------

Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi
Episode 124

You may think you're doing a priori reasoning, but actually you're just over-generalizing from your current experience of technology.

I spoke with Professor Seth Lazar about:
* Why managing near-term and long-term risks isn't always zero-sum
* How to think through axioms and systems in political philosophy
* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI

Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.

Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:54) Ad read — MLOps conference
* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation
* (03:53) Attention allocation as an independent good (or bad)
* (08:22) Axioms in political philosophy
* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust
* (15:05) AI safety / catastrophic risk concerns
* (22:10) Superintelligence arguments, reasoning about technology
* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?
* (35:55) GPT-2, model weights, related debates
* (39:11) Power and economics — coordination problems, company incentives
* (50:42) Morality tales, relationship between safety and capabilities
* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy
* (1:02:28) What is a feasibility horizon?
* (1:08:36) Safety guarantees, speed of improvements, the "Pause AI" letter
* (1:14:25) Sociotechnical lenses, narrowly technical solutions
* (1:19:47) Experiments for responsibly integrating AI systems into society
* (1:26:53) Helpful/honest/harmless and antagonistic AI systems
* (1:33:35) Managing incentives conducive to developing technology in the public interest
* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia
* (1:46:54) How we can help legitimize and support interdisciplinary work
* (1:50:07) Outro

Links:
* Seth's Linktree and Twitter
* Resources
* Attention, moral skill, and algorithmic recommendation
* Catastrophic AI Risk slides

Get full access to The Gradient at thegradientpub.substack.com/subscribe
For the past four years, Sandia National Laboratories has been conducting a focused research effort on Trusted AI for national security problems. The goal is to develop the fundamental insights required to use AI methods in high-consequence national security applications while also improving the practical deployment of AI. This talk looks at key properties of many national security problems along with Sandia's ongoing effort to develop a certification process for AI-based solutions. Along the way, we will examine several recent and ongoing research projects, including how they contribute to the larger goals of Trusted AI. The talk concludes with a forward-looking discussion of remaining research gaps.

About the speaker: David manages the Machine Intelligence and Visualization department, which conducts cutting-edge research in machine learning and artificial intelligence for national security applications, including the advanced visualization of data and results. David has been studying machine learning in the broader context of artificial intelligence for over 15 years. His research focuses on applying machine learning methods to a wide variety of domains with an emphasis on estimating the uncertainty in model predictions to support decision making. He also leads the Trusted AI Strategic Initiative at Sandia, which seeks to develop fundamental insights into AI algorithms, their performance and reliability, and how people use them in national security contexts. Prior to joining Sandia, David spent three years as research faculty at Arizona State University and one year as a postdoc at Stanford University developing intelligent agent architectures. He received his doctorate in 2006 and MS in 2002 from the University of Massachusetts at Amherst for his work in machine learning. David earned his Bachelor of Science from Clarkson University in 1998.

Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
Three expert guests discuss the implications of AI and the fine arts in a conversation moderated by Steve Scher. Scientist and founder of the Artists and Machine Intelligence program at Google, Blaise Agüera y Arcas, will offer his “news from the front” about the latest developments in AI capabilities, and what he foresees ahead. Alex Alben, technology executive, author, and law professor, will review the implications of AI to the artist from the point of view of intellectual property: is anything on the internet up for grabs, or is compensation for image “borrowing” a possibility? Finally, painter Jason Puccinelli, who uses AI as one of his tools in image creation, will talk about what he finds exciting and useful, and what he finds problematic, about this new resource. Presented by Town Hall Seattle and Gage Academy of Art.
This episode takes a closer look at Anthropic's Claude 3 AI models, known for their advanced cognitive capabilities and safety features; Apple's introduction of the M3 MacBook Air with its enhanced performance and sustainability; and Google's AI program aimed at supporting publishers.

For more on these stories:
Anthropic launches Claude 3
Apple unveils M3 MacBook Air
Google pays publishers to use AI

Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android.

Join our growing Discord community for the latest updates and exclusive content.

Follow us on:
Instagram
Threads
X (Twitter)
YouTube
LinkedIn
“Cybernetic Conjurations: The Nexus of Magic and Machine Intelligence” This piece explores the history of magic, introducing key figures such as Aleister Crowley and John Dee – both famous for their mystical practices – and relates those practices to our contemporary development of AI. It delves into how ancient ideas of summoning and servitors correlate with contemporary digital creations, suggesting a never-ending thread of human intention to shape reality. The article also discusses the ethical considerations and philosophical issues raised by AI's exponential rise, drawing parallels with ancient magicians' goals. It invites readers to ponder the place of AI in our lives and its potential for either elevating or undermining them.
Tough Tech Today is transitioning seasons, recapping our Season 3 themes exploring the tough tech domains of biology, space, and fusion energy. We are also preparing for the show's fourth season, and we are really excited for it!

Upcoming themes:
Blue tech – The advanced technologies of the maritime industry. Incredible machines, sustainable oceans, and mysteries to solve in our planet's seas.
Quantum sciences – A wild world where physics gets weird; we are super curious about opportunities in computing, sensing, and communication.
Artificial intelligence – While "A.I." only fairly recently debuted in the pop-culture zeitgeist, for years tech trailblazers have been developing incredible applications of machine intelligence to solve our world's toughest challenges.

Thank you, our Season 3 guests!

BioTech
New Equilibrium Biosciences - Virginia Burger
Elevian - Mark Allen
Concerto Biosciences - Cheri Ackerman

Space
Space Capital - Chad Anderson
Mithril Technologies - Scarlett Koller
Arkenstone Ventures/USAF USSF - Preston Dunlap

Fusion
Proxima Fusion - Francesco Sciortino
TDK Ventures / Type One Energy - Tina Tosukhowong
Focused Energy - Thomas Forner and Pravesh Patel

P.S. Thank you to our tough tech champions. We really appreciate your support! If you'd like to level up your support of our work, take a look at our pay-if-you-can membership options so you can help us bring Tough Tech Today to more folks!
In a dramatic turn of events, OpenAI's board of directors fired CEO and co-founder Sam Altman. Then they tried to hire him back. Then they announced that a former Twitch CEO would lead the company. What the what?

See omnystudio.com/listener for privacy information.
In this episode of The Genetics Podcast, we welcome Dr. James Field, founder and CEO of LabGenius. Join us as we delve into LabGenius' cutting-edge approach that utilises machine learning, artificial intelligence, and sophisticated robotics to advance antibody discovery and drug development. As a bonus, learn about James' path from scientist to CEO, and how he created LabGenius.
We engage in a discussion centered around Daron Acemoglu's latest book, co-authored with Simon Johnson, titled Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. The choices we make regarding technology can either advance the interests of a select elite or serve as the foundation for widespread prosperity. But technology's trajectory can be, and should be, controlled and directed for the benefit of all. The remarkable advances in computing over the past fifty years have the potential to be tools of empowerment and democratization, but only if decision-making power is dispersed rather than concentrated in the hands of a few overconfident tech leaders.

Daron Acemoglu is Professor of Economics at the Massachusetts Institute of Technology (MIT). @DAcemogluMIT
(Cover photo of Daron Acemoglu by Cody O'Loughlin)

Key highlights:
Introduction - 00:24
Understanding “progress” - 04:06
Optimism in an era of doom and gloom - 12:00
The power of persuasion - 16:10
Shared prosperity, welfare, and whether technology is always useful - 25:08
Machine intelligence vs. machine usefulness - 30:12
How technology (e.g., AI) can help promote development in low-income countries - 36:50

Host: Professor Dan Banik (@danbanik @GlobalDevPod)
Apple | Google | Spotify | YouTube
Subscribe: https://globaldevpod.substack.com/
Discussions about artificial intelligence tend to focus on its risks, but there is also excitement on the horizon. AI tools, like the models beneath ChatGPT, are being increasingly used by scientists for everything from finding new drugs and materials to predicting the shapes of proteins. Self-driving lab robots could take things even further towards making new discoveries. As it gets ever more useful, could AI change the scientific process altogether?

Jane Dyson, structural biologist at the Scripps Research Institute in La Jolla, California, explains why Google DeepMind's AlphaFold tool is useful, but also why scientists should be aware of its limitations. This week, Google DeepMind released a new tool to unpick the link between genes and disease, as Pushmeet Kohli, head of the company's “AI for Science” team, explains. Also, Kunal Patel, one of our producers, meets Erik Bjurström, a researcher at Chalmers University of Technology, and Ross King, a professor of Machine Intelligence at Chalmers University of Technology and at the University of Cambridge. They explain why self-driving lab robots could make research more efficient. Alok Jha, The Economist's science and technology editor, hosts, with Abby Bertics, our science correspondent, and Tom Standage, deputy editor.

Sign up for Economist Podcasts+ now and get 50% off your subscription with our limited time offer: economist.com/podcastsplus-babbage. You will not be charged until Economist Podcasts+ launches. If you're already a subscriber to The Economist, you'll have full access to all our shows as part of your subscription. For more information about Economist Podcasts+, including how to get access, please visit our FAQs page. Hosted on Acast. See acast.com/privacy for more information.
What is the human mind in relationship to our brain? What is a machine mind going to look like? How do humans decide on their next best word or action, and how does an LLM do the same? At what point might we believe that a machine mind is waking up? In this episode, Scott dives into the philosophy of mind by asking you, the listener, a handful of intriguing questions. Along the way you might come to believe that we overstate the sophistication of the human mind and underestimate the possibility that a machine mind could be awake. If nothing else, this episode will introduce analogies and ideas you might not have heard before and help inspire your mind!
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time.
■ Reference Source: https://www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important
■ Post on this topic (You can get FREE learning materials!): https://englist.me/202-academic-words-reference-from-zeynep-tufekci-machine-intelligence-makes-human-morals-more-important-ted-talk/
■ YouTube Video:
https://youtu.be/zRPg38xXceA (All Words)
https://youtu.be/Rs_Ry1o58lY (Advanced Words)
https://youtu.be/7f7NNHzJkeM (Quick Look)
■ Top Page for Further Materials: https://englist.me/
■ SNS (Please follow!)
This podcast is a commentary and does not contain any copyrighted material of the reference source. We strongly recommend accessing/buying the reference source at the same time.
■ Reference Source: https://www.ted.com/talks/refik_anadol_art_in_the_age_of_machine_intelligence
■ Post on this topic (You can get FREE learning materials!): https://englist.me/153-academic-words-reference-from-refik-anadol-art-in-the-age-of-machine-intelligence--ted-talk/
■ YouTube Video:
https://youtu.be/dHMes8aVN4k (All Words)
https://youtu.be/BANldHGp3PA (Advanced Words)
https://youtu.be/McvdjzGkOm4 (Quick Look)
■ Top Page for Further Materials: https://englist.me/
■ SNS (Please follow!)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: LeCun's “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem, published by Steven Byrnes on May 8, 2023 on LessWrong.

Summary: This post is about the paper A Path Towards Autonomous Machine Intelligence (APTAMI) by Yann LeCun. It's a high-level sketch of an AI architecture inspired by the brain. APTAMI is mostly concerned with arguing that this architecture is a path towards more-capable AI. However, it is also claimed (both in the paper itself and in associated public communication) that this architecture is a path towards AI that is “controllable and steerable”, kind, empathetic, and so on. I argue that APTAMI is in fact, at least possibly, a path towards that latter destination, but only if we can solve a hard and currently-unsolved technical problem. This problem centers around the Intrinsic Cost module, which performs a role loosely analogous to “innate drives” in humans—e.g. pain being bad, sweet food being good, a curiosity drive, and so on. APTAMI does not spell out explicitly (e.g. with pseudocode) how to create the Intrinsic Cost module. It offers some brief, vague ideas of what might go into the Intrinsic Cost module, but does not provide any detailed technical argument that an AI with such an Intrinsic Cost would be controllable / steerable, kind, empathetic, etc. I will argue that, quite to the contrary, if we follow the vague ideas in the paper for building the Intrinsic Cost module, then there are good reasons to expect the resulting AI to be not only unmotivated by human welfare, but in fact motivated to escape human control, seek power, self-reproduce, etc., including by deceit and manipulation. Indeed, it is an open technical problem to write down any Intrinsic Cost function (along with training environment and other design choices) for which there is a strong reason to believe that the resulting AI would be controllable and/or motivated by human welfare, while also being sufficiently competent to do the hard intellectual tasks that we're hoping for (e.g. human-level scientific R&D). I close by encouraging LeCun himself, his colleagues, and anyone else to try to solve this open problem. It's technically interesting, very important, and we have all the information we need to start making progress now. I've been working on that problem myself for years, and I think I'm making more than zero progress, and if anyone reaches out to me I'd be happy to discuss the current state of the field in full detail. And then there's an epilogue, which steps away from the technical discussion of the Intrinsic Cost module, and instead touches on bigger-picture questions of research strategy & prioritization. I will argue that the question of AI motivations merits much more than the cursory treatment that it got in APTAMI—even given the fact that APTAMI was a high-level early-stage R&D vision paper in which every other aspect of the AI is given an equally cursory treatment. (Note: Anyone who has read my Intro to Brain-Like AGI Safety series will notice that much of this post is awfully redundant with it—basically an abbreviated subset with various terminology changes to match the APTAMI nomenclature. And that's no coincidence! As mentioned, the APTAMI architecture was explicitly inspired by the brain.)
1. Background: the paper's descriptions of the “Intrinsic Cost module”
For the reader's convenience, I'll copy everything specific that APTAMI says about the Intrinsic Cost module. (Emphasis in original.)
PAGES 7-8: The Intrinsic Cost module is hard-wired (immutable, non trainable) and computes a single scalar, the intrinsic energy that measures the instantaneous “discomfort” of the agent – think pain (high intrinsic energy), pleasure (low or negative intrinsic energy), hunger, etc. The input to the module is the current state of t...
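To make the quoted description more concrete, here is a minimal, purely illustrative sketch of what a hard-wired Intrinsic Cost module could look like in PyTorch. It is not the paper's implementation (APTAMI provides no pseudocode, which is part of the post's complaint); the state dimensionality, the fixed linear read-out, and every weight below are assumptions chosen only to show the shape of the idea: a frozen, non-trainable mapping from the agent's current state to a single scalar "intrinsic energy".

```python
# Illustrative sketch only: a hard-wired (frozen, non-trainable) module that maps
# the agent's current state to one scalar "intrinsic energy" (high = discomfort,
# low or negative = pleasure). All dimensions and weights here are hypothetical.
import torch
import torch.nn as nn


class IntrinsicCost(nn.Module):
    def __init__(self, state_dim: int):
        super().__init__()
        # Fixed read-out weights; requires_grad=False keeps them immutable
        # under any gradient-based training of the rest of the agent.
        self.weights = nn.Parameter(torch.randn(state_dim), requires_grad=False)
        self.bias = nn.Parameter(torch.zeros(1), requires_grad=False)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Returns one scalar of intrinsic energy per state vector in the batch.
        return state @ self.weights + self.bias


if __name__ == "__main__":
    cost = IntrinsicCost(state_dim=8)
    states = torch.randn(3, 8)   # three example agent states
    print(cost(states))          # tensor of shape (3,): one energy per state
```

The post's argument is that nothing about such a hand-wired scalar, by itself, guarantees controllable or human-welfare-motivated behaviour; writing down an Intrinsic Cost for which that guarantee holds is exactly the open problem it describes.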
(2:33) - Using Machine Learning to Detect Rare Diseases
This episode was brought to you by Mouser, our favorite place to get electronics parts for any project, whether it be a hobby at home or a prototype for work. Click HERE to learn about how a) AI is being leveraged in healthcare and b) the tools available from vendors to empower development in this area.
For many, the term “inclusion” is the be-all and end-all of social justice efforts. But, in her new book, Erin Raffety suggests that “inclusion” doesn't work, at least in churches with disabled people. Listen to this quote: “The church is called apart from the world to repent of its ableism, disown its power, abandon inclusion, and pursue justice alongside disabled people.” Throughout her book she clarifies why inclusion falters and what justice might look like. She does this by interpreting scripture, drawing from her ethnographic research with congregations in Northeastern America, and engaging with disability activists and scholars. So, you'll get to hear about some of those things in our conversation. I'm excited for you to hear it. Erin Raffety is Associate Research Scholar at Princeton Theological Seminary and Research Fellow in Machine Intelligence & Pastoral Care at the Center for Theological Inquiry in Princeton, New Jersey. She is the author of From Inclusion to Justice, the book we're discussing today, which is out now through Baylor University Press. And I'm grateful also that Dr. Devan Stahl joined us for this conversation as a cohost. Devan is Assistant Professor of Religion here at Baylor University and author of a new book called Disability's Challenge to Theology (UND Press). You can listen to us discuss Devan's book in our episode "An Era of Soft Eugenics?"

Resources for Further Education on Disability
Visit Erin's curated list of resources on her website.
Browse Baylor University Press's books on the topic.
I spent a lot of time reading and thinking about AI this week. I'm especially interested in the implications it will have for writers and the so-called creator economy. Obviously things will change if it becomes nearly free to generate decently intelligent content with machines. But how will things change exactly, and how should writers spend their time now to position themselves for these changes? I think the implications are not obvious. Specifically, AI will increase the economic power of deep classical education, and truly unique artistic personality. In this podcast, I explain why.

I wrote more about this at otherlife.co/writing-machines

Other Life
✦ Subscribe to the coolest newsletter in the world https://OtherLife.co
✦ Get a free Urbit ship at https://imperceptible.computer
✦ Join the community https://imperceptible.country

IndieThinkers.org
✦ If you're working on independent projects, join the next cohort of https://IndieThinkers.org
Dr John Collins worked for the UK's Central Electricity Generating Board in the days when such things were nationalised industries. His PhD involved creating a real-time dosimeter for workers in nuclear plants so they didn't have to wait two weeks to learn the results of the film-based dosimeters then in use. In doing so, he saved the CEGB considerable amounts of money and, more importantly, protected the lives and health of the men and women who worked there. Thus began a lifetime working at the leading edge of business, where innovation meets ethics and morality, so that now he is the Ethics and Responsible Innovation Advisor at Machine Intelligence Garage and on the Ethics Advisory Board at Digital Catapult. He's writing a book called 'A History of the Future in Seven Words.' With all this, he's an ideal person to open up the worlds of business, innovation and technology. In a wide-ranging, sparky, fun conversation, we explore what might make AI safe, how a future might look with sustainable business, whether 1.5 is 'still alive' and if that's even a useful metric, and how much power it takes to post an Instagram picture compared to making a plastic bottle (spoiler alert: it's the same power and the same CO2 generated, assuming both use the same power source and *if* the image is stored for 100 years... which, the way we're going, might not happen. But still...).

John on LinkedIn: https://www.linkedin.com/in/drjohnlcollins/
Digital Catapult: https://www.digicatapult.org.uk/
Episode: 2544 How humans and computers recognize faces. Today, UH math professor Krešo Josić recognizes your face.
In Episode 268 of Hidden Forces, Demetri Kofinas replays a monologue that he wrote and first published nearly five years ago about the future of humanity, machine intelligence, and the power process. When he began Hidden Forces, Demetri was primarily concerned with exploring some of the recent advancements in technology and humanity's scientific understanding of both the external world and the internal one that gives rise to the human experience. He believed (and still does) that many of the socio-cultural and political changes that we were experiencing in the world were being strongly influenced and shaped by these advancements, which is why so many of our early episodes were philosophical in nature, dealing with questions of epistemology, identity, and meaning. In preparing for his recent conversation with director Alex Lee Moyer, Demetri was inspired to go back and listen to a monologue that we published nearly five years ago and which dealt directly with some of these more philosophical questions. The monologue was rooted in a set of observations about the role of digital technology and machine intelligence in shaping our experience of the world, our sense of agency, and our very notions of what it means to be a human being. The monologue is even more relevant today than it was in 2017 when he first wrote it. We are living through an extraordinary moment in human history. Nothing short of our humanity is at stake, whether from the growing possibility of total war, the creeping security state and surveillance capitalism, or even just the abject apathy and nihilism that seems to be infecting more and more of society. If you are a listener of this show, then these issues concern you directly, and while you may not be the CEO of Google or the head of the Federal Communications Commission, your input, activism, and engagement on these issues will be a decisive factor in determining not only the quality of our future, but the quality of the world that we leave for our children and grandchildren. You can access the full episode, transcript, and intelligence report for this week's conversation by going directly to the episode page at HiddenForces.io and clicking on "premium extras." All subscribers gain access to our premium feed, which can be easily added to your favorite podcast application.

If you enjoyed listening to today's episode of Hidden Forces, you can help support the show by doing the following:
Subscribe on Apple Podcasts | YouTube | Spotify | Stitcher | SoundCloud | CastBox | RSS Feed
Write us a review on Apple Podcasts & Spotify
Subscribe to our mailing list at https://hiddenforces.io/newsletter/
Producer & Host: Demetri Kofinas
Editor & Engineer: Stylianos Nicolaou
Subscribe & Support the Podcast at https://hiddenforces.io
Join the conversation on Facebook, Instagram, and Twitter at @hiddenforcespod
Follow Demetri on Twitter at @Kofinas
Episode Recorded on 12/15/2017
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today we continue our ICLR coverage joined by Been Kim, a staff research scientist at Google Brain and an ICLR 2022 Invited Speaker. Been, whose research has historically focused on interpretability in machine learning, delivered the keynote “Beyond Interpretability: Developing a Language to Shape Our Relationships with AI,” which explores the need to study AI machines as scientific objects, both in isolation and with humans. That study, she argues, will yield principles for building tools and is also necessary for taking our working relationship with AI to the next level. Before we dig into Been's talk, she characterizes where we are as an industry and community with interpretability, and what the current state of the art is for interpretability techniques. We explore how the Gestalt principles appear in neural networks, Been's choice to characterize communication with machines as a language as opposed to a set of principles or foundational understanding, and much, much more. The complete show notes for this episode can be found at twimlai.com/go/571