Jensen Huang Just Won IEEE's Highest Honor. The Reason Tells Us Everything About Where Tech Is Headed.

IEEE announced Jensen Huang as its 2026 Medal of Honor recipient at CES this week. The NVIDIA founder joins a lineage stretching back to 1917—over a century of recognizing people who didn't just advance technology, but advanced humanity through technology.

That distinction matters more than ever.

I spoke with Mary Ellen Randall, IEEE's 2026 President and CEO, from the floor of CES Las Vegas. The timing felt significant. Here we are, surrounded by the latest gadgets and AI demonstrations, having a conversation about something deeper: what all this technology is actually for.

IEEE isn't a small operation. It's the world's largest technical professional society—500,000 members across 190 countries, 38 technical societies, and 142 years of history that traces back to when the telegraph was connecting continents and electricity was the revolutionary new thing. Back then, engineers gathered to exchange ideas, challenge each other's thinking, and push innovation forward responsibly.

The methods have evolved. The mission hasn't.

"We're dedicated to advancing technology for the benefit of humanity," Randall told me. Not advancing technology for its own sake. Not for quarterly earnings. For humanity. It sounds like a slogan until you realize it's been their operating principle since before radio existed.

What struck me was her framing of this moment. Randall sees parallels to the Renaissance—painters working with sculptors, sharing ideas with scientists, cross-pollinating across disciplines to create explosive growth. "I believe we're in another time like that," she said. "And IEEE plays a crucial role because we are the way to get together and exchange ideas on a very rapid scale."

The Jensen Huang selection reflects this philosophy. Yes, NVIDIA built the hardware that powers AI.
But the Medal of Honor citation focuses on something broader—the entire ecosystem NVIDIA created that enables AI advancement across healthcare, autonomous systems, drug discovery, and beyond. It's not just about chips. It's about what the chips make possible.

That ecosystem thinking matters when AI is moving faster than our ethical frameworks can keep up. IEEE is developing standards to address bias in AI models. They've created certification programs for ethical AI development. They even have standards for protecting young people online—work that doesn't make headlines but shapes the digital environment we all inhabit.

"Technology is a double-edged sword," Randall acknowledged. "But we've worked very hard to move it forward in a very responsible and ethical way."

What does responsible look like when everything is accelerating? IEEE's answer involves convening experts to challenge each other, peer-reviewing research to maintain trust, and developing standards that create guardrails without killing innovation. It's the slow, unglamorous work that lets the exciting breakthroughs happen safely.

The organization includes 189,000 student members—the next generation of engineers who will inherit both the tools and the responsibilities we're creating now. "Engineering with purpose" is the phrase Randall kept returning to. People don't join IEEE just for career advancement. They join because they want to do good.

I asked about the future. Her answer circled back to history: the Renaissance happened when different disciplines intersected and people exchanged ideas freely. We have better tools for that now—virtual conferences, global collaboration, instant communication. The question is whether we use them wisely.

We live in a Hybrid Analog Digital Society where the choices engineers make today ripple through everything tomorrow.
Organizations like IEEE exist to ensure those choices serve humanity, not just shareholder returns.

Jensen Huang's Medal of Honor isn't just recognition of past achievement. It's a statement about what kind of innovation matters.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Innovation comes in many forms, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast.

In this episode, host Tom welcomes Cristina DiGiacomo, founder of 10P1 Inc. Cristina has an extensive background in communications, business, and practical philosophy. She introduces her '10+1 Commandments', a set of ethical guidelines for human interaction with artificial intelligence. They discuss the compelling need to integrate these principles into business compliance and governance frameworks. The commandments aim to provide a high-level, universal, and perpetual moral code that addresses the risks and ethical considerations of AI in the corporate world. Cristina emphasizes the importance of maintaining ethical AI practices amidst the evolving regulatory landscape.

Key highlights:
- Philosophy in Everyday Life
- Ancient Wisdom and Modern Application
- The 10+1 Commandments Explained
- Applying the Commandments in Business
- Governance and Ethical AI

Resources:
- Cristina DiGiacomo on LinkedIn
- Website: 10+1

Innovation in Compliance was recently ranked the 4th podcast in Risk Management by 1,000,000 Podcasts.
I spotted a LinkedIn post the other day—obviously AI-generated—with dozens of enthusiastic comments underneath. Every single one also written by AI. Bots responding to bots, a whole conversation with zero humans involved. It was both hilarious and deeply sad. This got me thinking about the dead internet theory and our role as founders in either contributing to it or pushing back against it. Today I'm exploring how we can build AI tools that augment human connection rather than replace it entirely—using AI as the means, not the end.

This episode of The Bootstrapped Founder is sponsored by Paddle.com

The blog post: https://thebootstrappedfounder.com/the-dead-internet-theory-are-we-building-machines-that-only-talk-to-other-machines/
The podcast episode: https://tbf.fm/episodes/the-dead-internet-theory-are-we-building-machines-that-only-talk-to-other-machines

Check out Podscan, the podcast database that transcribes every podcast episode out there minutes after it gets released: https://podscan.fm
Send me a voicemail on Podline: https://podline.fm/arvid
You'll find my weekly article on my blog: https://thebootstrappedfounder.com
Podcast: https://thebootstrappedfounder.com/podcast
Newsletter: https://thebootstrappedfounder.com/newsletter
My book Zero to Sold: https://zerotosold.com/
My book The Embedded Entrepreneur: https://embeddedentrepreneur.com/
My course Find Your Following: https://findyourfollowing.com

Here are a few tools I use. Using my affiliate links will support my work at no additional cost to you.
- Notion (which I use to organize, write, coordinate, and archive my podcast + newsletter): https://affiliate.notion.so/465mv1536drx
- Riverside.fm (that's what I recorded this episode with): https://riverside.fm/?via=arvid
- TweetHunter (for speedy scheduling and writing Tweets): http://tweethunter.io/?via=arvid
- HypeFury (for massive Twitter analytics and scheduling): https://hypefury.com/?via=arvid60
- AudioPen (for taking voice notes and getting amazing summaries): https://audiopen.ai/?aff=PXErZ
- Descript (for word-based video editing, subtitles, and clips): https://www.descript.com/?lmref=3cf39Q
- ConvertKit (for email lists, newsletters, even finding sponsors): https://convertkit.com?lmref=bN9CZw
We hope you're enjoying the holiday season with family, friends, and loved ones. We'll be releasing new episodes again in the new year. In the meantime, today we're re-running a fascinating episode on the future of AI coaching. The past few years have seen an incredible boom in AI, and one of our colleagues, James Landay, a professor in Computer Science, thinks that when it comes to AI and education, things are just getting started. He's particularly excited about the potential for AI to serve as a coach or tutor. We hope you'll take another listen to this conversation and come away with some optimism for the potential AI has to help make us smarter and healthier.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: James Landay

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction: Russ Altman introduces guest James Landay, a professor of Computer Science at Stanford University.
(00:01:44) Evolving AI Applications: How large language models can replicate personal coaching experiences.
(00:06:24) Role of Health Experts in AI: Integrating insights from medical professionals into AI coaching systems.
(00:10:01) Personalization in AI Coaching: How AI coaches can adapt personalities and avatars to cater to user preferences.
(00:12:30) Group Dynamics in AI Coaching: Pros and cons of adding social features and group support to AI coaching systems.
(00:13:48) Ambient Awareness in Technology: Ambient awareness and how it enhances user engagement without active attention.
(00:17:24) Using AI in Elementary Education: Narrative-driven tutoring systems to inspire kids' learning and creativity.
(00:22:39) Encouraging Student Writing with AI: Using LLMs to motivate students to write through personalized feedback.
(00:23:32) Scaling AI Educational Tools: The ACORN project and creating dynamic, scalable learning experiences.
(00:27:38) Human-Centered AI: The concept of human-centered AI and its focus on designing for society.
(00:30:13) Conclusion

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode of the Product Experience Podcast, we speak with Kasia Chmielinski, co-founder of The Data Nutrition Project, about their work on responsible AI and data quality. Kasia highlights the importance of balancing innovation with ethical considerations in product management, the challenges of working within large organizations like the UN, and the need for transparency in data usage.

Featured Links: Follow Kasia on LinkedIn | The Data Nutrition Project | 'What we learned at Pendomonium and #mtpcon 2024 Raleigh: Day 2' feature by Louron Pratt

Our Hosts
Lily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space. She's worked on a diverse range of products, leading product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath.

Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.
ChatGPT ads are coming y'all.
On this episode of The Association Podcast, we welcome repeat guest Jeff De Cagna, AIMP, FRSA, FASAE, Executive Advisor at Foresight First LLC, for a deep dive into the challenges and considerations involving AI and association boards. We discuss the Future of Association Boards (FAB) Report, which De Cagna curated and edited, touching on the importance of creating a better future for association boards. Jeff stresses the need for ethical reflection in adopting AI, the concept of stewardship over traditional leadership, and fostering humanity within organizational purposes. The conversation also covers practical approaches for boards, board readiness, and actions association leaders can take to effectively navigate the evolving landscape.

FAB Report
"I always say, you can't learn how to swim if you don't jump into the water. And it's so important for people to be able to jump into the water and to really test it out."
What happens when a high-powered executive, responsible for scaling multi-billion dollar companies, is asked by her 10-year-old: "What does that money actually mean to us?"

In this deeply insightful episode, we sit down with Irene Liu, founder of Hypergrowth GC and former Chief Financial and Legal Officer at Hopin. Irene shares her journey from the Department of Justice to the front lines of the AI revolution, where she now advises the California Senate on AI safety. We explore the "Politics of the C-Suite," the necessity of high EQ in leadership, and why Irene decided to step out of the "survival mode" of corporate life to define what "enough" looks like for her family.

In this episode, we dive deep into:
- Resilience born from crisis: how working in finance in Manhattan during 9/11 shaped Irene's mental fortitude.
- Navigating layoffs with humanity: whether you are the one being let go, the one left with survivor's guilt, or the executive making the difficult calls.
- The art of the pivot: effective strategies for transitioning from public service and government roles into the private sector.
- The AI frontier: a sobering look at the "Empire of AI," the global race for innovation, and the urgent need for safeguards to protect children and vulnerable populations.
- The path to the C-Suite: the two key qualities you need to transition from "just a lawyer" to a business leader.
- "More Mommy" vs. "More Money": how to evaluate career choices through the lens of family values and the "seasons of life."
- Owning your growth: why you shouldn't let your employer drive your career, and the importance of self-investment and building a genuine community.

Connect with us:
- Learn more about our guest, Irene Liu, on LinkedIn at https://www.linkedin.com/in/ireneliu1/.
- Follow our host, Samorn Selim, on LinkedIn at https://www.linkedin.com/in/samornselim/.
- Get a copy of Samorn's book, Career Unicorns™ 90-Day 5-Minute Gratitude Journal: An Easy & Proven Way To Cultivate Mindfulness, Beat Burnout & Find Career Joy, at https://tinyurl.com/49xdxrz8.
- Ready for a career change? Schedule a free 30-minute build-your-dream-career consult by sending a message at www.careerunicorns.com.

Disclaimer: Irene would like our listeners to know that her views expressed in this podcast are her own and do not represent those of any referenced organizations.
In this end-of-year AwesomeCast, hosts Michael Sorg and Katie Dudas are joined by original AwesomeCast co-host Rob De La Cretaz for a wide-ranging discussion on the biggest tech shifts of 2025 — and what's coming next. The panel breaks down how AI tools became genuinely useful in everyday workflows, from content production and health tracking to decision-making and trend analysis. Rob shares why Bambu Labs 3D printers represent a turning point in consumer and professional 3D printing, removing friction and making rapid prototyping accessible for creators, engineers, and hobbyists alike. The episode also covers the evolving role of AI in media creation, concerns around over-reliance and trust, and why human-made content may soon become a premium feature. Intern Mac reflects on changing career paths into media production, while the crew revisits their 2025 tech predictions, holds themselves accountable, and locks in bold forecasts for 2026. Plus: Chachi's Video Game Minute, AI competition heating up, Apple Vision Pro speculation, and why “AI inside” may need clearer definitions moving forward.
In this two-part conversation, Tim Kinzie, CRS, brings decades of real estate wisdom to Real Estate Real Talk. He unpacks how AI is reshaping the industry—without replacing the relationships that keep it human. Tim dives into ethical must-knows, the importance of transparency when using AI, and why protecting client data has never been more critical. He also shares forward-looking insights on the future of real estate education and how emerging tech like blockchain could transform the transaction process. Whether you're excited about innovation or cautious about change, this series shows how agents can stay ahead and stay true to what matters: trust, expertise, and connection.
In this episode of SparX, Mukesh sits down with Debjani Ghosh, leader of the Frontier Tech Hub within NITI Aayog, for a critical discussion. They dive deep into India's technological future, the existential role of AI in national growth, and the dramatic changes impacting careers and geopolitics.

Debjani, who brings a unique perspective from 21 years at Intel and leadership at NASSCOM, discusses her experience driving change from within the government and why technology is now the "axis of power" globally.
As we move towards 2026, we are in a massive “upgrade moment” that most of us can feel. New pressures, new identities, new expectations on our work, our relationships, and our inner lives. Throughout the year, I've been speaking with professional creatives, climate and tech experts, teachers, neuroscientists, psychologists, and futurists about how AI can be used intelligently and ethically, in partnership, to ensure we do not raise a generation that relies on machines to think for them. It's not that we are being replaced by machines. It's that we're being invited to become a new kind of human, where AI isn't the headline; human transformation is. And that includes the arts, culture, and the whole of society.

Generative AI – the technologies that write our emails, draft our reports, and even create art – has become a fixture of daily life, and the philosophical and moral questions it raises are no longer abstract. They are immediate, personal, and potentially disruptive to the core of what we consider human work.

Our guest today, Sven Nyholm, is one of the leading voices helping us navigate this new reality. The Principal Investigator of AI Ethics at the Munich Center for Machine Learning and co-editor of the journal Science and Engineering Ethics, he has spent his career dissecting the intimate relationship between humanity and the machine. His body of work systematically breaks down concepts that worry us all: the responsibility gap in autonomous systems, the ethical dimensions of human-robot interaction, and the question of whether ceding intellectual tasks to a machine fundamentally atrophies our own skills. His previous books, like Humans and Robots: Ethics, Agency, and Anthropomorphism, have laid the foundational groundwork for understanding these strange new companions in our lives.

His forthcoming book is The Ethics of Artificial Intelligence: A Philosophical Introduction. The book is a rigorous exploration of everything from algorithmic bias and opacity to the long-term existential risks of powerful AI. We'll talk about what it means when an algorithm can produce perfect language without genuine meaning, why we feel entitled to take credit for an AI's creation, and what this technological leap might be costing us, personally, as thinking, moral beings.

Episode Website: www.creativeprocess.info/pod
Instagram: @creativeprocesspodcast
“I think we're betting on AI as something that can help to solve a lot of problems for us. It's the future, we think, whether it's producing text or art, or doing medical research or planning our lives for us, etc., the bet is that AI is going to be great, that it's going to get us everything we want and make everything better. But at the same time, we're gambling, at the extreme end, with the future of humanity, hoping for the best and hoping that this, what I'm calling the AI wager, is going to work out to our advantage, but we'll see.”
The Creative Process in 10 minutes or less · Arts, Culture & Society
Show Notes: In this episode, Simon speaks with Tatiana Bachkirova, a leading scholar in coaching psychology. They explore how AI is impacting the field of coaching and what it means to remain human in a world increasingly driven by algorithms. The discussion moves fluidly between neuroscience, pseudo-science, identity, belonging, and ethics, reflecting on the tensions between performance culture and authentic human development. They discuss how coaching must expand beyond individual self-optimization toward supporting meaningful, value-based projects and understanding the broader social and organisational contexts in which people live and work. AI underscores the need for ethical grounding in coaching. Ultimately, the episode reclaims coaching as a moral and relational practice, reminding listeners that the future of coaching depends not on technology, but on how we choose to stay human within it.

Key Reflections:
- AI is often a solution in search of a problem, revealing more about our anxieties than our needs.
- Coaching must evolve with the changing world, engaging complexity rather than retreating to technique.
- The focus should be on meaningful, value-driven projects that connect personal purpose with collective good.
- AI coaching risks eroding depth, ethics, and relational presence if not grounded in human awareness.
- Critical thinking anchors coaching in understanding rather than compliance, enabling ethical discernment.
- The relational quality defines coaching effectiveness - authentic dialogue remains its living core.
- Coaching should move from performance and self-optimization to reflection, purpose, and contribution.
- Human connection and ethical practice sustain trust, belonging, and relevance in the digital age.
- The future of coaching lies in integrating technology without losing our humanity.
Keywords: Coaching psychology, AI in coaching, organisational coaching, identity, belonging, neuroscience, critical thinking, human coaching, coaching ethics, coaching research

Brief Bio: Tatiana Bachkirova is Professor of Coaching Psychology in the International Centre for Coaching and Mentoring Studies at Oxford Brookes University, UK. She supervises doctoral students as an academic, and human coaches as a practitioner. She is a leading scholar in Coaching Psychology and in recent years has been exploring themes such as the role of AI in coaching, the deeper purpose of organisational coaching, what leaders seek to learn at work, and critical perspectives on the neuroscience of coaching. In her more than 80 research articles in leading journals, book chapters, and books, and in her many speaking engagements, she addresses the most challenging issues of coaching as a service to individuals, organisations, and wider societies.
In this two-part conversation, CRS Designee Tim Kinzie brings decades of real estate wisdom to Real Estate Real Talk. Together, we unpack how AI is reshaping the industry—without replacing the relationships that keep it human. Kinzie dives into ethical must-knows, the importance of transparency when using AI and why protecting client data has never been more critical. He also shares forward-looking insights on the future of real estate education and how emerging tech like blockchain could transform the transaction process. Whether you're excited about innovation or cautious about change, this series shows how agents can stay ahead and stay true to what matters: trust, expertise and connection.
What are the dangers when pastors let AI assist… or sometimes author? How do we think well about plagiarism, spiritual formation, and the loss of our pastoral voice? And are there positive, God-honouring ways to use these tools?

Stephen Driscoll works in Campus Ministry in Canberra. He's the author of 'Made in Our Image: God, artificial intelligence and you'. Stephen argues that writing is thinking, and when we automate the writing we risk automating away the deep thinking and wrestling with God's word that forms the preacher's heart. We talk dangers, temptations, reputation, the Holy Spirit, and the kinds of careful, ethical uses of AI that still require the pastor to be the author. Stephen helps us preach faithfully and use AI ethically in a rapidly changing world.

Also see:
The traumatic implications of artificial intelligence.
What morality to teach artificial intelligence?

The Church Co (http://www.thechurchco.com) is a website and app platform built specifically for churches.

Advertise on The Pastor's Heart: go to thepastorsheart.net/sponsor

Support the show
Explore how Be My Eyes is redefining accessibility with AI and human connection. CEO Mike Buckley discusses their Apple App Store Finalist nomination, the ethics of AI in assistive technology, and the challenges of awareness and global reach.

This episode is supported by Pneuma Solutions, creators of accessible tools like Remote Incident Manager and Scribe. Get $20 off with code dt20 at https://pneumasolutions.com/ and enter to win a free subscription at doubletaponair.com/subscribe!

In this episode of Double Tap, Steven Scott and Shaun Preece chat with Be My Eyes CEO Mike Buckley. The conversation begins with the app's recognition as an Apple App Store Cultural Impact finalist, celebrating its global influence on the blind and low vision community. The discussion evolves into an honest exploration of AI's role in accessibility, including Be My AI, human volunteers, and the emotional dimensions of social connection.

Mike shares insights into:
- The balance between AI utility and human kindness.
- Overcoming the trepidation blind users feel before calling a volunteer.
- Ethical dilemmas around AI companionship, mental health, and responsible guardrails.
- Future possibilities for niche AI models designed for blind users.
Like, comment, and subscribe for more conversations on tech and accessibility.
Share your thoughts: feedback@doubletaponair.com
Leave us a voicemail: 1-877-803-4567
Send a voice or video message via WhatsApp: +1-613-481-0144

Relevant Links
Be My Eyes: https://www.bemyeyes.com
Find Double Tap online: YouTube, Double Tap Website

Follow on:
YouTube: https://www.doubletaponair.com/youtube
X (formerly Twitter): https://www.doubletaponair.com/x
Instagram: https://www.doubletaponair.com/instagram
TikTok: https://www.doubletaponair.com/tiktok
Threads: https://www.doubletaponair.com/threads
Facebook: https://www.doubletaponair.com/facebook
LinkedIn: https://www.doubletaponair.com/linkedin

Subscribe to the Podcast:
Apple: https://www.doubletaponair.com/apple
Spotify: https://www.doubletaponair.com/spotify
RSS: https://www.doubletaponair.com/podcast
iHeartRadio: https://www.doubletaponair.com/iheart

About Double Tap
Hosted by the insightful duo, Steven Scott and Shaun Preece, Double Tap is a treasure trove of information for anyone who's blind or partially sighted and has a passion for tech. Steven and Shaun not only demystify tech, but they also regularly feature interviews and welcome guests from the community, fostering an interactive and engaging environment. Tune in every day of the week, and you'll discover how technology can seamlessly integrate into your life, enhancing daily tasks and experiences, even if your sight is limited. "Double Tap" is a registered trademark of Double Tap Productions Inc. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
- Updates on AI Tools and Book Generator (0:10)
- Health Advice and Lifestyle Habits (1:42)
- Critique of Conventional Doctors (6:50)
- The Rise of AI in Healthcare (10:05)
- Better Than a Doctor AI Feature (17:24)
- Health Ranger's AI and Robotics Projects (36:07)
- Philosophical Discussion on AI and Human Rights (1:10:58)
- The Future of AI and Human Interaction (1:17:53)
- The Role of AI in Survival Scenarios (1:18:57)
- The Potential for AI in Enhancing Human Life (1:19:13)
- Personal Experience with AI and Health Data (1:19:32)
- AI in Diagnostics and Natural Solutions (1:22:17)
- Critique of Google and AI Ethics (1:25:00)
- Impact of AI on Human Relationships and Society (1:30:24)
- Debate on Consciousness and AI (1:35:54)
- Historical and Scientific Perspectives on Consciousness (1:50:21)
- Practical Applications and Future of AI (1:53:17)

For more updates, visit: http://www.brighteon.com/channel/hrreport

NaturalNews videos would not be possible without you. As always, we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.

▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
In this episode of Tea With Gen Z, we sit down with Dr. Aqeel Taher, a long-serving AUS faculty member. We discuss the strengths and challenges of Gen Z engineering students, touching on ethics, learning styles, and the evolving academic landscape. The conversation concludes with Dr. Aqeel's perspective on AI, its ethical use, and his advice for the next generation of engineers.
Gabriel Weintraub studies how digital markets evolve. In that regard, he says platforms like Amazon, Uber, and Airbnb have already disrupted multiple verticals through their use of data and digital technologies. Now, they face both the opportunity and the challenge of leveraging AI to further transform markets, while doing so in a responsible and accountable way. Weintraub is also applying these insights to ease friction and accelerate results in government procurement and regulation. Ultimately, we must fall in love with solving the problem, not with the technology itself, Weintraub tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Gabriel Weintraub

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction – Russ Altman introduces guest Gabriel Weintraub, a professor of operations, information, and technology at Stanford University.
(00:03:00) School Lunches to Digital Platforms – How designing markets in Chile led Gabriel to study digital marketplaces.
(00:03:57) What Makes a Good Market – Outlining the core principles that constitute a well-functioning market.
(00:05:29) Opportunities and Challenges Online – The challenges associated with the vast data visibility of digital markets.
(00:06:56) AI and the Future of Search – How AI and LLMs could revolutionize digital platforms.
(00:08:15) Rise of Vertical Marketplaces – The new specialized markets that curate supply and ensure quality.
(00:10:23) Winners and Losers in Market Shifts – How technology is reshaping industries from real estate to travel.
(00:12:38) Government Procurement in Chile – Applying market design and AI tools to Chile's procurement system.
(00:15:00) Leadership and Adoption – The role of leadership in modernizing government systems.
(00:18:59) AI in Government and Regulation – Using AI to help governments streamline complex bureaucratic systems.
(00:21:45) Streamlining Construction Permits – Piloting AI tools to speed up municipal construction-permit approvals.
(00:23:20) Building an AI Strategy – Creating an AI strategy that aligns with business or policy goals.
(00:25:26) Workforce and Experimentation – Training employees to experiment with LLMs and explore productivity gains.
(00:27:36) Humans and AI Collaboration – The importance of designing AI systems to augment human work, not replace it.
(00:28:26) Future in a Minute – Rapid-fire Q&A: AI's impact, passion and resilience, and soccer dreams.
(00:30:39) Conclusion

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Send us a text
Are you feeding your AI tools private info you'd never hand to a stranger? If you're dropping sensitive data into ChatGPT, Canva, or Notion without blinking, this episode is your wake-up call. In Part 2 of our eye-opening conversation with AI ethics strategist Elizabeth Goede, we delve into the practical aspects of AI use and how to safeguard your business, clients, and future. This one isn't about fear. It's about founder-level responsibility and smart decision-making in a world where the tools are evolving faster than most policies.

Grab your ticket to the AI in Action Conference – March 19–20, 2026 in Grand Rapids, MI. You'll get two days of hands-on AI application with 12 done-with-you business tools. This isn't theory. It's transformation.

In This Episode, You'll Learn:
- Why founders must have an AI policy (yes, even solopreneurs)
- The #1 AI tool Elizabeth would never trust with sensitive data
- How to vet the tools you already use (based on their founders, not just features)
- What "locking down your data" actually looks like
- A surprising leadership insight AI will reveal about your team

Resources & Links:
- AI in Action Conference – Registration
- Follow Elizabeth Goede on socials (LinkedIn, Instagram)
- Related episode: Episode 104 | AI Ethics and Security (Part 1) with Elizabeth Goede

Want to increase revenue and impact? Listen to "She's That Founder" for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.
Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone. Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others' knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies, and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI Ethics.

Related Resources:
Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6

A transcript of this episode is here.
Send us a text
Is your AI use exposing your business to risks you can't see coming? It's not just about saving time – it's about protecting your clients, your content, and your credibility. In this episode, Dawn Andrews sits down with AI strategist Elizabeth Goede to unpack the real (and often ignored) risks of using AI in business. From ChatGPT to Claude, learn what founders must know about security, data privacy, and ethical use – without getting lost in the tech.

"You wouldn't post your financials on Instagram. So why are you pasting them into AI tools without checking where they're going?"

Listen in and get equipped to lead smart, safe, and scalable with AI – no fear-mongering, just facts with a side of sass. Want to stop talking about AI and actually use it safely and strategically? Join us at the AI in Action Conference, happening March 19–20, 2026 in Grand Rapids, Michigan. Get hands-on with 12 action-packed micro workshops designed to help you apply AI in real time to boost your business, protect your data, and ditch the digital grunt work. Register now.

What You'll Learn:
- How even small service businesses are vulnerable to AI misuse
- The one rule for deciding what data is safe to input into AI tools
- Why AI models like ChatGPT, Claude, and Copilot aren't created equal
- The hidden risks of giving tools access to your drive, emails, or client docs
- What every founder should ask before signing any AI-related agreement

Resources & Links:
- AI in Action Conference – Registration
- Follow Elizabeth Goede on socials (LinkedIn, Instagram)
- Related episode: Episode 93 | The Dirty Secret About AI No Female Executive Wants To Admit—And Why It's Hurting You. This episode dives into the real reason female founders hesitate with AI – and the hidden risks of staying on the sidelines. Includes smart insights on the security tradeoffs when you don't understand where your data is going or how to control it.

Want to increase revenue and impact?
Listen to “She's That Founder” for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.
AI ethics expert Mona Hamdy joins Stan Stalnaker for a beguiling look at the future of AI and how to get it right, at FII9 in Riyadh, Saudi Arabia. Part 2 of a 5-part series.
Artificial intelligence often struggles with the ambiguity, nuance, and shifting context that define human reasoning. Fuzzy logic offers an alternative by modelling meaning in degrees rather than absolutes.

In this roundtable episode, ResearchPod speaks with Professors Edy Portmann, Irina Perfilieva, Vilem Novak, Cristina Puente, and José María Alonso about how fuzzy systems capture perception, language, social cues, and uncertainty. Their insights contribute to the upcoming FMsquare Foundation booklet on fuzzy logic, exploring the role of uncertainty-aware reasoning in the future of AI.

You can read the previous booklet from this series here: Fuzzy Design-Science Research
You can listen to previous fuzzy podcasts here: fmsquare.org
Ravit Dotan argues that the primary barrier to accountable AI is not a lack of ethical clarity, but organizational roadblocks. While companies often understand what they should do, the real challenge is organizational dynamics that prevent execution—AI ethics has been shunted into separate teams lacking power and resources, with incentive structures that discourage engineers from raising concerns. Drawing on work with organizational psychologists, she emphasizes that frameworks prescribe what systems companies should have but ignore how to navigate organizational realities. The key insight: responsible AI can't be a separate compliance exercise but must be embedded organically into how people work. Ravit discusses a recent shift in her orientation from focusing solely on governance frameworks to teaching people how to use AI thoughtfully. She critiques "take-out mode", where users passively order finished outputs, which undermines skills and critical review. The solution isn't just better governance, but teaching workers how to incorporate responsible AI practices into their actual workflows.

Dr. Ravit Dotan is the founder and CEO of TechBetter, an AI ethics consulting firm, and Director of the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh. She holds a Ph.D. in Philosophy from UC Berkeley, has been named one of the "100 Brilliant Women in AI Ethics" (2023), and was a finalist for "Responsible AI Leader of the Year" (2025). Since 2021, she has consulted with tech companies, investors, and local governments on responsible AI. Her recent work emphasizes teaching people to use AI thoughtfully while maintaining their agency and skills. Her work has been featured in The New York Times, CNBC, Financial Times, and TechCrunch.

Transcript
My New Path in AI Ethics (October 2025)
The Values Encoded in Machine Learning Research (FAccT 2022 Distinguished Paper Award)
Responsible AI Maturity Framework
Austin Gravley of Digital Babylon and the What Would Jesus Tech podcast talks about how the Chinese Communist Party is looking at using AI to enhance the genetic "quality" of their children, among other uses. What are the ethical guidelines? What are acceptable and unacceptable uses? The National Day of Prayer Taskforce's Kathy Branzell (who is a "military brat") talks about the importance of supporting and praying for our veterans and current military members. She also talks about giving thanks and "telling of His glory among the nations, His wonderful deeds among all the peoples." Faith Radio podcasts are made possible by your support. Give now: Click here
Keywords: cybersecurity, technology, AI, IoT, Intel, startups, security culture, talent development, career advice

Summary: In this episode of No Password Required, hosts Jack Clabby and Kayleigh Melton engage with Steve Orrin, the federal CTO at Intel, discussing the evolving landscape of cybersecurity, the importance of diverse teams, and the intersection of technology and security. Steve shares insights from his extensive career, including his experiences in the startup scene, the significance of AI and IoT, and the critical blind spots in cybersecurity practices. The conversation also touches on nurturing talent in technology and offers valuable advice for young professionals entering the field.

Takeaways:
- IoT is now referred to as the Edge in technology.
- Diverse teams bring unique perspectives and solutions.
- Experience in cybersecurity is crucial for effective team building.
- The startup scene in the 90s was vibrant and innovative.
- Understanding both biology and technology can lead to unique career paths.
- AI and IoT are integral to modern cybersecurity solutions.
- Organizations often overlook the importance of security in early project stages.
- Nurturing talent involves giving them interesting projects and autonomy.
- Young professionals should understand the hacker mentality to succeed in cybersecurity.
- Customer feedback is essential for developing effective security solutions.

Titles:
The Edge of Cybersecurity: Insights from Steve Orrin
Navigating the Intersection of Technology and Security

Sound bites:
"IoT is officially called the Edge."
"We're making mainframe sexy again."
"Surround yourself with people smarter than you."
Chapters:
00:00 Introduction to Cybersecurity and the Edge
01:48 Steve Orrin's Role at Intel
04:51 The Evolution of Security Technology
09:07 The Startup Scene in the 90s
13:00 The Intersection of Biology and Technology
15:52 The Importance of AI and IoT
20:30 Blind Spots in Cybersecurity
25:38 Nurturing Talent in Technology
28:57 Advice for Young Cybersecurity Professionals
32:10 Lifestyle Polygraph: Fun Questions with Steve
Can an AI truly care about your feelings—or is emotional intelligence in machines just the most sophisticated form of manipulation? Dr. Alan Cowen of Hume AI joins the crew to unpack the promise and peril of emotionally adept bots, even as they're quietly shaping how we connect, seek help, and parent in the digital age.

- Virtual Try On Free Online - AI Clothes Changer | i-TryOn
- Oreo-maker Mondelez to use new generative AI tool to slash marketing costs
- OpenAI Moves to Generate AI Music in Potential Rivalry With Startup Suno
- Surprising no one, researchers confirm that AI chatbots are incredibly sycophantic
- Microsoft's Mico heightens the risks of parasocial LLM relationships
- Armed police swarm student after AI mistakes bag of Doritos for a weapon - Dexerto
- A Definition of AGI
- OpenAI Finalizes Corporate Restructuring, Gives Microsoft 27% Stake and Technology Access Until 2032 - Slashdot
- This mom's son was asking Tesla's Grok AI chatbot about soccer. It told him to send nude pics, she says
- Nvidia Becomes World's First $5 Trillion Company - Slashdot
- Paris Hilton Has Been Training Her AI for Years
- How Realistic is OpenAI's 2028 Timeline For Automating AI Research Itself?
- Tesla's "Mad Max" mode is now under federal scrutiny
- Zenni's Anti-Facial Recognition Glasses are Eyewear for Our Paranoid Age
- Alphabet earnings
- Meta earnings
- You Have No Idea How Screwed OpenAI Actually Is
- Elon Musk's Grokipedia Pushes Far-Right Talking Points
- AOL to be sold to Bending Spoons for roughly $1.5B
- Casio's Fluffy AI Robot Squeaked Its Way Into My Heart ("For the sake of the show, I will suffer this torture" -jj)
- Machine Olfaction and Embedded AI Are Shaping the New Global Sensing Industry
- A silly little photoshoot with your friends
- Bugonia
- Celebrating 25 years of Google Ads
- The Data Is In: The Washington Post Can't Replace Its "TikTok Guy"
- Peak screen?

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Dr. Alan Cowen

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
zscaler.com/security
zapier.com/machines
agntcy.org
ventionteams.com/twit
Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications, and work required for AI to serve the public good, not just private gain.

Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; the flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone's radar; failures of policy and regulation; positive signals; and getting educated and fostering the best AI has to offer.

Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab, which is dedicated to ensuring AI serves the public good and not just private gain.

Related Resources:
HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication

A transcript of this episode is here.
ChatGPT ads are coming.
Show Notes:

Steve recounts his senior year at Harvard, and how he was torn between pursuing acting and philosophy. He graduated with a dual degree in philosophy and math but also found time to act in theater, participating in 20 shows.

A Love of Theater and a Move to London
Steve explains why the lack of a theater major at Harvard allowed him to explore acting more than a university with a theater major would have. He touches on his parents' concerns about his career prospects if he pursued acting, and his decision to apply to both acting and philosophy graduate schools. Steve discusses his rejection from all graduate schools and why he decided to move to London with friends Evan Cohn and Brad Rouse. He talks about his experience in London.

Europe on $20 a Day
Steve details his backpacking trip through Europe on a $20-a-day budget, staying with friends from Harvard and high school. He mentions a job opportunity in Japan through the Japanese Ministry of Education and describes his three-year stint there, working as a native English speaker and being immersed in Japanese culture. He shares his experiences of living in the countryside and reflects on the impact of living in a different culture, learning some Japanese, and making Japanese friends. He discusses the personal growth and self-reflection that came from his time in Japan, including his first steps off the "achiever track."

On to Philosophy Graduate School
When Steve returned to the U.S., he decided to apply to philosophy graduate schools again, this time with more success. He enrolled at the University of Michigan. However, he was miserable during grad school, which led him to seek therapy. Steve credits therapy with helping him make better choices in life. He discusses the competitive and prestigious nature of the Michigan philosophy department and the challenges of finishing his dissertation. He touches on the narrow and competitive aspects of pursuing a career in philosophy and shares his experience of finishing his dissertation and the support he received from a good co-thesis advisor.

Kalamazoo College and Improv
Steve describes his postdoc experience at Kalamazoo College, where he continued his improv hobby and formed his own improv group. He mentions a mockumentary-style improv movie called Comic Evangelists that premiered at the AFI Film Festival. Steve moved to Buffalo and Niagara University, and reflects on the challenges of adjusting to a non-research job. He discusses his continued therapy in Buffalo and the struggle with both societal and his own expectations of professional status; however, with the help of a friend, he came to the realization that he had "made it" in his current circumstances. Steve describes his acting career in Buffalo, including roles in Shakespeare in the Park and collaborating with a classmate, Ian Lithgow.

A Specialty in Philosophy of Science
Steve shares his personal life, including meeting his wife in 2009 and starting a family. He explains his specialty in philosophy of science, focusing on the math and precise questions in analytic philosophy. He discusses his early interest in AI and computational epistemology, including the ethics of AI and the superintelligence worry. Steve describes his involvement in a group that discusses the moral status of digital minds and AI alignment.

Aligning AI with Human Interests
Steve reflects on the challenges of aligning AI with human interests and the potential existential risks of advanced AI. He shares his concerns about the future of AI and the potential for AI to have moral status. He touches on the superintelligence concern and the challenges of aligning AI with human goals. Steve mentions the work of Eliezer Yudkowsky and the importance of governance and alignment in AI development. He reflects on the broader implications of AI for humanity and the need for careful consideration of long-term risks.

Harvard Reflections
Steve mentions Math 45 and how it kicked his butt, and how his core classes included jazz, an acting class, and clown improv with Jay Nichols.

Timestamps:
01:43: Dilemma Between Acting and Philosophy
03:44: Rejection and Move to London
07:09: Life in Japan and Cultural Insights
12:19: Return to Academia and Grad School Challenges
20:09: Therapy and Personal Growth
22:06: Transition to Buffalo and Philosophy Career
26:54: Philosophy of Science and AI Ethics
33:20: Future Concerns and AI Predictions
55:17: Reflections on Career and Personal Growth

Links:
Steve's Website: https://stevepetersen.net/
On AI superintelligence: If Anyone Builds It, Everyone Dies; Superintelligence; The Alignment Problem
Some places to donate: The Long-Term Future Fund; Open Philanthropy
On improv: Impro; Upright Citizens Brigade Comedy Improvisation Manual

Featured Non-profit:
The featured non-profit of this week's episode is brought to you by Rich Buery, who reports: "Hi, I'm Rich Buery, class of 1992. The featured nonprofit of this episode of The 92 Report is iMentor. iMentor is a powerful youth mentoring organization that connects volunteers with high school students and prepares them on the path to and through college. Mentors stay with the students through the last two years of high school and the beginning of their college journey. I helped found iMentor over 25 years ago and served as its founding executive director, and I am proud that over the last two decades I've remained on the board of directors. It's truly a great organization. They need donors and they need volunteers. You can learn more about their work at www.imentor.org. That's www.imentor.org. And now here is Will Bachman with this week's episode." To learn more about their work, visit: www.imentor.org.
Have you ever chatted with customer support, only to realize halfway through that it wasn't a person? That split-second drop in your stomach — when the replies feel right but not real — might be the defining digital wellness moment of our time. In this episode, we dive into a provocative question that could shape the future of human connection: Should we all have the right to refuse AI? As artificial intelligence quietly takes over everything from hiring to healthcare, we explore what happens when convenience starts replacing consent — and when "I want to talk to a person" becomes a radical demand.

Here's what you'll hear:
Real-world examples of AI in daily life that you probably didn't realize you've already consented to.
How AI is changing the way we experience empathy, trust, and human connection.
The emerging idea of a Right to Refuse AI — what it could mean for your health, your data, and your dignity.
Why automation might make "human interaction" the next luxury product.
The ethical, emotional, and psychological costs of letting AI speak for us.

If you care about digital balance, human-centered technology, and wellness in the age of automation, this conversation will challenge how you think about your relationship with machines. The right to privacy. The right to repair. The right to disconnect. Is it time we added one more — the right to refuse AI? Listen now and discover why the most human act in the digital age might be as simple as saying, "I want a person." Stay connected, stay curious, and subscribe to The Healthier Tech Podcast for more conversations at the crossroads of technology and wellbeing. This episode is brought to you by Shield Your Body—a global leader in EMF protection and digital wellness. Because real wellness means protecting your body, not just optimizing it.
If you found this episode eye-opening, leave a review, share it with someone tech-curious, and don't forget to subscribe to Shield Your Body on YouTube for more insights on living healthier with technology.
Is there anything real left on the internet? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly explore deepfakes, scams, and cybercrime with the Director of Threat Research at Bitdefender, Bogdan Botezatu. Scams are a trillion-dollar industry; keep your loved ones safe with Bitdefender: https://bitdefend.me/90-StarTalk

NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/deepfakes-and-the-war-on-truth-with-bogdan-botezatu/

Thanks to our Patrons Bubbalotski, Oskar Yazan Mellemsether, Craig A, Andrew, Liagadd, William ROberts, Pratiksha, Corey Williams, Keith, anirao, matthew, Cody T, Janna Ladd, Jen Richardson, Elizaveta Nikitenko, James Quagliariello, LA Stritt, Rocco Ciccolini, Kyle Jones, Jeremy Jones, Micheal Fiebelkorn, Erik the Nerd, Debbie Gloom, Adam Tobias Lofton, Chad Stewart, Christy Bradford, David Jirel, e4e5Nf3, John Rost, cluckaizo, Diane Féve, Conny Vigström, Julian Farr, karl Lebeau, AnnElizabeth, p johnson, Jarvis, Charles Bouril, Kevin Salam, Alex Rzem, Joseph Strolin, Madelaine Bertelsen, noel jimenez, Arham Jain, Tim Manzer, Alex, Ray Weikal, Kevin O'Reilly, Mila Love, Mert Durak, Scrubbing Bubblez, Lili Rose, Ram Zaidenvorm, Sammy Aleksov, Carter Lampe, Tom Andrusyna, Raghvendra Singh Bais, ramenbrownie, cap kay, B Rhodes, Chrissi Vergoglini, Micheal Reilly, Mone, Brendan D., Mung, J Ram, Katie Holliday, Nico R, Riven, lanagoeh, Shashank, Bradley Andrews, Jeff Raimer, Angel velez, Sara, Timothy Criss, Katy Boyer, Jesse Hausner, Blue Cardinal, Benjamin Kedwards, Dave, Wen Wei LOKE, Micheal Sacher, Lucas, Ken Kuipers, Alex Marks, Amanda Morrison, Gary Ritter Jr, Bushmaster, thomas hennigan, Erin Flynn, Chad F, fro drick, Ben Speire, Sanjiv VIJ, Sam B, BriarPatch, and Mario Boutet for supporting us this week.

Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
October 15, 2025: AI is no longer just automating work — it's reorganizing it. In today's episode of Future Ready Today, Jacob Morgan explores five major stories reshaping leadership and HR:
In this episode of Creative Current Events, Margo and Abby dive into a whirlwind of fascinating stories and fresh perspectives from the worlds of creativity, tech, and everyday life. They chat about the accidental invention of the snow globe and the surprising rise of art fairs hosted in U-Haul trucks — celebrating human resourcefulness and the scrappy side of creativity. They also dig into AI and authenticity — from lawsuits against media companies accused of data theft, to AI-generated actors in Hollywood, and the ethical gray areas of algorithm-driven platforms like Spotify. Together, Margo and Abby unpack how these developments are reshaping creative industries and what it means to stay human in a data-driven world. Whether you're a maker, dreamer, or just looking for a new lens on today's creative headlines, this episode proves that inspiration is everywhere — sometimes in the most unexpected places.

Articles Mentioned:
AI Lawsuits: Japanese Media Giants vs. Perplexity
AI Actor Sparks Outrage in Hollywood
Cities & Memory: Global Sound Mapping Project
The Sphere: Wizard of Oz Experience
Magnopus: Storytelling Through Immersive Tech
Banana Republic's Vintage Catalog Revival
Carhartt x Bethany Yellowtail Collaboration
Coach's Coffee Shops Connect with Gen Z
Ugmonk: Intentional Design Meets the Analog To-Do List

Connect with Abby:
https://www.abbyjcampbell.com/
https://www.instagram.com/ajcampkc/
https://www.pinterest.com/ajcampbell/

Connect with Margo:
www.windowsillchats.com
www.instagram.com/windowsillchats
www.patreon.com/inthewindowsill
https://www.yourtantaustudio.com/thefoundry
October 10, 2025: A new era of Responsible Intelligence is emerging. Governments are considering human-quota laws to keep people in the loop. Kroger is rolling out a values-based AI assistant that redefines trust and transparency. And legal experts warn that AI bias in HR could soon become a courtroom reality. In today's Future-Ready Today, Jacob Morgan explores how these stories signal the end of reckless automation and the rise of accountable leadership. He shares how the future of work will be shaped not by faster machines, but by wiser humans—and offers one simple “1%-a-Day” challenge to help you lead responsibly in the age of AI.
From lawmakers cracking down on loud ads to Deloitte caught peddling AI-fabricated reports, this episode explores how tech's greatest promises and worst follies are colliding right now.

No more loud commercials: Governor Newsom signs SB 576 | Governor of California
ChatGPT Now Has 800 Million Weekly Active Users - Slashdot
OpenAI will let developers build apps that work inside ChatGPT
Senate Dem Report Finds Almost 100 Million Jobs Could Be Lost To AI - Slashdot
Jony Ive's secretive AI hardware reportedly hit three problems
Deloitte to refund Australian government after AI hallucinations found in report
Anthropic and Deloitte Partner to Build AI Solutions for Regulated Industries
America is now one big bet on AI
The flawed Silicon Valley consensus on AI
Data centers responsible for 92% of GDP growth in the first half of this year
Martin Peers: The AI Profit Fantasy
A Debate About A.I. Plays Out on the Subway Walls
Insurers hesitate at multibillion-dollar claims faced by OpenAI, Anthropic in AI lawsuits
Slop factory worries about slop: MrBeast says AI could threaten creators' livelihoods, calling it 'scary times' for the industry
CAN LARGE LANGUAGE MODELS DEVELOP GAMBLING ADDICTION?
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Have we passed peak social media?
As Elon Musk Preps Tesla's Optimus for Prime Time, Big Hurdles Remain
OpenAI signs huge chip deal with AMD, and AMD stock soars
Google CodeMender
Introducing the Gemini 2.5 Computer Use model
Young People Are Falling in Love With Old Technology
Our friend Glenn

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
zapier.com/machines
agntcy.org
fieldofgreens.com Promo Code "IM"
pantheon.io
LightSpeed VT: https://www.lightspeedvt.com/
Dropping Bombs Podcast: https://www.droppingbombs.com/

What if a 16-year-old yogurt scooper could turn into a billionaire exit master by 31?