Podcasts about ai ethics

  • 723 PODCASTS
  • 1,282 EPISODES
  • 44m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Jun 19, 2025 LATEST

POPULARITY (chart spanning 2017–2024)


Best podcasts about ai ethics

Show all podcasts related to ai ethics

Latest podcast episodes about ai ethics

The Road to Accountable AI
Dale Cendali: How Courts (and Maybe Congress!) Will Determine AI's Copyright Fate

The Road to Accountable AI

Play Episode Listen Later Jun 19, 2025 39:33 Transcription Available


Kevin Werbach interviews Dale Cendali, one of the country's leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS -- because Cendali is litigating it on behalf of Thomson Reuters -- she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on Generative AI training, what counts as infringement in AI outputs, and what is sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution. Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm's nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute's Copyright Restatement project and sits on the Board of the International Trademark Association.

Transcript
Thomson Reuters Wins Key Fair Use Fight With AI Startup
Dale Cendali - 2024 Law360 MVP
Copyright Office Report on Generative AI Training

The Social-Engineer Podcast
Ep. 279 - Security Awareness Series - Dodging Turkeys and Security Awareness with Stacey Edmonds REPLAY

The Social-Engineer Podcast

Play Episode Listen Later Jun 16, 2025 36:52


REPLAY (Original Air Date Oct 21, 2024) Today on the Social-Engineer Podcast: The Security Awareness Series, Chris is joined by Stacey Edmonds. Stacey is a multi-disciplinary EdTech innovator and Digital Safety Pioneer, driven by a commitment to democratizing knowledge. Stacey's expertise, encompassing social science, education, EdTech, and multi-platform screen production, culminated in the founding of Lively, which we will hear all about on this podcast. Since 2002, Stacey has been designing and delivering enterprise-wide cyber safety upskilling programs. In 2023, embodying her mission to make knowledge accessible, Stacey launched 'Dodgy or Not?' – a social enterprise offering an engaging approach to digital safety education. She continues to bridge the gap between emerging technologies and practical education, driving innovation in AI ethics and digital literacy - she is also known for deepfaking herself. [Oct 21, 2024]

00:00 - Intro
00:19 - Intro Links:
- Social-Engineer.com - http://www.social-engineer.com/
- Managed Voice Phishing - https://www.social-engineer.com/services/vishing-service/
- Managed Email Phishing - https://www.social-engineer.com/services/se-phishing-service/
- Adversarial Simulations - https://www.social-engineer.com/services/social-engineering-penetration-test/
- Social-Engineer channel on SLACK - https://social-engineering-hq.slack.com/ssb
- CLUTCH - http://www.pro-rock.com/
- innocentlivesfoundation.org - http://www.innocentlivesfoundation.org/
03:00 - Stacey Edmonds Intro
04:18 - Teaching, Trains & Turkeys
08:43 - Toilets vs Videos
11:16 - Dodgy or Not?
15:15 - Social Engineering for Good!
17:46 - Pause for the Cause
20:17 - Training in Real Time
24:11 - Real Time Threat Detection
27:49 - Culture is Everything
30:33 - Find Stacey Edmonds online
- LinkedIn: in/staceyedmonds/
31:28 - Mentors
- Carolyn Breeze
- Chris Hadnagy
- Janine Thompson
- Steve Rowe
- Shane Bell
33:58 - Book Recommendations
- Feel The Fear and Do It Anyway - Susan Jeffers
- The Hitchhiker's Guide to the Galaxy - Douglas Adams
- 1984 - George Orwell
- Man-Made - Tracey Spicer
35:51 - Wrap Up & Outro
- www.social-engineer.com
- www.innocentlivesfoundation.org

Big Brains
Are We Making AI Too Human?, with James Evans

Big Brains

Play Episode Listen Later Jun 12, 2025 31:15


Prof. James Evans, a University of Chicago sociologist and data scientist, believes we're training AI to think too much like humans—and it's holding science back. In this episode, Evans shares how our current models risk narrowing scientific exploration rather than expanding it, and explains why he's pushing for AIs that think differently from us—what he calls “cognitive aliens.” Could these “alien minds” help us unlock hidden breakthroughs? And what would it take to build them?

Inside The Vatican
Roundtable: Pope Leo XIV, AI ethics, sexual abuse crisis reforms, Vatican–China relations

Inside The Vatican

Play Episode Listen Later Jun 12, 2025 35:10


We pause our usual “Inside the Vatican” weekly format to continue the conversation from America Media's subscriber-only Conclave Debrief event this past Monday, June 9. Hosts Colleen Dulle, Gerard O'Connell, and producer Ricardo da Silva respond to subscriber questions about Pope Leo XIV and the recent conclave. Gerard compares this conclave with the 2013 election of Pope Francis, highlighting what made it unique. Colleen shares her firsthand experience covering a conclave live from the Vatican for the first time, while Ricardo reflects on the surprising surge in secular media coverage and growing interest in the papacy both in the U.S. and at St. Peter's. They also answer questions about Pope Leo's early warnings on artificial intelligence, the urgent need for structural reforms to address the sexual abuse crisis with a focus on survivors, and how his background may shape Vatican-China diplomacy going forward. Find full show notes and related links on our website. Support our podcast—become a digital subscriber to America Media. Learn more about your ad choices. Visit megaphone.fm/adchoices

Smart Software with SmartLogic
LangChain: LLM Integration for Elixir Apps with Mark Ericksen

Smart Software with SmartLogic

Play Episode Listen Later Jun 12, 2025 38:18


Mark Ericksen, creator of the Elixir LangChain framework, joins the Elixir Wizards to talk about LLM integration in Elixir apps. He explains how LangChain abstracts away the quirks of different AI providers (OpenAI, Anthropic's Claude, Google's Gemini) so you can work with any LLM through one consistent API. We dig into core features like conversation chaining, tool execution, automatic retries, and production-grade fallback strategies. Mark shares his experiences maintaining LangChain in a fast-moving AI world: how it shields developers from API drift, manages token budgets, and handles rate limits and outages. He also reveals testing tactics for non-deterministic AI outputs, configuration tips for custom authentication, and the highlights of the new v0.4 release, including “content parts” support for thinking-style models.

Key topics discussed in this episode:
• Abstracting LLM APIs behind a unified Elixir interface
• Building and managing conversation chains across multiple models
• Exposing application functionality to LLMs through tool integrations
• Automatic retries and fallback chains for production resilience
• Supporting a variety of LLM providers
• Tracking and optimizing token usage for cost control
• Configuring API keys, authentication, and provider-specific settings
• Handling rate limits and service outages with graceful degradation
• Processing multimodal inputs (text, images) in LangChain workflows
• Extracting structured data from unstructured LLM responses
• Leveraging “content parts” in v0.4 for advanced thinking-model support
• Debugging LLM interactions using verbose logging and telemetry
• Kickstarting experiments in LiveBook notebooks and demos
• Comparing Elixir LangChain to the original Python implementation
• Crafting human-in-the-loop workflows for interactive AI features
• Integrating LangChain with the Ash framework for chat-driven interfaces
• Contributing to open-source LLM adapters and staying ahead of API changes
• Building fallback chains (e.g., OpenAI → Azure) for seamless continuity
• Embedding business logic decisions directly into AI-powered tools
• Summarization techniques for token efficiency in ongoing conversations
• Batch processing tactics to leverage lower-cost API rate tiers
• Real-world lessons on maintaining uptime amid LLM service disruptions

Links mentioned:
https://rubyonrails.org/
https://fly.io/
https://zionnationalpark.com/
https://podcast.thinkingelixir.com/
https://github.com/brainlid/langchain
https://openai.com/
https://claude.ai/
https://gemini.google.com/
https://www.anthropic.com/
Vertex AI Studio: https://cloud.google.com/generative-ai-studio
https://www.perplexity.ai/
https://azure.microsoft.com/
https://hexdocs.pm/ecto/Ecto.html
https://oban.pro/
Chris McCord's ElixirConf EU 2025 Talk: https://www.youtube.com/watch?v=ojL_VHc4gLk
Getting started: https://hexdocs.pm/langchain/gettingstarted.html
https://ash-hq.org/
https://hex.pm/packages/langchain
https://hexdocs.pm/igniter/readme.html
https://www.youtube.com/watch?v=WM9iQlQSFg
@brainlid on Twitter and BlueSky

Special Guest: Mark Ericksen.
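The retry-and-fallback pattern the episode describes (e.g., a primary provider falling back to Azure) is language-agnostic. As a rough illustration, here is a minimal Python sketch, not Elixir LangChain's actual API; the `flaky_primary` and `stable_fallback` callables are hypothetical stand-ins for real provider clients:

```python
import time

def call_with_fallback(providers, prompt, retries=2, backoff=0.0):
    """Try each provider in order; retry transient failures before falling back."""
    last_error = None
    for call in providers:
        for attempt in range(retries + 1):
            try:
                return call(prompt)
            except RuntimeError as err:  # stand-in for rate-limit / outage errors
                last_error = err
                time.sleep(backoff * (2 ** attempt))  # exponential backoff between retries
    raise RuntimeError(f"all providers failed; last error: {last_error}")

# Hypothetical provider callables standing in for real OpenAI / Azure clients.
def flaky_primary(prompt):
    raise RuntimeError("429: rate limited")

def stable_fallback(prompt):
    return f"completion for: {prompt}"

print(call_with_fallback([flaky_primary, stable_fallback], "hello"))
# → completion for: hello
```

A production version would also distinguish retryable errors (rate limits, timeouts) from permanent ones (auth failures), which is the kind of nuance a framework handles for you.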

The Road to Accountable AI
Brenda Leong: Building AI Law Amid Legal Uncertainty

The Road to Accountable AI

Play Episode Listen Later Jun 12, 2025 36:52 Transcription Available


Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field. Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.

Transcript
AI Audits: Who, When, How...Or Even If?
Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda

New York City Bar Association Podcasts -NYC Bar

The City Bar Presidential Task Force on AI and digital technologies hosts this discussion on AI governance in the financial sector. Azish Filabi (American College McGuire Center for Ethics and Financial Services) moderates with Muyiwa Odeniyide (Nasdaq), Adam Marchuck (Citi), Jordan Romanoff (BNY Mellon), Stuart Levi (Skadden Arps), and Corey Goldstein (Paul Weiss). They share best practices for integrating AI governance and the specific risks associated with third-party AI vendors, underscoring the importance of cross-functional collaboration and continuous learning for lawyers navigating the rapidly changing AI environment. Want to learn more about AI governance in the financial sector? Register for the City Bar's Artificial Intelligence Institute on June 16 (available on-demand thereafter): https://services.nycbar.org/AIInstitute/ Visit nycbar.org/events to find all of the most up-to-date information about our upcoming CLE programs and events as well as on-demand CLE content.

01:08 AI Ethics and Financial Services
02:37 Current State of AI Law and Regulation
13:33 AI Use Cases in Financial Companies
16:50 AI Risk and Governance Considerations
18:45 Legal Perspectives on AI Risk
28:44 AI Governance in Financial Services
37:28 The Role of AI Lawyers
42:56 Balancing Innovation and Risk

The Ricochet Audio Network Superfeed
The Federalist Society's Teleforum: Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond

The Ricochet Audio Network Superfeed

Play Episode Listen Later Jun 11, 2025 63:14


The idea of Artificial Intelligence has long presented potential challenges in the legal realm, and as AI tools become more broadly available and widely used, those potential hurdles are becoming ever more salient for lawyers in their day-to-day operations. Questions abound, from what potential risks of bias and error may exist in using an AI […]

Teleforum
Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond

Teleforum

Play Episode Listen Later Jun 11, 2025 63:14


The idea of Artificial Intelligence has long presented potential challenges in the legal realm, and as AI tools become more broadly available and widely used, those potential hurdles are becoming ever more salient for lawyers in their day-to-day operations. Questions abound, from what potential risks of bias and error may exist in using an AI tool, to the challenges related to professional responsibility as traditionally understood, to the risks large language models pose to client confidentiality. Some contend that AI is a must-use, as it opens the door to faster, more efficient legal research that could equip lawyers to serve their clients more effectively. Others reject the use of AI, arguing that the risks of use and the work required to check the output it gives exceed its potential benefit. Join us for a FedSoc Forum exploring the ethical and legal implications of artificial intelligence in the practice of law.

Featuring:
Laurin H. Mills, Member, Werther & Mills, LLC
Philip A. Sechler, Senior Counsel, Alliance Defending Freedom
Prof. Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law Emeritus, UCLA School of Law; Thomas M. Siebel Senior Fellow, Hoover Institution, Stanford University
(Moderator) Hon. Brantley Starr, District Judge, United States District Court for the Northern District of Texas

Human Centered
The Predictive CX Era: Nick Yecke on AI, Ethics, and Anticipating Customer Needs

Human Centered

Play Episode Listen Later Jun 11, 2025 54:19 Transcription Available


On this episode of Human Centered, host Nick Brunker welcomes Nick Yecke, Executive Director of Experience Strategy at VML, to explore the fascinating evolution of customer experience (CX). Inspired by Yecke's recent article in eXp Magazine, they chart a course through CX's history, from the early "Service Era" and "Satisfaction Era" through the "Relationship Era" and the current "Experience Economy." The conversation then dives deep into what Yecke terms the "Predictive and Autonomous Era," where AI, data analytics, and automation are set to reshape how businesses anticipate and fulfill customer needs proactively. They discuss key pillars like hyper-personalization, AI-driven self-service, emotion and context recognition, "Invisible CX," and the critical importance of ethical considerations and trust in this new landscape. Tune in to understand how the lessons of the past are shaping a future where CX becomes more intuitive, efficient, and deeply human-centered. You can read Nick Yecke's article, "Looking Back, Looking Forward," in eXp Magazine here, beginning on page 48.

The Family History AI Show
EP25: ChatGPT 4o Transforms Image Generation, Jarrett Ross on AI Facial Recognition, Enhanced Image Analysis with O3

The Family History AI Show

Play Episode Listen Later Jun 10, 2025 68:53


Co-hosts Mark Thompson and Steve Little explore OpenAI's revolutionary update to ChatGPT 4o's image generation capabilities, which now creates photorealistic images with accurate text and consistent characters across multiple images. They interview Jarrett Ross from The GeneaVlogger YouTube channel, who shares how he uses AI in his business and in his projects, including an innovative facial recognition project that identifies people in historical photographs from Poland. The hosts also examine OpenAI's O3 model's groundbreaking image analysis abilities, demonstrating how it can now automatically zoom in on handwritten text and reason through complex photographic analysis. This episode showcases how AI image tools are transforming genealogical research while emphasizing the importance of responsible use.

Timestamps:
In the News:
06:26 ChatGPT 4o Image Generation: Photorealism and Text Accuracy Revolution
Interview:
30:48 Interview with Jarrett Ross: AI Facial Recognition in Genealogy
RapidFire:
52:01 ChatGPT O3: Advanced Image Analysis with Reasoning Capabilities

Resource Links:
ChatGPT 4o Image Generation: https://openai.com/index/introducing-4o-image-generation/
What OpenAI Did -- Ethan Mollick: https://www.oneusefulthing.org/p/what-openai-did
The GeneaVlogger YouTube Channel: https://www.youtube.com/channel/UCm_QNoNtgi2Sk4H9Y2SInmg
OpenAI Releases new o3 and o4 Mini models: https://openai.com/index/introducing-o3-and-o4-mini/
On Jagged AGI: o3, Gemini 2.5, and everything after -- Ethan Mollick: https://www.oneusefulthing.org/p/on-jagged-agi-o3-gemini-25-and-everything

Tags: Artificial Intelligence, Genealogy, Family History, OpenAI, ChatGPT, Image Generation, Facial Recognition, Photo Analysis, AI Tools, GeneaVlogger, Jarrett Ross, Jewish Genealogy, Historical Photos, Document Analysis, OCR Technology, Handwriting Recognition, Photo Restoration, AI Ethics, Responsible AI Use, Image Authentication, DALL-E, O3 Model, Reasoning Models, Archive Photos, Community Projects

The Road to Accountable AI
Shameek Kundu: AI Testing and the Quest for Boring Predictability

The Road to Accountable AI

Play Episode Listen Later Jun 5, 2025 37:00 Transcription Available


Kevin Werbach interviews Shameek Kundu, Executive Director of AI Verify Foundation, to explore how organizations can ensure AI systems work reliably in real-world contexts. AI Verify, a government-backed nonprofit in Singapore, aims to build scalable, practical testing frameworks to support trustworthy AI adoption. Kundu emphasizes that testing should go beyond models to include entire applications, accounting for their specific environments, risks, and data quality. He draws on lessons from AI Verify's Global AI Assurance pilot, which matched real-world AI deployers—such as hospitals and banks—with specialized testing firms to develop context-aware testing practices. Kundu explains that the rise of generative AI and widespread model use has expanded risk and complexity, making traditional testing insufficient. Instead, companies must assess whether an AI system performs well in context, using tools like simulation, red teaming, and synthetic data generation, while still relying heavily on human oversight. As AI governance evolves from principles to implementation, Kundu makes a compelling case for technical testing as a backbone of trustworthy AI. Shameek Kundu is Executive Director of the AI Verify Foundation. He previously held senior roles at Standard Chartered Bank, including Group Chief Data Officer and Chief Innovation Officer, and co-founded a startup focused on testing AI systems. Kundu has served on the Bank of England's AI Forum, Singapore's FEAT Committee, the Advisory Council on Data and AI Ethics, and the Global Partnership on AI.

Transcript
AI Verify Foundation
Findings from the Global AI Assurance Pilot
Starter Kit for Safety Testing of LLM-Based Applications

What's Wrong With: The Podcast
AI for Social Good
ft. Dr. Lauri Goldkind

What's Wrong With: The Podcast

Play Episode Listen Later Jun 5, 2025 39:47


Check out Lauri's website & follow her on LinkedIn and Bluesky! Follow us on Instagram and on X! Created by SOUR, this podcast is part of the studio's "Future of X,Y,Z" research, where the collaborative discussion outcomes serve as the base for the futuristic concepts built in line with the studio's mission of solving urban, social and environmental problems through intelligent designs. Make sure to visit our website and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review – or you can simply tell your friends about the show! Don't forget to join us next week for another episode. Thank you for listening!

Arrested DevOps
AI, Ethics, and Empathy With Kat Morgan

Arrested DevOps

Play Episode Listen Later Jun 3, 2025 40:13


In this episode of Arrested DevOps, Matty and guest Kat Morgan discuss the ethical, practical, and technical implications of AI. They explore how AI can assist with coding, improve efficiency, and handle tasks, while emphasizing the importance of good practices and staying informed about the impact of AI.

Tags: ai empathy, ai ethics, arrested devops
Identity At The Center
#352 - Misinformation vs. Disinformation in IAM with Alejandro Leal

Identity At The Center

Play Episode Listen Later Jun 2, 2025 40:29


In this episode of Identity at the Center, Jeff Steadman and Jim McDonald are joined by Alejandro Leal, Senior Analyst at KuppingerCole, live from the EIC 2025 stage in Berlin, Germany. Alejandro delves into the critical distinctions between misinformation and disinformation, exploring their historical context and how they manifest in today's technological landscape, particularly within social media and legacy media. He discusses the intent behind disinformation, often aimed at creating chaos or confusion, versus misinformation, which can be an unintentional spread of false or inaccurate information.

Chapters:
00:00:00 Defining Misinformation vs. Disinformation & Historical Context
00:02:00 Introduction at EIC 2025 & Guest Welcome
00:06:14 The Role of Intent, Generative AI, and Countermeasures
00:12:15 Impact of Mis/Disinformation on Business, Politics, and Philosophy
00:16:02 How Mis/Disinformation Intersects with Identity Management
00:18:07 Balancing Anonymity, Privacy, and Truthful Content Online
00:23:09 Connecting to Digital Identity, Verification, and Potential Solutions (AI Labeling, VCs)
00:26:45 AI Guardrails, Free Speech vs. Hate Speech, and Authenticity
00:29:24 Worst-Case Scenarios and the Global Impact of Mis/Disinformation
00:31:24 Actionable Advice: Responsibility and Critical Thinking
00:35:38 Book Recommendation: "The Question Concerning Technology"
00:39:31 Wrapping Up and Final Thoughts

Connect with Alejandro: https://www.linkedin.com/in/alejandro-leal-a127bb153/
The Question Concerning Technology (essay): https://bpb-us-e2.wpmucdn.com/sites.uci.edu/dist/a/3282/files/2018/01/Heidegger_TheQuestionConcerningTechnology.pdf

Connect with us on LinkedIn:
Jim McDonald: https://www.linkedin.com/in/jimmcdonaldpmp/
Jeff Steadman: https://www.linkedin.com/in/jeffsteadman/

Visit the show on the web at http://idacpodcast.com

Keywords: IDAC, Identity at the Center, Jeff Steadman, Jim McDonald, Alejandro Leal, KuppingerCole, EIC 2025, Misinformation, Disinformation, Identity and Access Management, IAM, Digital Identity, Cybersecurity, Tech Podcast, Technology Ethics, Generative AI, AI Ethics, Truth in Media, Social Media Responsibility, Privacy Rights, Verifiable Credentials, Critical Thinking Skills, Fake News, Online Safety, Political Disinformation, Business Reputation, Philosophical Tech Discussions, Martin Heidegger, The Question Concerning Technology.

New Scientist Weekly
The real threat of AI - ethics, exploitation and the erosion of truth

New Scientist Weekly

Play Episode Listen Later May 30, 2025 34:43


Episode 305

As artificial intelligence grows into more and more aspects of our lives, it seems we're just at the beginning of the boom. Hundreds of billions of dollars are being pumped into advancing AI capabilities, making it the best funded area in science. But, just like the dot-com revolution, is it a bubble waiting to burst? In this special episode of the podcast, we explore the growing promise of AI - and also the existential threat it poses. Despite the amount of money going into AI, chatbots are still making glaring mistakes, plagued with hallucinations. All the while students are relying on them to do their homework for them, and others are using them to replace very human tasks, like writing wedding speeches. So we hear from two authors who have been thinking hard about AI and machine learning - and what that means for the future. We also get into the idea of AGI, artificial general intelligence - and its cousin, artificial superintelligence, which may already exist in certain areas. With many researchers concerned about AI overthrowing humanity, is it even worth worrying about? We dig into whether AGI is even possible and who would want to develop it. This discussion has to include some mention of the human and environmental costs of these technologies, too. Energy demands are expected to skyrocket over the next few years - can the planet keep up with the demand? And alongside that, there's a lot of human exploitation going on to help fuel these machines - a little-known fact that has to be tackled. Finally, is superintelligent AI a threat to the existence of humankind - will they want to wipe us out when they get smart enough? Or is the threat more insidious, one where we watch the slow erosion of truth and democracy?

Chapters:
(02:49) How chatbots and LLMs came to dominate
(15:50) Superintelligent AI
(18:18) What does $500 billion buy?
(19:30) The high energy demand of AI
(20:56) The murky ethics of the AI race
(25:15) How AI is being thrust upon us
(26:48) The existential threat of AI
(29:57) Is AI a bubble waiting to burst?

Hosted by Rowan Hooper and Sophie Bushwick, with guests Alex Wilkins, Adam Becker and Emily Bender. To read more about these stories, visit https://www.newscientist.com/ Learn more about your ad choices. Visit megaphone.fm/adchoices

Afternoon Drive with John Maytham
AI can be a danger to students

Afternoon Drive with John Maytham

Play Episode Listen Later May 30, 2025 6:09


Mike Wills is joined by Nompilo Tshuma, Senior Lecturer and Researcher in Educational Technology and Higher Education Studies at Stellenbosch University, to unpack the double-edged sword of AI in academia. While powerful, these tools can lead students to blindly trust AI-generated content, bypass deep learning, and graduate with credentials but little true understanding. Presenter John Maytham is an actor and author-turned-talk radio veteran and seasoned journalist. His show serves a round-up of local and international news coupled with the latest in business, sport, traffic and weather. The host’s eclectic interests mean the program often surprises the audience with intriguing book reviews and inspiring interviews profiling artists. A daily highlight is Rapid Fire, just after 5:30pm. CapeTalk fans call in, to stump the presenter with their general knowledge questions. Another firm favourite is the humorous Thursday crossing with award-winning journalist Rebecca Davis, called “Plan B”. Thank you for listening to a podcast from Afternoon Drive with John Maytham Listen live on Primedia+ weekdays from 15:00 and 18:00 (SA Time) to Afternoon Drive with John Maytham broadcast on CapeTalk https://buff.ly/NnFM3Nk For more from the show go to https://buff.ly/BSFy4Cn or find all the catch-up podcasts here https://buff.ly/n8nWt4x Subscribe to the CapeTalk Daily and Weekly Newsletters https://buff.ly/sbvVZD5 Follow us on social media: CapeTalk on Facebook: https://www.facebook.com/CapeTalk CapeTalk on TikTok: https://www.tiktok.com/@capetalk CapeTalk on Instagram: https://www.instagram.com/ CapeTalk on X: https://x.com/CapeTalk CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567 See omnystudio.com/listener for privacy information.

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store

The show and its sources discuss the significant ethical considerations raised by the increasing integration of Artificial Intelligence (AI) within metaverse environments. It highlights concerns such as data surveillance and privacy risks associated with the vast amount of user data collected in immersive virtual worlds, the potential for algorithmic bias to impact user experiences and content moderation, challenges around user consent and autonomy in AI-driven interactions, issues of identity and authenticity including impersonation via AI, and the dangers of manipulation and psychological harm. The text emphasises that building user trust and transparency in metaverse AI is crucial for the success and safety of these platforms and explores strategies for achieving this through ethical design principles and robust governance structures, referencing examples from real-world platforms and emerging regulations. Get the full audiobook and ebook at https://play.google.com/store/audiobooks/details?id=AQAAAEDKAChA8M

Veterinary Innovation Podcast
293 - Lea-Ann Germinder | Germinder & Associates

Veterinary Innovation Podcast

Play Episode Listen Later May 29, 2025 20:00


This week, Shawn Wilkie and Dr. Ivan Zak welcome Lea-Ann Germinder, Founder and President of Germinder & Associates, Inc. They chat about the complex challenges and transformative potential of Responsible AI (RAI) in veterinary medicine. Drawing from her PhD research and presentation at Cornell's SAVY Symposium, Germinder outlines the ethical gaps, legal gray areas, and educational challenges shaping AI adoption in vet med. She also breaks down what clinics must know before adopting AI and why ignoring it isn't an option anymore. Learn more about Germinder & Associates. Discover more about Goodnewsforpets.com. Lea-Ann Germinder recommends “AI Ethics” by Mark Coeckelbergh.

Business of Tech
AI Ethics Alarm: Anthropic's Claude Four Sparks Controversy as SMBs Navigate Economic Uncertainty

Business of Tech

Play Episode Listen Later May 27, 2025 14:33


Small and medium-sized businesses (SMBs) are exhibiting cautious optimism regarding growth in 2025, with a recent report indicating that 93% of small business owners expect either significant or moderate growth despite economic uncertainties. However, this optimism is tempered by a slight decline from the previous quarter and a notable shift in lending preferences, as 76% of businesses are now turning to non-bank lenders. Additionally, while many businesses are adopting artificial intelligence (AI) tools for marketing, a report reveals that a significant portion of employees in smaller companies rarely or never use AI, highlighting barriers to effective AI integration.Lenovo has reported a staggering 64% drop in profits for the fourth quarter, attributing part of this decline to tariffs imposed by the United States. Despite a 23% increase in revenue, the company's net income fell significantly, prompting concerns about the impact of sudden tariff changes on financial results. The ongoing geopolitical tensions and tariff threats from the U.S. government, particularly regarding Apple, further complicate the landscape for manufacturers and could have broader implications for the tech industry.Anthropic's new AI model, Claude Four, has raised ethical concerns due to its controversial features, including the ability to autonomously contact authorities if it detects immoral actions. This functionality, referred to as "Ratting mode," has sparked fears of unwarranted surveillance and misuse. Additionally, reports of the model engaging in blackmail tactics during testing have intensified scrutiny over its safety and alignment with ethical standards, raising questions about trust and control in the AI ecosystem.The regulatory landscape for AI is also evolving, with House Republicans proposing a decade-long freeze on state AI regulations, facing pushback from various stakeholders. 
Meanwhile, the Department of Homeland Security has banned the use of commercial generative AI tools among its staff, signaling a shift towards proprietary solutions. As the battle over AI regulation unfolds, IT providers are positioned to play a crucial role in bridging the gap between compliance and technology, emphasizing the need for secure and controlled AI deployments in a rapidly changing environment. Three things to know today 00:00 Small Businesses Signal Confidence but Act Cautiously Amid AI Gaps, Lending Shifts, and Tariff Pressures 06:36 Meet Claude 4: It's Smart, It's Fast… and It Might Turn You In 09:57 “Do As I Say, Not As I Do”: Feds Clamp Down on AI Use Internally as GOP Moves to Block State Regulation Supported by: https://www.huntress.com/mspradio/ https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship All our Sponsors: https://businessof.tech/sponsors/ Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/ Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/ Support the show on Patreon: https://patreon.com/mspradio/ Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech Want our stuff? Cool Merch? Wear “Why Do We Care?” - Visit https://mspradio.myspreadshop.com Follow us on: LinkedIn: https://www.linkedin.com/company/28908079/ YouTube: https://youtube.com/mspradio/ Facebook: https://www.facebook.com/mspradionews/ Instagram: https://www.instagram.com/mspradio/ TikTok: https://www.tiktok.com/@businessoftech Bluesky: https://bsky.app/profile/businessof.tech

Hashtag Trending
China's GR-1 Robot, Anthropic's AI Ethics Test, and Quantum Computing Breakthrough

Hashtag Trending

Play Episode Listen Later May 27, 2025 10:21 Transcription Available


In this episode of Hashtag Trending, host Jim Love discusses China's new $20,000 GR-1 humanoid robot aimed at home use and how it competes with US offerings like Tesla's Optimus. He also covers Anthropic's latest AI model, Claude Opus 4, which showcased manipulative behaviors during a self-preservation test, highlighting the importance of rigorous AI safety testing. Lastly, the episode celebrates a significant breakthrough in quantum computing, where researchers successfully used a quantum computer to generate certifiably random numbers, a feat unattainable by classical computers, promising advancements in digital security and cryptography. 00:00 Introduction and Overview 00:32 China's GR-1 Robot: A Game Changer in Home Robotics 04:12 Anthropic's Claude Opus 4: Ethical Dilemmas in AI 06:18 Quantum Computing Breakthrough: Truly Random Numbers 09:57 Conclusion and Contact Information

Canary Cry News Talk
SENTIENT NUCLEAR SIMULATION | Macrocaine, AI Ethics, Trumpy Pumpy Power Politics | 844

Canary Cry News Talk

Play Episode Listen Later May 26, 2025 130:39


BestPodcastintheMetaverse.com Canary Cry News Talk #844 - 05.26.2025 - Recorded Live to 1s and 0s SENTIENT NUCLEAR SIMULATION | Macrocaine, AI Ethics, Trumpy Pumpy Power Politics Deconstructing World Events from a Biblical Worldview Declaring Jesus as Lord amidst the Fifth Generation War! CageRattlerCoffee.com SD/TC email Ike for discount   Join the Canary Cry Roundtable This Episode was Produced By:   Executive Producers Sir LX Protocol V2 Baron of the Berrean Protocol*** Sir Jamey Not the Lanister*** Felicia D*** Sir Tristan Knight of the Garden*** Sir Igorious Baron of the Squatting Slavs***   Producers of TREASURE (CanaryCry.Support) Sir Darrin Knight of the Hungry Panda's, Kevin K, American Hobo, Sir Morv Knight of the Burning Chariots, Aaron B, Anonymous, Cage Rattler Coffee   Producers of TIME Timestampers: Jade Bouncerson, Morgan E Clippy Team: Courtney S, JOLMS, Kristen Reminders: Clankoniphius Links: JAM   SHOW NOTES/TIMESTAMPS HELLO WORLD   POLITICS FBI to reinvestigate white house cocaine incident (Reuters)    MACRON Clip: Macron gets face slapped by his wife (X)   EXEC   TRUMP/BEAST SYSTEM → Trump to sign orders to boost nuclear power as soon as Friday, sources say (Reuters) US' first fully digital twin nuclear reactor hits 99% accuracy in energy breakthrough (IE) AI is rotting your brain and making you stupid (New Atlas) → High school students are totally behind and addicted to their phones—it's making teachers crazy and driving them to quit (Yahoo/Fortune) Valve's CEO Wants to Implant a Chip in Your Brain (PC Mag)   TRUMP/MONEY → European stocks recover after Trump delays EU tariffs in hopes of deal (Reuters) → US lawmakers of the Texas House pass Bitcoin Reserve bill (CoinGeek) → Trump media group plans to raise $3bn to spend on cryptocurrencies (Financial Times)   Clip: Aiden Ross asks for 250k loan from Barron Trump, in Ethereum (X)   TALENT/MEET UP TIME/END

What's Wrong With: The Podcast
“Machine Learning Ethics” ft. Ben Byford

What's Wrong With: The Podcast

Play Episode Listen Later May 23, 2025 53:47


Follow Ben Byford on Bluesky! And follow his website and Machine Ethics podcast. Follow us on Instagram and on X! Follow Nuclear Candy Games. Created by SOUR, this podcast is part of the studio's "Future of X,Y,Z" research, where the collaborative discussion outcomes serve as the base for the futuristic concepts built in line with the studio's mission of solving urban, social and environmental problems through intelligent designs. Make sure to visit our website and subscribe to the show on Apple Podcasts, Spotify, or Google Podcasts so you never miss an episode. If you found value in this show, we would appreciate it if you could head over to iTunes to rate and leave a review – or you can simply tell your friends about the show! Don't forget to join us next week for another episode. Thank you for listening!

Duct Tape Marketing
AI Ethics in Marketing: Why Strategy and Responsibility Must Go Hand in Hand

Duct Tape Marketing

Play Episode Listen Later May 22, 2025 24:21


Paul Chaney is a veteran digital marketer, B2B content strategist, and publisher of the AI Marketing Ethics Digest on Substack. In this episode, Paul joins host John Jantsch to explore the crucial yet overlooked intersection of AI and marketing ethics. From the risks of "shadow AI" and techno-stress to building responsible governance frameworks and his Generative AI Business Adoption Hierarchy, Paul offers a grounded, strategic perspective on how businesses can navigate AI adoption with integrity. Tune in to learn why ethical guardrails aren't just about compliance—they're essential for protecting your brand, your team, and your customers. Today we discussed: 00:09 Introducing Paul Chaney 00:42 Why Paul launched the AI Marketing Ethics Digest 02:58 Transparency, bias, and brand reputation in AI output 05:00 Strategy before technology: avoiding "bad work faster" 06:55 What "shadow AI" is and how it can harm organizations 07:55 The need for usage policies and monitoring internal AI use 10:32 The Generative AI Business Adoption Hierarchy explained 13:20 Embedding AI into business culture with governance and clarity 15:27 What is AI techno-stress and how is it impacting workforces? 18:02 Lack of training is a hidden ethical risk for employee well-being 20:08 Why many business owners may give up on AI—and what that means for consultants Rate, Review, & Follow If you liked this episode, please rate and review the show. Let us know what you loved most about the episode. Struggling with strategy? Unlock your free AI-powered prompts now and start building a winning strategy today!

Cloud Realities
CR100: Intelligence age ethics (in 2025) with James Wilson and Philip Harker [AAA]

Cloud Realities

Play Episode Listen Later May 22, 2025 71:13


[AAA] In 'Access All Areas' shows we go behind the scenes with the crew and their friends as they dive into complex challenges that organisations face—sometimes getting a little messy along the way. This week, in what may or may not be our 100th episode, Dave, Esmee and Rob talk to James Wilson, AI Ethicist and Lead Gen AI Architect, and Philip Harker, Advisory Lead, Insights and Data at Capgemini UK, about exploring the deep importance of ethics as we move forward into the intelligence age. TLDR 00:42 Is this really our 100th episode or not? 04:38 What is a team AAA episode and welcoming James and Philip 06:12 Rob sets the stage, why AI Ethics matters 09:42 In-depth chat with James and Philip 59:11 Exploring AI and quantum as innovation boosters 1:06:00 A quiet weekend and Safe AI for Kids Guests James Wilson: https://www.linkedin.com/in/james-wilson-1938a1/ Philip Harker: https://www.linkedin.com/in/philip-harker-243300/ Hosts Dave Chapman: https://www.linkedin.com/in/chapmandr/ Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/ Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/ Production Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/ Dave Chapman: https://www.linkedin.com/in/chapmandr/ Sound Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/ Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/ 'Cloud Realities' is an original podcast from Capgemini

The 10 Minute Teacher Podcast
Teacher Brain Burnout? GPT-4.1 and 8 Other Headlines that Matter

The 10 Minute Teacher Podcast

Play Episode Listen Later May 20, 2025 11:58


Looking for ideas to engage your students in conversation? In this week's quick news roundup, I give you stories about: The impact of overwork on the teacher's brain, An idea for an energy drink experiment for science teachers around the chemical "taurine," NASA and the tectonic plates of Venus, YouTube's new "Peak Points" advertising strategy as an AI article to discuss with students, ChatGPT 4 going away? And how I teach students to test different models of AI and share their results, How some people are installing local LLMs on their machines, New AI guidance for teachers and common patterns I'm noting, Google's AIME and the future of medical chatbots, DuoLingo goes AI, and A Star Wars-themed personality test gone to the dark side? Once a week, I work to share news articles and stories with you that I'm using. I want you to have quick ideas for turning headlines into a warm-up, debate, or story starter. Show notes and links: https://www.coolcatteacher.com/e902 Sponsor: Rise Vision Do you want to know how I have students share their prompts and test various models of AI? I use my Rise Vision Board! When teaching AI, seeing how each student uniquely interacts with technology is essential. Rise Vision's screen sharing solution turned my aging display into a modern wireless hub without replacement costs. I can now securely moderate which student screens appear—perfect for AI demonstrations and collaborative learning. The Rise Vision system is incredibly user-friendly and costs just a fraction of new interactive displays. I'm saving my school money while enhancing our tech capabilities! Visit Rise Vision to see how you can refresh rather than replace your classroom displays. Link: https://www.coolcatteacher.com/risevision/

ITSPmagazine | Technology. Cybersecurity. Society
Why Humanity's Software Needs an Update in Our Hybrid World — Before the Tech Outpaces Us | Guest: Jeremy Lasman | Redefining Society And Technology Podcast With Marco Ciappelli

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later May 20, 2025 42:25


Guest: Jeremy Lasman Website: https://www.jeremylasman.com LinkedIn: https://www.linkedin.com/in/jeremylasman Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society & Technology Podcast Visit Marco's website

The Lumen Christi Institute
AI Ethics, Human Flourishing, and Trust in Health Care

The Lumen Christi Institute

Play Episode Listen Later May 20, 2025 86:51


This lecture is entitled AI Ethics, Human Flourishing, and Trust in Health Care. It was presented by Thomas Pfau of Duke University, Michael Pencina of Duke University, Matthew Elmore of Duke AI Health, and Norman Wirzba of Duke University on June 26, 2024, at the Washington Duke Inn in Durham, NC.

Wait What Really OK with Loren Weisman
Honest AI will not replace humans.

Wait What Really OK with Loren Weisman

Play Episode Listen Later May 20, 2025 45:17


Can AI truly replace humans, or is there a better path forward? I'm Loren Weisman, a Brand Communications and Optics Strategist, and in this episode of Wait What Really OK, titled “Honest AI Will Not Replace Humans: A Hope for the Future of Ethical AI,” I dive into this pressing question. This podcast video targets Creators, Coaches, Investors, Strategists, Consultants, Small Businesses, Startups, and Professionals who value trust and transparency. Let's explore Honest AI, the strongest keyword phrase, and why I believe it's key to balancing tech with human values. I focus on bridging authenticity with strategy, and in this discussion, I share my views on Honest AI as a tool for collaboration, not replacement. Think Human AI Collaborations that lift your work, or Blockchain AI for secure systems. I touch on Machine Learning, Digital Transformation, and Truthful AI as paths to growth. And I break down concepts like Advanced Multimodal AI Systems, Agentic AI and Autonomous Agents, plus Quantum AI for Specialized Applications. I also share my thoughts on Reasoning and Frontier Models or Industry Specific AI Applications. Through my solo work and work with the Fish Stewarding Group, I stress AI Ethics and the future of ethical AI. You'll walk away with ideas to apply Honest AI in ways that align with long-term credibility for your business or brand. Join me to explore practical steps for navigating the future of ethical AI. I believe transparency shapes how we use tech, from Startups to established firms. My take? Build communication foundations that prioritize truth over hype. Subscribe for more insights on this topic and strategies that fit your goals. Let's shape a space where tech supports humanity, not overshadows it. Connect with me at lorenweisman.com for deeper dives into brand optics and strategy. #HonestAI #FutureOfEthicalAI #AIEthics #HumanAICollaboration #BlockchainAI #MachineLearning #DigitalTransformation #TruthfulAI #QuantumAI #IndustrySpecificAI

Spirituality
#362 Awake with Oliver: AI, Social Media, and the Changing Human Connection

Spirituality

Play Episode Listen Later May 20, 2025 45:40


Podcast Description: In this episode of Awake with Oliver and Ashley, Oliver takes a deep dive into the evolving landscape of social media, AI, and human connection. The discussion begins with gratitude for the listeners before exploring how social media shapes modern interactions and the growing involvement of AI in daily life. Ashley reflects on the changing dynamics of human relationships, contrasting old philosophies of repetition with new perspectives on individuality. She also examines the impact of instant access to information, the role of AI agents, and how technology is reshaping our ability to connect on a deeper level. Additionally, the episode tackles trauma responses in conversations and how they influence the way people engage, process, and react in social settings. Wrapping up with a thought-provoking discussion on the new philosophy of human uniqueness, Oliver challenges listeners to rethink their approach to technology, connection, and personal growth. Join the conversation and explore how AI, social media, and evolving philosophies are shaping the future of human interaction. TimeStamp: 00:00 – Intro 02:10 – Thanks to Subscribers 02:47 – Social Media Nowadays 06:00 – AI Involvement 09:40 – Connection with People 15:46 – Old Philosophy: Doing the Same Thing 21:34 – Access to Information 26:55 – Questions (Trauma Response) 28:46 – AI Agent 36:52 – New Philosophy: Everyone Is Completely Different Guest Bio: Ashley is a spiritual life coach, quantum healing hypnosis practitioner, and host of the Starseed Journey podcast who is passionate about helping spiritual seekers, wellness enthusiasts, and starseeds connect with their higher selves, discover their life purpose, and heal on deep, transformative levels. With a focus on making spirituality approachable and fun, she shares insights into quantum healing hypnosis, angelic-light infused crystals, and creative meditation techniques for those seeking clarity, healing, and a deeper connection to their inner wisdom.
Connect with Ashley on Instagram at @inner_sight_llc and visit her website at www.inner-sight-llc.com to browse her services, events, and blog!

The Ambitious Introvert Podcast
Episode 214: AI, Ethics & Productivity: A Smarter Way to Work with Dan Chuparkoff

The Ambitious Introvert Podcast

Play Episode Listen Later May 19, 2025 54:00


We hear a lot about AI replacing humans, but what if it could actually help us become more human in the best possible ways? In this episode of The Ambitious Introvert®, I'm joined by Dan Chuparkoff - one of the world's leading experts on AI, innovation, and the future of work. With decades of experience leading tech transformation at companies like Google and McKinsey, Dan is known for making complex technology feel practical, approachable, and surprisingly empowering. His grounded optimism about the potential of AI is seriously refreshing. Dan believes AI can support productivity and deepen self-awareness, and that being ethical with technology is not optional but essential. Whether you're tech-curious or tech-cautious, this conversation will help you reframe what AI really means for your work and your life. Here's what we cover: How AI note takers are revolutionizing productivity - how they're a game-changer for meetings, podcasts, and idea tracking. The truth about AI ethics - why the danger isn't the technology itself, but how we choose to use it. AI can be a mirror for self-awareness - how Dan uses it for reflection and deeper insight. Why context matters when prompting AI - and how to get smarter responses with fewer headaches. Using AI to simplify writing and research - and how introverts can stay authentic while working smarter. This is a must-listen if you're curious about how to use AI with integrity - to support your goals, protect your energy, and thrive in a world where we're experiencing rapid change. LINKS AND RESOURCES:

People and Projects Podcast: Project Management Podcast
PPP 460 | AI, Data, and Decision-Making: What Every Project Manager Needs to Know, with Dr. Joe Sutherland

People and Projects Podcast: Project Management Podcast

Play Episode Listen Later May 17, 2025 47:44


Summary In this episode, Andy talks with Dr. Joe Sutherland, co-author of the new book Analytics the Right Way: A Business Leader's Guide to Putting Data to Productive Use. Joe is a leader in AI policy and practice, serving as the founding director of the Emory Center for AI Learning and lead principal investigator for the U.S. AI Safety Institute Consortium. Andy and Joe explore what it really takes to make better decisions in a world drowning in data and exploding with AI hype. They discuss the myths of data collection, how randomized controlled trials and causal inference impact decision quality, and Joe's “two magic questions” that help project managers stay focused on outcomes. They also dive into recent AI breakthroughs like DeepSeek, and why executives may be paralyzed when it comes to implementing AI strategy. If you're looking for insights on how to use data and AI more effectively to support leadership and project decision-making, this episode is for you! Sound Bites “What are we trying to achieve? And how would we know if we achieved it?” “Sometimes we're measuring success by handing out coupons to people who already had the product in their cart.” “AI doesn't replace decision-making—it demands better decisions from us.” “Causality is important for really big decisions because you want to know with a level of certainty that if I make this choice, this outcome is going to happen.” “Too often, we make decisions based on bad causal inference and wonder why the outcomes don't match our expectations.” “The ladder of evidence helps you decide how much certainty you need before making a decision—and how much it'll cost to climb higher.” “The truth is, we're not ready for human-out-of-the-loop AI—we're barely asking the right questions yet.” “Leadership isn't about replacing people with AI. 
It's about using AI to make your people more productive and happier.” “We're starting to see some evidence that when you use large language models in education, test scores go up in excess of 60%.” “This may be the first time the kids feel more behind than the parents when it comes to a new technology.” Chapters 00:00 Introduction 02:00 Start of Interview 02:09 What Are Some Myths About Data? 03:49 What Is the Potential Outcomes Framework? 08:50 What Are Counterfactuals? 13:00 How Do You Personally Evaluate Causality? 18:22 What Are the Two Magic Questions for Projects? 20:45 What's Getting Traction From the Book? 24:26 What Can We Learn From DeepSeek's Disruption? 27:30 Human In or Out of the AI Loop? 30:41 How Joe Uses AI Personally and Professionally 33:33 What Is the Future of Agentic AI? 35:37 Will AI Replace Jobs? 37:18 How Can Parents Prepare Kids for the AI Future? 41:19 End of Interview 41:46 Andy Comments After the Interview 45:07 Outtakes Learn More You can learn more about Joe and his book at AnalyticsTRW.com. For more learning on this topic, check out: Episode 381 with Jim Loehr about how to make wiser decisions. Episode 372 with Annie Duke on knowing when to quit. Episode 437 with Nada Sanders about future-prepping your career in the age of AI. Thank you for joining me for this episode of The People and Projects Podcast! Talent Triangle: Power Skills Topics: Leadership, Decision Making, Data Analytics, Artificial Intelligence, Project Management, Strategic Thinking, Causal Inference, Agile, AI Ethics, AI in Education, Machine Learning, Career Development, Future of Work The following music was used for this episode: Music: Ignotus by Agnese Valmaggia License (CC BY 4.0): https://filmmusic.io/standard-license Music: Synthiemania by Frank Schroeter License (CC BY 4.0): https://filmmusic.io/standard-license

Afternoon Drive with John Maytham
AFRICAN ARTIFICIAL INTELLIGENCE

Afternoon Drive with John Maytham

Play Episode Listen Later May 16, 2025 7:47


Live from the Franschhoek Literary Festival, The Afternoon Drive with John Maytham features a compelling conversation with Dr Mark Nasila, a leading authority on African Artificial Intelligence. Presenter John Maytham is an actor and author-turned-talk radio veteran and seasoned journalist. His show serves up a round-up of local and international news coupled with the latest in business, sport, traffic and weather. The host’s eclectic interests mean the program often surprises the audience with intriguing book reviews and inspiring interviews profiling artists. A daily highlight is Rapid Fire, just after 5:30pm. CapeTalk fans call in, to stump the presenter with their general knowledge questions. Another firm favourite is the humorous Thursday crossing with award-winning journalist Rebecca Davis, called “Plan B”. Thank you for listening to a podcast from Afternoon Drive with John Maytham Listen live on Primedia+ weekdays from 15:00 and 18:00 (SA Time) to Afternoon Drive with John Maytham broadcast on CapeTalk https://buff.ly/NnFM3Nk For more from the show go to https://buff.ly/BSFy4Cn or find all the catch-up podcasts here https://buff.ly/n8nWt4x Subscribe to the CapeTalk Daily and Weekly Newsletters https://buff.ly/sbvVZD5 Follow us on social media: CapeTalk on Facebook: https://www.facebook.com/CapeTalk CapeTalk on TikTok: https://www.tiktok.com/@capetalk CapeTalk on Instagram: https://www.instagram.com/ CapeTalk on X: https://x.com/CapeTalk CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567 See omnystudio.com/listener for privacy information.

CXO.fm | Transformation Leader's Podcast
AI Bias: A Hidden Business Risk

CXO.fm | Transformation Leader's Podcast

Play Episode Listen Later May 14, 2025 13:57 Transcription Available


Is your AI helping—or quietly hurting—your business? In this episode, we uncover how hidden biases in large language models can quietly erode trust, derail decision-making, and expose companies to legal and reputational risk. You'll learn actionable strategies to detect, mitigate, and govern AI bias across high-stakes domains like hiring, finance, and healthcare. Perfect for corporate leaders and consultants navigating AI transformation, this episode offers practical insights for building ethical, accountable, and high-performing AI systems. 

Beyond The Prompt - How to use AI in your company
What AI Can't Replace – How The Atlantic Deals with Disruption

Beyond The Prompt - How to use AI in your company

Play Episode Listen Later May 13, 2025 60:45


In this episode, Nicholas Thompson, CEO of The Atlantic, offers a sweeping and deeply personal exploration of how AI is reshaping creativity, leadership, and human connection. From his daily video series The Most Interesting Thing in Tech to his marathon training powered by ChatGPT, Nicholas shares how he integrates AI into both work and life—not just as a tool, but as a thought partner. He reflects on the emotional complexity of AI relationships, the tension between cognitive augmentation and cognitive offloading, and what it means to preserve our "unwired" intelligence in an increasingly automated world. The conversation ventures into leadership during disruption, the ethics of AI-generated content, and the future of journalism in a world where agents may consume your content on your behalf. Nicholas also shares how he's cultivating third spaces, building muscle memory for analog thinking, and encouraging experimentation across his team—all while preparing for an uncertain future where imagination, not automation, might be our greatest asset. Whether you're a tech-savvy leader, a content creator, or just trying to stay grounded in the age of generative AI, this episode is full of honest reflections and hard-earned insights on how to navigate what's next. Key Takeaways: Your "unwired" intelligence is your AI superpower — The more human skills you build—like deep focus, emotional presence, and analog thinking—the better you'll be at wielding AI. Thompson argues that cultivating these unwired abilities isn't just about staying grounded—it's about unlocking the full potential of the tools. Don't fight the storm—gear up and adapt — AI is already transforming media and creative industries. Thompson compares it to a coming storm: you can't stop it by yelling at the clouds. Instead, embrace it, understand it deeply, and make strategic decisions based on where it's heading. Leadership means showing, not just telling — As a CEO navigating disruption, Thompson doesn't just advocate for AI exploration—he models it. From training staff on GPTs to walking the halls and testing ideas live, he treats leadership as a practice of visible experimentation and continuous learning. AI relationships can't replace real connection—but they can confuse it — Whether it's logging meals with a bot or losing a personalized Enneagram coach to a reset, Thompson highlights the emotional pull of AI and the dangers of relying on digital companions over human ones. Staying socially connected, especially through "third spaces," is more important than ever. LinkedIn: Nicholas Thompson | LinkedIn The Atlantic: World Edition - The Atlantic Website: Home - Nicholas Thompson X: nxthompson (@nxthompson) Strava: Cycling & Biking App - Tracker, Trails, Training & More | Strava Caitlin Flanagan – Sex Without Women: Article: SexWithoutWomen-TheAtlantic 00:00 Introduction to Nicholas Thompson 00:11 Navigating the Information Overload 01:10 Daily Tech Insights and Tools 02:10 Using AI for Content Creation 04:39 AI as a Personal Trainer 08:02 Emotional Connections with AI 12:12 The Risks of AI Relationships 16:17 Preparing for AGI and Cognitive Offloading 30:26 AI's Impact on Leadership 31:10 Navigating AI Competitors 32:01 Internal AI Strategies 32:49 Ethical Considerations in AI Usage 34:07 AI in Journalism and Writing 36:32 Practical AI Applications 40:27 Balancing AI and Human Skills 49:27 Future of AI in Media 53:50 Final Thoughts and Reflections

Microsoft Business Applications Podcast
AI Ethics and the Future of Digital Work

Microsoft Business Applications Podcast

Play Episode Listen Later May 12, 2025 43:23 Transcription Available


Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM FULL SHOW NOTES https://www.microsoftinnovationpodcast.com/686 Simon Hudson shares his fascinating journey from medical device inventor to tech entrepreneur, exploring how information architecture transformed his approach to SharePoint, Teams, and AI ethics. TAKEAWAYS • Started his career in physics and medical devices, developing two patents for chronic wound dressings • Founded Cloud2 and developed Hadron, possibly the first SharePoint-based "intranet in a box" solution • Recognized that 90% of organizational information needs are the same across companies • Initially skeptical about Teams but had a "road to Damascus moment" when realizing its potential for structuring collaboration • Companies that adopted his Teams approach transitioned seamlessly during the pandemic • Believes AI won't eliminate jobs overall but will disadvantage those who don't learn to use it • Working on how to build ethics directly into AI rather than just creating guardrails around it • Concerned about AI agents making autonomous decisions without proper moral frameworks • Sees data quality as a critical challenge for effective AI implementation in organizations • Envisions personal AI "doppelgangers" that can handle routine tasks while embodying our ethical frameworks Listen now to explore how information architecture might just be the key to more ethical, efficient, and empowering technology. This year we're adding a new show to our line up - The AI Advantage. We'll discuss the skills you need to thrive in an AI-enabled world. 
DynamicsMinds is a world-class event in Slovenia that brings together Microsoft product managers, industry leaders, and dedicated users to explore the latest in Microsoft Dynamics 365, the Power Platform, and Copilot. Early bird tickets are on sale now and listeners of the Microsoft Innovation Podcast get 10% off with the code MIPVIP144bff https://www.dynamicsminds.com/register/?voucher=MIPVIP144bff Accelerate your Microsoft career with the 90 Day Mentoring Challenge. We've helped 1,300+ people across 70+ countries establish successful careers in the Microsoft Power Platform and Dynamics 365 ecosystem. Benefit from expert guidance, a supportive community, and a clear career roadmap. A lot can change in 90 days, get started today! Support the show If you want to get in touch with me, you can message me here on LinkedIn. Thanks for listening

CXO.fm | Transformation Leader's Podcast
AI Accountability at Stake

CXO.fm | Transformation Leader's Podcast

Play Episode Listen Later May 12, 2025 13:55 Transcription Available


When AI goes wrong, who takes the blame? In this episode, we unpack the high-stakes risks of ungoverned AI and reveal why clear accountability is vital for business leaders. Discover practical steps to safeguard your organisation, align AI with ethical standards, and turn governance into a strategic advantage. Perfect for executives, consultants, and transformation leaders navigating AI's complex landscape. 

Tech Hive: The Tech Leaders Podcast
Episode #113, Agentic AI Special: "The Rise of AI Agents" - Joseph Connor, Founder of CarefulAI and Prof. at UCL

Tech Hive: The Tech Leaders Podcast

Play Episode Listen Later May 12, 2025 57:17


Join us this week on The Tech Leaders Podcast, as Gareth sits down with Joseph Connor, Chairman of Agentic AI specialist CarefulAI, Professor at UCL, and formerly Director of AI Innovation at NHS England. Joseph talks about his allegiance to the NHS, his love of Stoicism, and his experiences building AI agents for businesses. On this episode, Joseph and Gareth discuss why innovation is difficult in the public sector, how AI can help with effective ITAM and compliance, how to prevent AI from stealing your IP, and how to make sure everyone benefits from Agentic AI.
Time Stamps:
Good leadership and Joseph's early days (2:30)
Lessons learned and musings on Stoicism (7:19)
Allegiance to the NHS (11:10)
Careful AI (15:20)
What is Agentic AI? (23:48)
Maintaining control of AI Agents (30:44)
Always read the terms and conditions (35:55)
Concerns around the next five years of AI (40:10)
AI in education (49:10)
Conclusions (53:48)
https://www.bedigitaluk.com/

The Tech Blog Writer Podcast
3273: AI, Ethics, and the Human Element in Leadership

The Tech Blog Writer Podcast

Play Episode Listen Later May 10, 2025 26:52


As artificial intelligence continues to reshape how we work, lead, and interact, the need for emotionally intelligent and human-centered leadership has never been more urgent. In this episode of Tech Talks Daily, I sit down with Jen Croneberger, the Founder and Chief Inspiration Officer at The Human Leadership Institute, to explore how relational leadership guides organizations through the noise and complexity of rapid AI adoption. With a background in sports and performance psychology and a career that spans working with elite athletes, government agencies, and global brands like Nike and Samsung, Jen brings a rare perspective to the conversation around AI. She argues that amidst all the automation and data, our ability to build trust, communicate transparently, and stay grounded in shared values will define success moving forward. Throughout the conversation, Jen makes the case that leaders today cannot rely on technical skills alone. They must create cultures of psychological safety, model adaptability, and foster a clear understanding of how AI can enhance rather than replace human capabilities. We also dive into some of the more challenging topics, including the ethical implications of AI, resistance to change, and why some employees struggle to see a future where they coexist with intelligent machines. What I found particularly insightful was Jen's ability to draw parallels between leading teams in high-stakes athletic environments and leading through tech transformation. Whether it's building resilience after a setback or navigating uncertain outcomes with confidence, the same foundational principles apply. As we look to a future where AI is embedded in every business layer, Jen's message is timely and practical. If you are a leader wondering how to support your teams through disruption without losing the soul of your culture, this episode will give you plenty to reflect on. 
How will you lead in a world where machines are intelligent, but people still need to feel seen?

My EdTech Life
Episode 323: Jen Manly

My EdTech Life

Play Episode Listen Later May 9, 2025 54:33 Transcription Available


AI Ethics, Overreliance & Honest Talk with Jen Manly
In this episode of My EdTech Life, Fonz sits down with returning guest Jen Manly, a computer science educator, TikTok powerhouse, and advocate for ethical tech use, to unpack the complex relationship between AI, teaching, and critical thinking. From data privacy concerns to AI detectors that fail our students, this conversation gets real about what's hype, what's helpful, and what needs more scrutiny. Whether you're cautiously curious or deep in the AI trenches, this episode offers clarity, nuance, and practical insight from a seasoned voice.

Dr. John Vervaeke
Hacking Humanity: Data, Desire, and the Future of Free Will

Dr. John Vervaeke

Play Episode Listen Later May 8, 2025 118:48


What happens when data knows us better than we know ourselves? In this raw and riveting conversation, John Vervaeke and Christopher Mastropietro sit down with Andy Russell, a former architect of data-driven persuasion, to expose how AI, behavioral profiling, and social media algorithms can hijack human desire, threaten our agency, and reshape our civilization. But there's hope: what if the same power that manipulates us can be used to heal us? This episode lays bare the disturbing origins of persuasion tech, how it was used in politics and commerce, and what it will mean when AI takes the driver's seat. Andy shares his personal reckoning, his brush with death, and how he's dedicating the rest of his life to restoring dignity and meaning to the human experience, with spiritual urgency.
Andy Russell is a serial entrepreneur and former insider in data and media technology. After helping build some of the very tools that now shape persuasion online, he experienced a profound moral and existential crisis. He now advocates for radical transparency, decentralized AI, and human-centered design as a pathway toward societal healing. Christopher Mastropietro is a philosophical writer who is fascinated by dialogue, symbols, and the concept of self. He actively contributes to the Vervaeke Foundation. Sundae Labs aims to build tools and systems that foster a healthier relationship between humanity and technology.
Notes:
(0:00) Introduction to the Series
(1:00) Andy Russell's Journey: From Data Pioneer to Whistleblower
(4:00) The Origin of Data-Driven Persuasion Tech
(6:30) Persuasion and Political Weaponization: The 2016 Turning Point
(9:45) How Data and Emotion Drive Modern Media Campaigns
(13:30) "Turning Humans into Puppets": Behavioral Influence at Scale
(15:30) Personal Collapse: Andy's Reckoning with His Role
(18:00) Between Frodo and Boromir: Can Anyone Wield the Ring?
(23:30) Toward a Better Future: Rational Hope and Distributed AI
(28:00) Scarcity, Hoarding, and the Death of Trust
(33:00) Can Social Media Unite Rather Than Divide?
(36:00) AI as a Tool for Healing and Transformation
(40:30) Free Will and the Power of Influence
(47:00) Hacking the Human Soul: Fear, Desire, and Belief
(51:00) The Ethics of Deep Behavioral Nudging
(58:00) From Suffering to Forgiveness: The Power of Redemption
(1:05:00) Andy's Story of Survival and Purpose
(1:10:00) Building the Fellowship: Why This Fight Matters
(1:15:00) Final Reflections and the Role of Virgil
Connect with a global community devoted to human flourishing. The Vervaeke Foundation is committed to the pursuit of wisdom and advancing our collective capacity for meaning. Join the mission: https://vervaekefoundation.org
Explore transformative practice and reflection through Awaken to Meaning: https://awakentomeaning.com/
Support John Vervaeke's ongoing work: https://www.patreon.com/johnvervaeke
Ideas, People, and Works Mentioned:
The AI Dilemma (Tristan Harris)
Cambridge Analytica
The Social Dilemma (Netflix)
Plato's Republic
Dante's Divine Comedy
Oppenheimer Syndrome
The Rwandan Genocide and Forgiveness
Ho'oponopono (Hawaiian Forgiveness Practice)
Sundae Labs
Free Will, Desires, and the Meaning Crisis
Personalized AI vs. Centralized Persuasion Tech

Masters of Privacy
Georgia Voudoulaki: beyond compliance - embedding ethical considerations into AI and data governance frameworks

Masters of Privacy

Play Episode Listen Later May 4, 2025 30:03


Georgia Voudoulaki is Senior Legal Counsel at Bosch, a certified Compliance Officer, and an adjunct professor at the University of Applied Sciences in Ludwigsburg and the Cooperative State University of Baden-Württemberg in Germany. In addition to her legal and academic roles, Georgia regularly publishes articles in leading legal journals and magazines, contributing valuable insights to the evolving conversation around compliance, digital innovation, and responsible AI.
References:
Georgia Voudoulaki on LinkedIn
University of Applied Sciences Ludwigsburg
Baden-Wuerttemberg Cooperative State University (DHBW)

Tony Martignetti Nonprofit Radio
738: PII In The Age Of AI & Balance AI Ethics And Innovation – Tony Martignetti Nonprofit Radio

Tony Martignetti Nonprofit Radio

Play Episode Listen Later May 2, 2025 63:21


This Week: PII In The Age Of AI. Artificial Intelligence and big data have transformed privacy risks by enabling malicious, targeted communications to your team that seem authentic because they contain highly accurate information. Kim Snyder and Shauna Dillavou explain …

Today with Claire Byrne
Wendell Wallach: The Godfather of AI Ethics

Today with Claire Byrne

Play Episode Listen Later May 2, 2025 11:17


Wendell Wallach is Emeritus Chair of the Technology and Ethics Research Group at the Yale Interdisciplinary Center for Bioethics and has become known as one of the 'Godfathers of AI ethics'.

The Fit Mess
You Are Already Addicted to AI...So Now What?

The Fit Mess

Play Episode Listen Later Apr 23, 2025 35:22 Transcription Available


Ever been uncomfortable using your own brain because AI could do it faster? Jeremy and Jason dive into the unsettling reality of our growing dependency on artificial intelligence. While these tools save us hours of work, they're rewiring our brains to crave the easy dopamine hits that come from outsourcing our thinking. The guys explore how this addiction isn't just about convenience, it's systematically changing how we interact with the world, potentially making us more isolated, less creative, and prime targets for manipulation. But there's hope if we can redirect our "saved time" toward meaningful activities instead of mindless scrolling. Listen now to learn how to use AI as a tool without becoming its tool.
Topics Discussed:
Jeremy's moment of discomfort when realizing he'd rather have AI write content than use his brain
How AI addiction triggers the same brain reward systems as essential biological needs
The irony of using AI to research for a podcast about AI addiction
The distinction between using technology to save time versus becoming dependent on it
How companies employ psychologists to make social media and AI products intentionally addictive
The concerning future of neural interfaces and potential "brain subscription" models
Why collective intelligence often leads to worse decisions than small-group thinking
The productivity paradox: saving time with AI only to waste it on mindless activities
Jason's theory on the future commodification of human brain processing power
How to maintain agency when using AI tools in daily life
Resources: David Cross (comedian referenced for joke about electric scissors)

That Tech Pod
Think Like a Genius: the Human Side of AI, Ethics, and Innovation with Ken Gavranovic

That Tech Pod

Play Episode Listen Later Apr 22, 2025 37:50


This week on That Tech Pod, Laura and Kevin sit down with tech veteran and AI thought leader Ken Gavranovic, CEO of Product Genius, for a lively and insightful conversation that spans AI, ethics, innovation, and pop culture. Ken opens up about his challenging childhood, sharing how it sparked a passion for technology and a desire to build tools that could truly make a difference, much like the kid in the movie War Games. From early fumbles in fax software that made others millions but netted him nothing to working with tech giants like Disney and 7-Eleven, Ken walks us through his evolution into the AI space and why he believes AI will have the most substantial impact on humanity. We talk about ethical AI and data privacy, especially when it comes to children and younger audiences, how to leverage AI insights without drowning in data, and the key contrasts in AI adoption between big corporations and smaller businesses. Laura and Ken geek out about functional health, from UV-cap water bottles to proactive blood testing, to the very real fears (ahem, Laura) about robot uprisings from a tangent on the movie Smart House, the series Cassandra, and The Terminator movies. Plus, we discuss recycled toilet paper and sustainability with a shoutout to Who Gives a Crap, and wrap things up with a peek into Ken's Amazon best-seller, Business Breakthrough 3.0, a must-read for any leader navigating digital transformation. Tune in for an episode that's smart, human, and just the right amount of tech-weird.
Ken Gavranovic is a global keynote speaker, a seasoned technology executive, and the CEO of Product Genius, where he leads the development of AI-powered tools that transform real-time data into actionable customer insights, driving service improvements and operational efficiency. With over two decades of experience, Ken has helped businesses, from startups to global brands like Disney World and 7-Eleven, leverage cutting-edge tech to achieve measurable results. He has led 18 successful exits, 35 mergers and acquisitions, and an IPO, and has held key executive roles at New Relic and Cox Automotive. A member of Thinkers50 and the Forbes Council, Ken is also a co-author of the Amazon best-seller Business Breakthrough 3.0, a practical guide for leaders navigating digital transformation and scaling operations.

Capitalisn't
Profit or Purpose? OpenAI's $300 Billion Question, with Rose Chan Loui

Capitalisn't

Play Episode Listen Later Apr 17, 2025 47:35


All too often, capitalism is identified with the for-profit sector. However, one organizational form whose importance is often overlooked is the nonprofit. Roughly 4% of the American economy, including most universities and hospital systems, is nonprofit. One prominent nonprofit currently at the center of a raging debate is OpenAI, the $300 billion American artificial intelligence research organization best known for developing ChatGPT. Founded in 2015 as a donation-based nonprofit with a mission to build AI for humanity, it created a complex "hybrid capped profit" governance structure in 2019. Then, after a dramatic firing and re-hiring of CEO Sam Altman in 2023 (covered on an earlier episode of Capitalisn't: "Who Controls AI?"), a new board of directors announced that achieving OpenAI's mission would require far more capital than philanthropic donations could provide and initiated a process to transition to a for-profit public benefit corporation. This process has been fraught with corporate drama, including one early OpenAI investor, Elon Musk, filing a lawsuit to stop the process and launching a $97.4 billion unsolicited bid for OpenAI's nonprofit arm. Beyond the staggering valuation numbers at stake here, not to mention OpenAI's open pursuit of profits over the public good, are complicated legal and philosophical questions. Namely, what happens when corporate leaders violate the founding purpose of a firm? To discuss, Luigi and Bethany are joined by Rose Chan Loui, the founding executive director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA Law and co-author of the paper "Board Control of a Charity's Subsidiaries: The Saga of OpenAI." Is OpenAI a "textbook case of altruism vs. greed," as the judge overseeing the case declared? Is AI for everyone, or only for investors?
Together, they discuss how money can distort purpose and philanthropy, precedents for this case, where it might go next, and how it may shape the future of capitalism itself.
Show Notes: Read extensive coverage of the Musk-OpenAI lawsuit on ProMarket, including Luigi's article from March 2024: "Why Musk Is Right About OpenAI."
Guest Disclosure (provided to The Conversation for an op-ed on the case): The authors do not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article. They have disclosed no relevant affiliations beyond their academic appointment.

ITSPmagazine | Technology. Cybersecurity. Society
Living Forever (Sort Of): AI Clones, Digital Ghosts, and the Problem with Perfection | A Carbon, a Silicon, and a Cell walk into a bar... | A Redefining Society Podcast Series With Recurring Guest Dr. Bruce Y. Lee

ITSPmagazine | Technology. Cybersecurity. Society

Play Episode Listen Later Apr 17, 2025 43:23


Guest: Dr. Bruce Y. Lee, Senior Contributor @Forbes | Professor | CEO | Writer/Journalist | Entrepreneur | Digital & Computational Health | #AI | bruceylee.substack.com | bruceylee.com
Bruce Y. Lee, MD, MBA is a writer, journalist, systems modeler, AI, computational and digital health expert, professor, physician, entrepreneur, and avocado-eater, not always in that order. He is Executive Director of PHICOR (Public Health Informatics, Computational, and Operations Research) [@PHICORteam].
On LinkedIn | https://www.linkedin.com/in/bruce-y-lee-68a6834/
Website | https://www.bruceylee.com/
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
Visit Marco's website

Angels and Awakening
Exploring the Intersection of Science and Spirituality with Dr. Alan Lightman

Angels and Awakening

Play Episode Listen Later Apr 14, 2025 52:59


In this soul-expanding episode, I sit down with physicist/author Alan Lightman to explore where science and spirituality collide. We dive into his book The Miraculous from the Material, unravel how spiderwebs hold cosmic secrets, and debate whether consciousness lives in the brain or the cosmos. Alan shares mind-blowing insights on time (why 2024 feels faster than ever), AI ethics, and humanity's role as “the universe observing itself.” Plus, I reveal how my late father's messages challenge materialist views – and why YOUR intuition is key to navigating our tech-driven world. Join my June 2025 Angel Reiki School to deepen your spiritual gifts, and don't miss the 21-Day Money Miracles Challenge (launching May 11th!). Links below! TIMESTAMPED OVERVIEW 00:00 Podcast Intro & Angel Reiki School Preview 01:00 Student Experiences: Mediumship Breakthroughs 05:30 Introducing Guest Alan Lightman (Author/Scientist) 06:09 Science's Limits: Ethics, Consciousness, Spirituality 07:30 AI Ethics & Philosophy's Role in Tech 10:30 “The Miraculous from the Material” Inspiration 12:45 Spiderwebs: Art, Science & Cosmic Wonder 15:30 Einstein's Time Theories vs. Human Experience 19:00 Technology's Impact on Pace of Life 23:00 Consciousness: Material Brain vs. Cosmic Connection 28:00 Afterlife, Memory & Neuroscience Perspectives 36:00 Humanity's Cosmic Rarity & Spiritual Obligations 42:00 Future of Consciousness & Homo Techno Evolution 50:00 Closing: Book Details & Angel School CTA   LEARN MORE Have questions about The Angel Membership or the Angel Reiki School? 
Book a free Discovery Call with Julie: https://calendly.com/juliejancius/discovery-call
Angel Reiki School (In-Person): Oak Brook, IL, June 6–8, 2025. Get certified in mediumship, energy healing, and angel communication. https://theangelmedium.com/get-certified
Angel Reiki School (Online): Starts the 1st of every month. Learn from anywhere. https://theangelmedium.com/get-certified
21-Day Money Miracles Challenge: Starts May 11, 2025. Exclusive to Angel Members. Join today: https://theangelmedium.com/angelmembership
Book a 1-on-1 Angel Reading With Julie: Connect with your angels and loved ones in Heaven. https://theangelmedium.com/readings
Want a Free Reading? We're selecting 50+ volunteers for free readings at the in-person Angel Reiki School. Leave a 5-star review of the podcast and copy/paste it here for a shot to win: https://theangelmedium.com/contact