Podcasts about hallucinations

A hallucination is a perception, in the absence of external stimulation, that has the qualities of real perception.

  • 1,575 podcasts
  • 2,149 episodes
  • 44m avg duration
  • 1 daily new episode
  • Latest: Jan 2, 2026
[Popularity chart for "hallucinations" podcasts, 2019–2026]


Latest podcast episodes about hallucinations

Secrets of Stargate
Phantoms (SGA)

Jan 2, 2026 · 32:13


Hallucinations, guilt, and survival collide in “Phantoms.” Victor Lams, Jeff Haecker, and Lisa Jones debate whether mind-control stories still work, why Teyla is the real hero here, and what Sheppard's past reveals when reality breaks down.

Above the Law - Thinking Like a Lawyer
A Look Back At The 2025 Dumpster Fire

Dec 31, 2025 · 40:46


Three trends dominated this year's coverage.

We've made it to the end of the year! And what do we have to show for it as a profession? Our most elite law firms signed deals rather than stand up for themselves in the face of illegal Trump bullying efforts. Others quietly tried to erase their history to avoid the administration's ire. But some firms did fight back and achieved consistent success in court, while the dealmakers got heckled and derided by young lawyers. And, as anyone who has ever watched Star Wars knows, deals with authoritarians just get worse all the time. The New York Times even wrote a feature on a certain publication covering this story.

We also ran headlong into a constitutional crisis marked by DOJ lawyers lying to courts -- when the DOJ even bothers to field lawyers legally -- senior government officials declaring "war" on federal judges, and judges being arrested. As right-wing threats against federal judges escalated, the Supreme Court responded with disinterest, preferring to fan the flames with nakedly partisan shadow docket rulings to grease the wheels of Trump's assault on the structure of government.

And, finally, we look at the year of AI in legal. Hallucinations dominated the conversation -- from law firms and judges alike -- but this was also the year legal tech made huge bets on AI and folks started to realize that the profession can't avoid the technology. The billable hour may finally be on the decline, but does AI risk making lawyers dumber?

Legal Talk Network - Law News and Legal Topics
A Look Back At The 2025 Dumpster Fire

Dec 31, 2025 · 40:46


Three trends dominated this year's coverage.

We've made it to the end of the year! And what do we have to show for it as a profession? Our most elite law firms signed deals rather than stand up for themselves in the face of illegal Trump bullying efforts. Others quietly tried to erase their history to avoid the administration's ire. But some firms did fight back and achieved consistent success in court, while the dealmakers got heckled and derided by young lawyers. And, as anyone who has ever watched Star Wars knows, deals with authoritarians just get worse all the time. The New York Times even wrote a feature on a certain publication covering this story.

We also ran headlong into a constitutional crisis marked by DOJ lawyers lying to courts -- when the DOJ even bothers to field lawyers legally -- senior government officials declaring "war" on federal judges, and judges being arrested. As right-wing threats against federal judges escalated, the Supreme Court responded with disinterest, preferring to fan the flames with nakedly partisan shadow docket rulings to grease the wheels of Trump's assault on the structure of government.

And, finally, we look at the year of AI in legal. Hallucinations dominated the conversation -- from law firms and judges alike -- but this was also the year legal tech made huge bets on AI and folks started to realize that the profession can't avoid the technology. The billable hour may finally be on the decline, but does AI risk making lawyers dumber?

Founder Views
Luca Micheli: The AI Pivot That Took Customerly From $100K to $1M ARR

Dec 30, 2025 · 68:22


Six years after his first appearance on Founder Views, Luca is back with the real story of how AI forced a full business model and go-to-market shift. Customerly went from a seat-based, product-led support platform for small SaaS teams to an AI-first customer service engine selling into mid-market and enterprise, where volume and ROI are obvious.

In this episode we get into:
The AI pivot: why they refused to build "old-school chatbots," and how ChatGPT changed what was possible
Quality metrics that matter: error rate, confidence thresholds, escalation triggers, and why AI CSAT can be higher than humans
What actually trains a good AI agent: knowledge base structure, what not to upload, and how hallucinations happen in the real world
Automation outcomes: average ticket closure rates, what drives 80%+ vs 40–50%, and how teams improve over time
Enterprise GTM shift: moving from product-led to sales-led, filtering signups, longer cycles, bigger ACV
Outbound reality: why the agency failed, what changed when they built outbound internally, and the tooling stack (Clay, Apollo, Lemlist, Pipedrive)
Founder sales lessons: Challenger Sale thinking and why founders still need to own sales early

The Arena
The Arena is a private Skool community for SaaS founders who are actively building and selling. I share real-time decisions, experiments, and assets as I use them while growing a bootstrapped SaaS. No theory. No polish. Just execution. Learn more at: https://www.skool.com/the-arena/

Chapters / Timestamps
00:00 – Reunion after 6 years and what changed (COVID + AI era)
01:27 – Luca intro: what Customerly does today
02:30 – From $100K ARR to near $1M and why pricing changed
05:08 – "Chatbots are shit": how they built AI without the bad UX
07:10 – Under 1% error rate and reducing hallucinations
09:52 – Grounded AI, intents, and automating beyond FAQs
11:09 – Closure rate benchmarks and what "good" looks like
16:41 – How to pick an AI support tool that actually works
18:20 – Training mistakes: transcripts, clutter, and marketing banners causing hallucinations
20:46 – Confidence thresholds and escalation as a feedback loop
22:48 – How long it takes to move from 45% to 70–80% automation
24:34 – Should AI learn from your inbox? Pros, risks, and why they avoid it
29:41 – Implementation timelines: small teams vs enterprise rollouts
31:38 – Why AI CSAT can beat humans (speed wins)
35:46 – Escalation rules: human request, sentiment, low confidence, missing info
37:21 – Going enterprise: ARPU jump and sales-led reality
41:02 – Outbound experiment: agency failure and building it internally
43:32 – LinkedIn ads + Clay targeting + the masterclass lead magnet
49:25 – Challenger Sale and shifting the conversation
53:20 – Founder lesson: why you can't outsource what you haven't done
58:54 – Outbound stack: Clay, Apollo, Lemlist, Sales Nav, Pipedrive
01:05:12 – 2026 vision and wrap

The Geek In Review
Receipts, RAG, and Reboots: Legal Tech's 2025 Year-End Scorecard with Niki Black and Sarah Glassmeyer

Dec 29, 2025 · 72:03


The Geek in Review closes 2025 with Greg Lambert and Marlene Gebauer welcoming back Sarah Glassmeyer and Niki Black for round two of the annual scorecard, equal parts receipts, reality check, and forward look into 2026. The conversation opens with a heartfelt remembrance of Kim Stein, a beloved KM community builder whose generosity showed up in conference dinners, happy hours, and day-to-day support across vendors and firms. With Kim's spirit in mind, the panel steps into the year-end ritual: name the surprises, own the misses, and offer a few grounded bets for what comes next.

Last year's thesis predicted a shift from novelty to utility, yet 2025 felt closer to a rolling hype loop. Glassmeyer frames generative AI as a multi-purpose knife dropped on every desk at once, which left many teams unsure where to start, even when budgets were already committed. Black brings the data lens: general-purpose gen AI use surged among lawyers, especially solos and small firms, while law firm adoption rose fast compared with earlier waves such as cloud computing, which crawled for years before pandemic pressure moved the needle. The group also flags a new social dynamic, status-driven tool chasing, plus a quiet trend toward business-tier ChatGPT, Gemini, and Claude as practical options for many matters when price tags for legal-only platforms sit out of reach for smaller shops.

Hallucinations stay on the agenda, with the panel resisting both extremes: doom posts and fan club hype. Glassmeyer recounts a founder's quip, “hallucinations are a feature, not a bug,” then pivots to an older lesson from KeyCite and Shepard's training: verification never goes away, and lawyers always owed diligence, even before LLMs. Black adds a cautionary tale from recent sanctions, where a lawyer ran the same research through a stack of tools, creating a telephone effect and a document nobody fully controlled. Lambert notes a bright spot from the past six months: legal research outputs improved as vendors paired vector retrieval with legal hierarchy data, including court relationships and citation treatment, reducing off-target answers even while perfection stays out of reach.

From there, the conversation turns to mashups across the market. Clio's acquisition of vLex becomes a headline example, raising questions about platform ecosystems, pricing power, and whether law drifts toward an Apple versus Android split. Black predicts integration work across billing, practice management, and research will matter as much as M&A, with general tech giants looming behind the scenes. Glassmeyer cheers broader access for smaller firms, while still warning about consolidation scars from legal publishing history and the risk of feature decay once startups enter corporate layers. The panel lands on a simple preference: interoperability, standards, and clean APIs beat a future where a handful of owners dictate terms.

On governance, Black rejects surveillance fantasies and argues for damage control, strong training, and safe experimentation spaces, since shadow usage already happens on personal devices. Gebauer pushes for clearer value stories, and the guests agree early ROI shows up first in back office workflows, with longer-run upside tied to pricing models, AFAs, and buyer pushback on inflated hours. For staying oriented amid fractured social channels, the crew trades resources: AI Law Librarians, Legal Tech Week, Carolyn Elefant's how-to posts, Moonshots, Nate B. Jones, plus Ed Zitron's newsletter for a wider business lens.
The crystal ball segment closes with a shared unease around AI finance, a likely shakeout among thinly funded tools, and a reminder to keep the human network strong as 2026 arrives. 

Tech Talk with Mathew Dickerson
Weed Robots, Watch Drones, Disney AI and a Teen Ban Blindspot – Plus Tesla's $350 Paddle.

Dec 28, 2025 · 51:45


Ban Blindspot: Gaming's Gaping Gap in the Great Aussie Online Clampdown.
Paddles, Premiums and Publicity: Tesla Takes a Swing at Pickleball.
Blood, Bands and Behaviour: How Whoop Is Blending Wearables with Wellness.
Mickey Meets Machines: Disney Dances with Generative AI.
Wrist Wizards and Wayward Wings: DJI's Drone Takes Orders from Your Watch.
Carbon Strings, Classic Soul: Printing the Future of the Cello.
Hallucinations on the Help Desk: Librarians vs Lying Language Models.
Weighty Wants: When Cable Control Costs a Pretty Penny.
Rolling with the Wind: The Tumbleweed Tech That Travels Further on Less Power.

Service Design Show
Designing for Truth in an Era of AI Hallucinations / Inside Service Design / Episode #08

Dec 25, 2025 · 61:47


We need to talk about the "intern" sitting on your desktop... Come on, you know the one. Sure, they are fast, very eager to please, and can process data at lightning speeds. But they also have a bad habit of hallucinating facts and making things up just to make you happy. Of course, I'm talking about AI.

It is fair to say that we are past the initial "wow" phase of generative AI. Now, for us service design professionals, the real question is: How do we actually hire, train, and trust this new digital colleague? That is the focus of this episode of our Inside Service Design series.

We sit down for a chat with two brilliant professionals: Jessica Dugan and Judith Buhmann. They share a grounded, hype-free look at how they are integrating AI into their own existing workflows. Not as a replacement for our work, but as a "Junior Associate" who needs some (sometimes a lot of) management.

To make this real, Jess walks us through the framework she uses for building her own custom AI agents. She explains how to define their "persona," scope their tasks, and curate their knowledge base so they can actually be useful (and safe). And Judith shares a critical perspective on why we can't fully trust AI yet. We explore why we need to treat AI as an "unreliable narrator," especially when working with vulnerable groups.

So if you are feeling a bit overwhelmed by the pressure to "use AI" but aren't sure how to do it responsibly, this conversation has some key insights you don't want to miss.

Here's a question: If you had to give your current AI tools a "performance" review, what rating would you give them? A) Employee of the month B) Promising intern (needs supervision) C) Chaos agent (fires random info at me). Let me know, I'm really curious where we are all at!

Be well, ~ Marc

--- [ 1. GUIDE ] ---
00:00 Welcome to the November Round Up
04:00 Jess's journey into service design
09:45 Judith's challenge
12:30 Designing for the employee experience and internal systems
14:00 The "Pros" of in-house service design
15:30 The necessity of patience and deep knowledge for in-house success
18:30 Judith topic
19:00 Jess topic: Building (and trusting) your own AI agent
23:00 Why we cannot fully trust any AI
27:00 Scoping the AI agent's role and understanding user need
29:00 Designing the "Human" side: Setting personality and tone for your agent
33:45 Accessibility: Is it actually hard to build your own agent?
35:30 Human-in-the-loop: Regulation and ensuring data accuracy
40:00 Why transparency matters more than just "trust"
47:00 Getting organizational buy-in for AI tools
54:45 Markers of success: How service blueprints live on after the workshop
56:30 Closing thoughts and Question to Ponder

--- [ 2. LINKS ] ---
https://www.linkedin.com/in/judithbuhmann/
https://www.linkedin.com/in/jess-dugan/

--- [ 3. CIRCLE ] ---
Join our private community for in-house service design professionals: https://servicedesignshow.com/circle

--- [ 4. FIND THE SHOW ON ] ---
Youtube ~ https://go.servicedesignshow.com/inside-service-design-08-youtube
Spotify ~ https://go.servicedesignshow.com/inside-service-design-08-spotify
Apple ~ https://go.servicedesignshow.com/inside-service-design-08-apple
Snipd ~ https://go.servicedesignshow.com/inside-service-design-08-snipd

Science Friday
A Neurologist Investigates His Own Musical Hallucinations

Dec 24, 2025 · 10:50


Imagine sitting at home and then all of a sudden you hear a men's choir belting out “The Star Spangled Banner.” You check your phone, computer, radio. Nothing's playing. You look outside, no one's there. That's what happened to neurologist Bruce Dobkin after he received a cochlear implant. He set out to learn everything he could about the condition, called musical hallucinosis. In a story from August, host Ira Flatow talks with Dobkin about his decision to publish his account in a medical journal and why the condition is more common than he realized.

Guest: Dr. Bruce Dobkin is a neurologist at UCLA Health.

Transcript is available at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

The TechEd Podcast
Ask Us Anything: Workforce ROI, AI Hallucinations, and the 5 Pillars of World-Class CTE

Dec 23, 2025 · 46:40 · transcription available


Watch the episode on YouTube: https://youtu.be/f5gWUVQI0jI

Melissa Martin and Matt Kirchner are back to answer your questions, covering everything from university curriculum design, to AI in the classroom, to what employers actually expect when they invest in education. This one moves fast, but it's focused: how do you build programs that truly prepare students for modern work? How do you keep education from falling behind as technology accelerates? Along the way, Matt and Melissa break down what universities need to change, how to raise the bar in the age of generative AI, why ethics can't be an afterthought, and how to help HR teams understand the value of credentials and new pathways.

Listen to learn:
What university programs should teach (in one course) to better prepare grads for modern manufacturing work
How educators can help students identify when AI is wrong and why we need to level-up our homework in the age of AI
The role of ethics in modern CTE
The five components of a world-class, modern advanced manufacturing high school program
How educators can measure program effectiveness and show ROI to industrial partners
What HR teams need to understand about changing credentials, degrees, and how to evaluate technical candidates

3 Big Takeaways from this Episode:
1. Universities have to teach applied industrial skills, not just theory. Matt argues that a four-year program can cover a lot of “cool stuff in the lab,” but it still needs authentic manufacturing equipment and technology so graduates understand what they will actually see in industry. He frames this as an employer expectation problem: even when budgets are tighter at the four-year level, universities still need to build around the same core technologies students will encounter on day one in manufacturing.
2. AI changes the standard for student work and makes ethics a core requirement. Melissa and Matt point out that AI is designed to produce an answer even when it doesn't know (causing a 'hallucination'), which means students must learn to question outputs and verify accuracy instead of treating AI as a sole source of truth. From there, the conversation moves from classroom integrity into broader ethics: what it means to do original work, and how humans should think and behave as AI becomes more capable and more embedded in decision-making.
3. Industry, HR, and educators must understand each other's needs to build a successful partnership. Education and industry both have a responsibility to do their part in a partnership. HR departments must understand the changing landscape of certifications, 3-year degrees, and other credentials that students are gaining to demonstrate their technical competency. Likewise, educators must adopt industry practices of tracking metrics to show employer partners the ROI of their investments in the program.

Access tons of links & resources on the episode page: https://techedpodcast.com/askusanything-122025/

We want to hear from you! Send us a text.
Instagram - Facebook - YouTube - TikTok - Twitter - LinkedIn

Crazy Wisdom
Episode #516: China's AI Moment, Functional Code, and a Post-Centralized World

Dec 22, 2025 · 64:59


In this episode, Stewart Alsop sits down with Joe Wilkinson of Artisan Growth Strategies to talk through how vibe coding is changing who gets to build software, why functional programming and immutability may be better suited for AI-written code, and how tools like LLMs are reshaping learning, work, and curiosity itself. The conversation ranges from Joe's experience living in China and his perspective on Chinese AI labs like DeepSeek, Kimi, Minimax, and GLM, to mesh networks, Raspberry Pi–powered infrastructure, decentralization, and what sovereignty might mean in a world where intelligence is increasingly distributed. They also explore hallucinations, AlphaGo's Move 37, and why creative “wrongness” may be essential for real breakthroughs, along with the tension between centralized power and open access to advanced technology. You can find more about Joe's work at https://artisangrowthstrategies.com and follow him on X at https://x.com/artisangrowth. Check out this GPT we trained on the conversation.

Timestamps
00:00 – Vibe coding as a new learning unlock, China experience, information overload, and AI-powered ingestion systems
05:00 – Learning to code late, Exercism, syntax friction, AI as a real-time coding partner
10:00 – Functional programming, Elixir, immutability, and why AI struggles with mutable state
15:00 – Coding metaphors, “spooky action at a distance,” and making software AI-readable
20:00 – Raspberry Pi, personal servers, mesh networks, and peer-to-peer infrastructure
25:00 – Curiosity as activation energy, tech literacy gaps, and AI-enabled problem solving
30:00 – Knowledge work superpowers, decentralization, and small groups reshaping systems
35:00 – Open source vs open weights, Chinese AI labs, data ingestion, and competitive dynamics
40:00 – Power, safety, and why broad access to AI beats centralized control
45:00 – Hallucinations, AlphaGo's Move 37, creativity, and logical consistency in AI
50:00 – Provenance, epistemology, ontologies, and risks of closed-loop science
55:00 – Centralization vs decentralization, sovereign countries, and post-global-order shifts
01:00:00 – U.S.–China dynamics, war skepticism, pragmatism, and cautious optimism about the future

Key Insights
Vibe coding fundamentally lowers the barrier to entry for technical creation by shifting the focus from syntax mastery to intent, structure, and iteration. Instead of learning code the traditional way and hitting constant friction, AI lets people learn by doing, correcting mistakes in real time, and gradually building mental models of how systems work, which changes who gets to participate in software creation.

Functional programming and immutability may be better aligned with AI-written code than object-oriented paradigms because they reduce hidden state and unintended side effects. By making data flows explicit and preventing “spooky action at a distance,” immutable systems are easier for both humans and AI to reason about, debug, and extend, especially as code becomes increasingly machine-authored.

AI is compressing the entire learning stack, from software to physical reality, enabling people to move fluidly between abstract knowledge and hands-on problem solving. Whether fixing hardware, setting up servers, or understanding networks, the combination of curiosity and AI assistance turns complex systems into navigable terrain rather than expert-only domains.

Decentralized infrastructure like mesh networks and personal servers becomes viable when cognitive overhead drops. What once required extreme dedication or specialist knowledge can now be done by small groups, meaning that relatively few motivated individuals can meaningfully change communication, resilience, and local autonomy without waiting for institutions to act.

Chinese AI labs are likely underestimated because they operate with different constraints, incentives, and cultural inputs. Their openness to alternative training methods, massive data ingestion, and open-weight strategies creates competitive pressure that limits monopolistic control by Western labs and gives users real leverage through choice.

Hallucinations and “mistakes” are not purely failures but potential sources of creative breakthroughs, similar to AlphaGo's Move 37. If AI systems are overly constrained to consensus truth or authority-approved outputs, they risk losing the capacity for novel insight, suggesting that future progress depends on balancing correctness with exploratory freedom.

The next phase of decentralization may begin with sovereign countries before sovereign individuals, as AI enables smaller nations to reason from first principles in areas like medicine, regulation, and science. Rather than a collapse into chaos, this points toward a more pluralistic world where power, knowledge, and decision-making are distributed across many competing systems instead of centralized authorities.

Our Curious Amalgam
#357 Is Strong Competition in GenAI a Hallucination? Discussing Economic Insights in the Industry

Dec 22, 2025 · 32:31


GenAI is one of the fastest-growing consumer tools in history. But what have we learned about competitive dynamics in the industry? Economists Andrea Asoni and Matteo Foschi join Jaclyn Phillips and Lexi Michaud to discuss what early economic insights can help us understand about the risks and benefits posed by the growth of AI. Listen to this episode to learn more about how the development of AI impacts market development.

With special guests: Andrea Asoni, Vice President, Charles River Associates, and Matteo Foschi, Vice President, Charles River Associates

Related Links: Contested ground: Early competition and market dynamics in generative AI

Hosted by: Jaclyn Phillips, Proskauer Rose, and Lexi Michaud, Fried Frank

The Alpha Male Coach Podcast
Episode 342: The Hallucination of Separation - Remembering Your True Identity

Dec 19, 2025 · 39:15


In this powerful episode of The Alpha Male Coach Podcast, Kevin takes you deep into the core confusion of the human condition: the hallucination of separation. Broadcasting from the Yucatán, Kevin explores why your consciousness feels different depending on your environment, why “I” is the most misunderstood word in the human language, and how the illusion of being a separate ego is the root of all suffering.

Kevin opens by distinguishing monism - the experiential recognition that all is one - from the far more common dualistic worldview most humans live in without question. He breaks down how the ego creates a false center of identity, a shadow-self imagined as a commander behind the eyes, and how this hallucination leads to fear, conflict, resistance, and misalignment across every dimension of life.

Drawing on wisdom from Alan Watts, spiritual psychology, and his own Alpha philosophy, Kevin reframes humanity's relationship to nature, purpose, money, and each other. You are not an isolated observer placed “into” the world, he argues - you emerged from the world, just like an apple emerges from an apple tree. You are not a stranger in an indifferent universe; you are the universe becoming aware of itself.

This episode explores:
Why the ego believes “I am in here and the world is out there,” and why this premise is false
How the belief in separation leads to fear, anxiety, loneliness, and conflict
What monism reveals about your relationship to nature, your purpose, your emotions, and other humans
Why artificial intelligence is not consciousness, not intelligence, and never will be
How your environment is an extension of your body, not something separate from you
How shifting from dualism to unity transforms your thoughts, emotions, actions, and results through the Model of Alignment

Kevin also offers practical embodiment tools to dissolve the hallucination of separation, including active meditation, breath practices, body-based inquiry, and applying the Model of Alignment to reveal whether your identity is operating from ego or from Alpha unity consciousness.

Ultimately, this is an episode about awakening - about remembering your true nature as the field of consciousness expressing itself temporarily as a human being. The end of suffering doesn't come from controlling life, resisting emotion, or conquering the world. It comes from recognizing that I and life are not two things. When separation dissolves, fear dissolves with it. What remains is flow, power, presence, and freedom - the Alpha state.

Brother, you are not separate from nature, from God, from humanity, or from your purpose. You are the expression of the infinite in human form.

EDRM Global Podcast Network
Echoes of AI: Episode 43 | Cross-Examine Your AI: The Lawyer's Cure for Hallucinations - Debate Format

Dec 19, 2025 · 16:01


Attorney, award-winning blogger, and AI expert Ralph Losey's curated and vetted podcast features his Anonymous Podcasters as they do a deep dive on Ralph's EDRM blog post titled "Cross-Examine Your AI: The Lawyer's Cure for Hallucinations." This episode is an example of a debate format contrasted to the discussion format in Episode 42.

EDRM Global Podcast Network
Echoes of AI: Episode 42 | Cross-Examine Your AI: The Lawyer's Cure for Hallucinations - Discussion Format

Dec 19, 2025 · 12:26


Attorney, award-winning blogger, and AI expert Ralph Losey's curated and vetted podcast features his Anonymous Podcasters as they do a deep dive on Ralph's EDRM blog post titled "Cross-Examine Your AI: The Lawyer's Cure for Hallucinations." This episode is an example of a discussion format contrasted to the debate format in Episode 43.

Chasing Heroine: On This Day, Recovery Podcast
Social Media Induced Psychosis in SOBRIETY, Using a Grape to Pass a Drug Test? Crashing into a Golf Course, Meth Hallucinations in Night Clubs in South Africa, Eating Disorder Treatment

Dec 18, 2025 · 107:01


Y'all, my guest today, Toni Becker, is absolutely incredible. Toni lives in South Africa and is a content creator with twelve years sober. Toni was plagued by disordered eating years before her addictions to meth and alcohol began. Years of partying led to homelessness, meth psychosis, severe illness, and kidney failure. In sobriety, Toni was still struggling with disordered eating. Treatment in sobriety finally helped her with that struggle. Later in sobriety, she developed facial dysmorphia induced by social media expectations and filters. Talking about these issues in sobriety was fascinating and I learned so much from Toni!
Connect with Toni on Instagram
DM me on Instagram
Message me on Facebook
Listen AD FREE & workout with me on Patreon
Connect with me on TikTok
Email me chasingheroine@gmail.com
See you next week!

Our Lady of Fatima Podcast
Episode 1476: The Myth of Collective Hallucination

Dec 18, 2025 · 26:27


We check out appendix I from chapter 10 in the first volume of The Whole Truth About Fatima. We also inspect an article from The Fatima Center about the tremendous importance of the first Saturday devotion and consecrating Russia to the Immaculate Heart. The consequences for not doing so are dire, to say the least.
Please support the Our Lady of Fatima Podcast: http://buymeacoffee.com/TerenceMStanton
Like and subscribe on YouTube: https://m.youtube.com/@OurLadyOfFatimaPodcast
Follow us on X: @FatimaPodcast
Subscribe to our Substack: https://terencemstanton.substack.com
Thank you!

AI in Education Podcast
AI in Education's Christmas Special: Hallucinations, Headbands, and Bad Ideas

Dec 18, 2025 · 36:18


In this end-of-year Christmas special, Ray and Dan squeeze in one final episode to reflect on a whirlwind year in AI and education - with a healthy dose of festive chaos. They unpack the latest AI news, including Australia's National AI Plan, OpenAI's Australian data centre and teacher certification course, major university rollouts of ChatGPT, and global experiments like nationwide AI tools in schools and targeted funding for AI-assisted teaching. But this episode quickly moves beyond policy and platforms into something more fun - and more unsettling! Ray challenges Dan with a "Real or Hallucinated?" quiz featuring AI products that may (or may not) exist, from focus-monitoring headbands and robot teachers to pet translators and laugh-track smart speakers. Along the way, they explore what these products reveal about current AI practice, the risks of anthropomorphising technology, and why education must keep humans firmly at the centre of learning - even as experimentation accelerates. It's a light-hearted but thoughtful way to wrap up 2025, and a reminder that just because AI can do something, doesn't always mean it should.

News items in the episode:
Tech companies advised to label and 'watermark' AI-generated content: https://www.abc.net.au/news/2025-12-01/ai-guidance-label-watermark-ai-content/106083786
El Salvador announces national AI program with Grok for Education: https://x.ai/news/el-salvador-partnership
Hong Kong schools to get HK$500,000 (about AU$100K / US$65K) each under AI education plan: https://www.scmp.com/news/hong-kong/education/article/3336600/hong-kong-schools-get-hk500000-each-under-hk500-million-ai-education-plan
OpenAI to open Australian hosted service: https://www.afr.com/technology/openai-becomes-major-tenant-in-7b-data-centre-deal-20251204-p5nkr4
OpenAI ChatGPT for Teachers foundations course: https://www.linkedin.com/posts/chatgpt-for-education_new-for-k-12-educators-chatgpt-foundations-activity-7404242317718487042-He9H
La Trobe chooses ChatGPT Education: https://www.latrobe.edu.au/news/articles/2025/release/openai-collaboration-drives-inclusion,-innovation
Australia's National AI Plan: https://www.industry.gov.au/publications/national-ai-plan

Easy Prey
You Are Traceable with OSINT

Dec 17, 2025 · 56:01


Publicly available data can paint a much clearer picture of our lives than most of us realize, and this episode takes a deeper look at how those tiny digital breadcrumbs like photos, records, searches, even the background of a Zoom call can be pieced together to reveal far more than we ever intended. To help break this down, I'm joined by Cynthia Hetherington, Founder and CEO of The Hetherington Group, a longtime leader in open-source intelligence. She also founded Osmosis, the global association and conference for OSINT professionals, and she oversees OSINT Academy, where her team trains investigators, analysts, and practitioners from all experience levels. Cynthia shares how she started her career as a librarian who loved solving information puzzles and eventually became one of the earliest people applying internet research to real investigative work. She talks about the first wave of cybercrime in the 1990s, how she supported law enforcement before the web was even mainstream, and why publicly accessible data today is more powerful and more revealing than ever. We get into how OSINT actually works in practice, from identifying a location based on a sweatshirt logo to examining background objects in video calls. She also explains why the U.S. has fewer privacy protections than many assume, and how property records, social media posts, and online datasets combine to expose surprising amounts of personal information. We also explore the growing role of AI in intelligence work. Cynthia breaks down how tools like ChatGPT can accelerate analysis but also produce hallucinations that investigators must rigorously verify, especially when the stakes are legal or security-related. She walks through common vulnerabilities people overlook, the low-hanging fruit you can remove online, and why your online exposure often comes from the people living in your home. Cynthia closes by offering practical advice to protect your digital footprint and resources for anyone curious about learning OSINT themselves. This is a fascinating look at how much of your life is already visible, and what you can do to safeguard the parts you'd rather keep private. Show Notes: [01:17] Cynthia Hetherington, Founder & CEO of The Hetherington Group is here to discuss OSINT or Open-Source Intelligence. [02:40] Early cyber investigators began turning to her for help long before online research tools became mainstream. [03:39] Founding The Hetherington Group marks her transition from librarian to private investigator. [04:22] Digital vulnerability takes center stage as online data becomes widely accessible and increasingly revealing. [05:22] We get a clear breakdown of what OSINT actually is and what counts as "publicly available information." [06:40] A simple trash bin in a photo becomes a lesson in how quickly locations can be narrowed down. [08:03] Cynthia shares the sweatshirt example to show how a tiny image detail can identify a school and possibly a city. [09:32] Background clues seen during COVID video calls demonstrate how unintentional information leaks became routine. [11:12] A news segment with visible passwords highlights how everyday desk clutter can expose sensitive data. [12:14] She describes old threat-assessment techniques that relied on family photos and subtle personal cues. [13:32] Cynthia analyzes the balance and lighting of a Zoom backdrop, pointing out what investigators look for. [15:12] Virtual and real backgrounds each reveal different signals about a person's environment. 
[16:02] Reflections on screens become unexpected sources of intelligence as she notices objects outside the camera frame. [16:37] Concerns grow around how easily someone can be profiled using only public information. [17:13] Google emerges as the fastest tool for building a quick, surface-level profile of almost anyone. [18:32] Social media takes priority in search results and becomes a major driver of self-exposed data. [19:40] Cynthia compares AI tools to the early internet, describing how transformative they feel for investigators. [20:58] A poisoning case from the early '90s demonstrates how online expert communities solved problems before search engines existed. [22:40] She recalls using early listservs to reach forensic experts long before modern digital research tools were available. [23:44] Smarter prompts become essential as AI changes how OSINT professionals gather reliable information. [24:55] Cynthia introduces her C.R.A.W.L. method and explains how it mirrors the traditional intelligence lifecycle. [26:12] Hallucinations from AI responses reinforce the need for human review and verification. [27:48] We learn why repeatable processes are crucial for building trustworthy intelligence outputs. [29:05] Elegant-sounding AI answers illustrate the danger of unverified assumptions. [30:40] An outdated email-header technique becomes a reminder of how quickly OSINT methods evolve. [32:12] Managed attribution—hiding your digital identity—is explained along with when it's appropriate to use. [33:58] Cynthia unpacks the reality that the U.S. has no constitutional right to privacy. [35:36] The 1996 case that sparked her digital-vulnerability work becomes a turning point in her career. [37:32] Practical opt-out steps give everyday people a way to remove basic personal data from public sites. [38:31] She discusses how indirect prompting of AI tools can still narrow down someone's likely neighborhood or lifestyle. [39:58] Property and asset records emerge as unavoidable exposure points tied to government databases. [40:52] A high-risk client's situation shows how family members often create digital vulnerabilities without realizing it. [42:44] Threats that surface too late demonstrate why proactive intelligence work is essential. [44:01] Concerns about government surveillance are contrasted with the broader access private investigators actually have. [45:12] Train tracks become an example of how physical infrastructure now doubles as a modern data network. [46:03] She explains how audio signatures and forensic clues could theoretically identify a train's path. [47:58] Asset tracking becomes a global operation as valuable cargo moves between ships, trucks, and rail systems. [49:48] Satellite imagery makes monitoring even remote or underwater locations almost effortless. [51:12] Everyday applications of geospatial analysis include environmental changes and shifts within local communities. [52:19] Surveillance is compared to gravity; it's constant, invisible, and always exerting pressure. [52:44] Cynthia shares practical strategies for controlling your environment and keeping conversations private. [54:01] Resources like OSINT Academy, Information Exposed, and the Osmosis Association offer pathways for learning and strengthening personal privacy. [55:32] The episode closes with encouragement to stay aware of what you share and how easily digital clues can be connected. Thanks for joining us on Easy Prey. Be sure to subscribe to our podcast on iTunes and leave a nice review.  
Links and Resources:
Podcast Web Page
Facebook Page
whatismyipaddress.com
Easy Prey on Instagram
Easy Prey on Twitter
Easy Prey on LinkedIn
Easy Prey on YouTube
Easy Prey on Pinterest
Hetherington Group
OSMOSIS
OSINT Academy
Cynthia Hetherington - LinkedIn
OSINT: The Authoritative Guide to Due Diligence
Business Background Investigations: Tools and Techniques for Solution Driven Due Diligence

Columbia Broken Couches
Harvard Neuroscientist Explains: What do your dreams actually mean? | PGX IDEAS

Dec 15, 2025 · 113:01


Welcome to PGX Ideas #5

Here I sit down with some of the most interesting minds in the world — scientists, philosophers, economists, and creators — to think about the forces shaping the future. Each episode is an attempt to understand ONE BIG question. The kind that doesn't have a single answer but reveals how people who think deeply approach life, work, and meaning.

This time, I spoke with Baland Jalal (@DrBalandJalal), Harvard neuroscientist, about: What Are Dreams, and Why Do We Dream?

Timestamps:
00:00 - Science Behind Dreams
11:44 - Why Do We Fly in Dreams?
14:54 - Why Do We Dream?
21:30 - Why Don't We Remember Our Dreams?
25:20 - Why Does Time Work Differently in Our Dreams?
29:00 - Connection Between the Physical World and Dreams
33:19 - Connection Between Dreams and the Cosmos
36:46 - Do Dreams Mean Something?
46:50 - Connection b/t Past and Dreams
54:54 - Can We Control Our Dreams?
59:56 - Can We Dream While We Are Conscious?
1:03:07 - Why Can't We Return to the Same Dream?
1:07:18 - Technology of Dreaming
1:10:16 - Do We Only Use 10% of Our Brain?
1:14:40 - Treating Depression Through Dream Technology
1:17:00 - Dreams in Psychedelic States
1:20:12 - Sleep Paralysis and Its Cultural Meaning
1:22:36 - Cultural Roots of Evil
1:30:45 - Hallucinations and AI
1:33:00 - Are We All Living a Dream?
1:42:20 - Cotard Syndrome
1:45:15 - Dissociation and Neural Activity
1:47:25 - What Is Consciousness?

Enjoy.
— Prakhar

Inside Mental Health: A Psych Central Podcast
Inside Schizophrenia: What Hallucinations Really Feel Like

Dec 11, 2025 · 57:49


Hallucinations are the most recognized—and most misunderstood—symptom of schizophrenia. Movies depict them as dramatic, terrifying commands or cinematic visions, but the lived reality is far more complex. In this episode we unravel what hallucinations actually are, why they happen, and how people learn to live with them. This episode is a special feature from our sister show Inside Schizophrenia. Hosted by Rachel Star Withers (who lives with schizophrenia), with Gabe Howard as co-host. (Don't worry, new Inside Mental Health episodes return in 2026.)

In this episode, Rachel shares her own experiences, from everyday “simple” hallucinations like sounds or shifting faces, to more intense, emotion-laden complex hallucinations. She challenges the assumption that hallucinations are always violent or dangerous—and breaks down the critical differences between hallucinations and sensory disturbances. Expert guest Dr. Paul Fitzgerald joins the conversation to explain how the brain creates these perceptual misfires, why hallucinations in schizophrenia differ from those caused by grief, sleep deprivation, or drugs, and what current research reveals about how universal these experiences are across different cultures and countries.

Listener Takeaways:
The difference between simple vs. complex hallucinations
Why hallucinations in schizophrenia feel different from drug- or grief-based ones
Why reducing—not eliminating—hallucinations is often the realistic recovery goal
How CBT and coping strategies help reduce fear and regain control

Whether you live with schizophrenia, love someone who does, or are simply curious about how the brain works, this episode offers clarity, compassion, and surprising insights you won't forget.

Guest, Professor Paul Fitzgerald, completed his medical degree at Monash University and subsequently a Master of Psychological Medicine whilst completing psychiatric training. He then undertook a Clinical and Research Fellowship at the University of Toronto and The Clarke Institute of Psychiatry, Toronto, Ontario, Canada. On returning to Melbourne, he worked as a psychiatrist and completed a PhD in transcranial magnetic stimulation in schizophrenia. Since completing this PhD, he has developed a substantial research program including a team of over 25 psychiatrists, registrars, postdoctoral researchers, research assistants, research nurses, and students. Professor Fitzgerald runs a research program across both MAPrc and Epworth Clinic using brain stimulation and neuroimaging techniques including transcranial magnetic stimulation, functional and structural MRI, EEG, and near infrared spectroscopy. The primary focus of this program is on the development of new brain stimulation-based treatments for psychiatric disorders.

Guest host, Rachel Star Withers, creates videos documenting her schizophrenia, ways to manage, and let others like her know they're not alone and can still live an amazing life. She has written “Lil Broken Star: Understanding Schizophrenia for Kids” and a tool for schizophrenics, “To See in the Dark: Hallucination and Delusion Journal.” Learn more at RachelStarLive.com.

Our host, Gabe Howard, is an award-winning writer and speaker who lives with bipolar disorder. He is the author of the popular book, "Mental Illness is an Asshole and other Observations," available from Amazon; signed copies are also available directly from the author. Gabe is also the host of the "Inside Bipolar" podcast with Dr. Nicole Washington. Gabe makes his home in the suburbs of Columbus, Ohio.
He lives with his supportive wife, Kendall, and a Miniature Schnauzer dog that he never wanted, but now can't imagine life without. To book Gabe for your next event or learn more about him, please visit gabehoward.com.

PodRocket - A web development podcast from LogRocket
Shopify Winter '26 Edition: building faster with the Dev MCP server with Eytan Seidman

Dec 11, 2025 · 40:22


Eytan Seidman, VP of product at Shopify, joins the podcast to unpack Shopify's Winter '26 Edition and how AI is emerging into the market for developers and merchants. They discuss the new Dev MCP server, showing how tools like Cursor and Claude Desktop can rapidly scaffold Shopify apps, wire up Shopify Functions, and ship payment customization and checkout UI extension experiences that lean on Shopify primitives like metafields and metaobjects across online stores and point of sale. Eytan also breaks down how Sidekick connects with apps, why the new analytics API and ShopifyQL open fresh analytics use cases, and more.

Links
Shopify Winter '26 Edition: https://www.shopify.com/editions/winter2026

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)! Let us know by sending an email to our producer, Elizabeth, at elizabeth.becz@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).
Check out our newsletter: https://blog.logrocket.com/the-replay-newsletter/

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)

Chapters
01:00 — AI as the Focus of Winter '26
02:00 — MCP Server as the Ideal Dev Workflow
03:00 — Best Clients for MCP (Cursor, Claude Desktop)
04:00 — Hallucinations & Code Validation in MCP
06:00 — Developer Judgment & Platform Primitives
07:00 — Storage Choices: Meta Fields vs External Storage
09:00 — Learning UI Patterns Through MCP
10:00 — Sidekick Overview & Merchant Automation
11:00 — Apps Inside Sidekick: Data & UI Integration
13:00 — Scopes, Data Access & Developer Responsibility
14:00 — AI-Ready Platform & Explosion of New Apps
16:00 — New Developer Demographics Entering Shopify
17:00 — Where Indie Devs Should Focus (POS, Analytics)
18:00 — New Analytics API & Opportunities
19:00 — Full Platform Coverage via MCP Tools
20:00 — Building Complete Apps in Minutes
21:00 — Large Stores, Token Limits & MCP Scaling
22:00 — Reducing Errors with UI & Function Testing
23:00 — Lessons from Building the MCP Server
25:00 — Lowering Barriers for Non-Experts
26:00 — High-Quality Rust Functions via MCP
27:00 — MCP Spec Adoption: Tools Over Resources
28:00 — Future: Speed, Quality & UI Precision
29:00 — Model Evolution, Evals & Reliability
31:00 — Core Shopify Primitives to Build On
33:00 — Docs, Community & Learning Resources

Most Innovative Companies
Here's how Amazon is trying to handle AI hallucinations

Dec 11, 2025 · 76:26


On today's episode, cohosts Yasmin Gagne and Josh Christensen discuss the latest news in business and innovation, including the Warner Bros. Discovery deals, Nvidia's permission to sell AI chips to China, and Trump's attempt to bail out farmers. (00:44) Next, Yaz and Josh speak with writer, filmmaker and Fast Company contributor John Pavlus about AI hallucinations and how Amazon is trying to minimize them. (03:04) And finally, Yaz talks to Matt Baer, CEO of the subscription styling service Stitch Fix, about his turnaround plan to increase revenue and active client growth, and how Stitch Fix partners its stylists with in-house AI tools for a better personalized styling experience. (30:17)

For more of the latest business and innovation news, go to fastcompany.com/news
To read John's reporting about Amazon's usage of AI that minimizes hallucinations, go to fastcompany.com/91446331/amazon-byron-cook-ai-artificial-intelligence-automated-reasoning-neurosymbolic-hallucination-logic

Data Transforming Business
Responsible AI Starts with Responsible Data: Building Trust at Scale

Dec 11, 2025 · 26:00


We live in a world where technology moves faster than most organisations can keep up. Every boardroom conversation, every team meeting, even casual watercooler chats now include discussions about AI. But here's the truth: AI isn't magic. Its promise is only as strong as the data that powers it. Without trust in your data, AI projects will be built on shaky ground.

In this episode of the Don't Panic, It's Just Data podcast, Amy Horowitz, Group Vice President of Solution Specialist Sales and Business Development at Informatica, joins moderator Kevin Petrie, VP of Research at BARC, to tackle one of the most pressing topics in enterprise technology today: the role of trusted data in driving responsible AI. Their discussion goes beyond buzzwords to focus on actionable insights for organisations aiming to scale AI with confidence.

Why Responsible AI Begins with Data
Amy opens the conversation with a simple but powerful observation: "No longer is it okay to just have okay data." This sets the stage for understanding that AI's potential is only as strong as the data that feeds it. Responsible AI isn't just about implementing the latest algorithms; it's about embedding ethical and governance principles into every stage of AI development, starting with data quality.

Kevin and Amy emphasise that organisations must look at data not as a byproduct, but as a foundational asset. Without reliable, well-governed data, even the most advanced AI initiatives risk delivering inaccurate, biased, or ineffective outcomes.

Defining Responsible AI and Data Governance
Responsible AI is more than compliance or policy checkboxes. As Amy explains, it is a framework of principles that guide the design, development, deployment, and use of AI. At its core, it is about building trust, ensuring AI systems empower organisations and stakeholders while minimising unintended consequences. Responsible data governance is the practical arm of responsible AI. It involves establishing policies, controls, and processes to ensure that data is accurate, complete, consistent, and auditable.

Prioritise Data for Responsible AI
The takeaway from this episode is clear: responsible AI starts with responsible data. For organisations looking to harness AI effectively:
Invest in data quality and governance — it is the foundation of all AI initiatives.
Embed ethical and legal principles in every stage of AI development.
Enable collaboration across teams to ensure transparency, accountability, and usability.
Start small, prove value, and scale — responsible AI is built step by step.

Amy Horowitz's insight resonates beyond the tech team: "Everyone's ready for AI — except their data." It's a reminder that AI success begins not with the algorithms, but with the trustworthiness and governance of the data powering them.

For more insights, visit Informatica.

Takeaways
AI is only as good as its data inputs.
Data quality has become the number one obstacle to AI success.
Organisations must start small and find use cases for data governance.
Hallucinations in AI models highlight the need for vigilant oversight.

Knock Knock, Hi! with the Glaucomfleckens
Glauc Talk: How Does Vision Loss Turn Into Visual Hallucinations?

Dec 9, 2025 · 42:05


This week, Kristin and I dive into aphantasia, anendophasia, and why apparently my wife's brain works more like a slideshow than a podcast. I'm learning things about her that I wish I didn't know, and she's learning that most people don't think in GIFs. We also talk about what happens when meditation goes wrong, how brains get rewired after trauma, and why sometimes “turning your brain off” might actually mean “floating into the void.” Then we flip open the med student bible for a crash course in aphasia, before spiraling into phantom limb pain and Charles Bonnet syndrome.

Takeaways:
Aphasia vs. Dysarthria: The difference between not knowing what to say and just not being able to say it.
Meditation Meltdowns: Why Kristin's version of “mindfulness” ends in existential dread.
Inner Voice or Image Stream?: The fascinating brain split you didn't know existed.
Phantom Senses: From missing limbs to hallucinated kids on porches (yep, that happened).
Broken Broca: What happens when the language centers revolt.

To Get Tickets to Wife & Death: You can visit Glaucomflecken.com/live
We want to hear YOUR stories (and medical puns)! Shoot us an email and say hi! knockknockhi@human-content.com
Can't get enough of us? Shucks. You can support the show on Patreon for early episode access, exclusive bonus shows, livestream hangouts, and much more! http://www.patreon.com/glaucomflecken
Also, be sure to check out the newsletter: https://glaucomflecken.com/glauc-to-me/
If you are interested in buying a book from one of our guests, check them all out here: https://www.amazon.com/shop/dr.glaucomflecken
If you want more information on models I use: Anatomy Warehouse provides for the best, crafting custom anatomical products, medical simulation kits and presentation models that create a lasting educational impact. For more information go to AnatomyWarehouse.com. Link: https://anatomywarehouse.com/?aff=14 Plus, for 15% off use code: Glaucomflecken15
A friendly reminder from the G's and Tarsus: If you want to learn more about Demodex Blepharitis, making an appointment with your eye doctor for an eyelid exam can help you know for sure. Visit http://www.EyelidCheck.com for more information.
Today's episode is brought to you by Microsoft Dragon Copilot. Dragon Copilot is an AI clinical assistant that streamlines documentation, surfaces critical information, and automates routine tasks — empowering healthcare teams to focus more on patients and less on administrative work. Learn more at https://glau.cc/Dragon
Produced by Human Content

Legally Speaking Podcast - Powered by Kissoon Carr
Prompt. Learn. Transform: How AI Is Rewiring the Way Lawyers Work

Legally Speaking Podcast - Powered by Kissoon Carr

Play Episode Listen Later Dec 9, 2025 29:53


AI has exploded across the legal industry, but for many lawyers it still feels overwhelming, risky, or simply “not for them.” Today's guest has made it his mission to change that. Joining us is Robert Eder, an intellectual property lawyer, legaltech educator, and one of Europe's leading voices on AI prompting for lawyers. Robert designs legal automation solutions and teaches lawyers around the world how to use AI safely, effectively, and creatively. He has trained hundreds of lawyers across Europe and is one of the clearest voices on how to use AI responsibly, safely, and with real legal precision.

Here are a few standout takeaways:
Lawyers aren't bad at prompting; they're undersold. Their analytical mindset actually gives them an advantage.
Most people still treat AI like Google. Adding structure through XML tags, roles, and answer-levelling changes everything.
The first AI skill every lawyer should learn isn't drafting; it's controlling output. Structure before substance.
Hallucinations aren't a deal-breaker. Responsible AI frameworks give you quality control, not guesswork.
You don't need 70% of AI tools on the market. With the right prompting, one model + the right workflow beats shiny software every time.
Legal prompting is not the same as general prompting. Law has edge cases, nuance, and risk; your prompts must reflect that.

Two general points to reflect on:
Lawyers don't need to become engineers. They need to become better communicators with machines.
If you don't understand prompting, you'll always think AI is unreliable — when in reality, it's only as clear as the instructions you give it.

It's practical, hands-on and genuinely career-shifting. AI isn't replacing lawyers. Lawyers who understand AI are replacing the ones who don't.
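As an illustration of the structured prompting the episode describes, here is a minimal sketch of an XML-tagged legal prompt. The tag names, role text, and sample clause are invented for this example, not Robert Eder's actual templates; any chat model that accepts a plain string would do.

    # Illustration only: an XML-structured prompt of the kind discussed in
    # the episode. The tags, role text, and clause are invented examples.
    role = "You are an intellectual property lawyer reviewing a licence clause."

    prompt = f"""<role>{role}</role>
    <task>Identify the risks in the clause and grade each as high, medium, or low.</task>
    <clause>Licensee may sublicense the Software to affiliates without notice.</clause>
    <output_format>A numbered list, one risk per item, ending with an overall confidence level.</output_format>"""

    print(prompt)  # send this string to the model of your choice

The guest's point survives even this toy example: the model's output is only as controlled as the structure you give it, and tagging role, task, input, and output format is the cheapest way to add that control.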

Fringe Radio Network
Seeing the Messiah - Unrefined Podcast

Fringe Radio Network

Play Episode Listen Later Dec 8, 2025 76:13 Transcription Available


We are shaking things up in 2026--Make sure to JOIN US for ALL the content coming - https://join.unrefinedpodcast.com

This discussion explores why some visions carry weight, how to separate hallucinations from objective phenomena, and why believers shouldn't jump to theological conclusions from every paranormal moment. We compare modern apparition research to the biblical resurrection accounts and find powerful similarities and key differences. Ultimately, this conversation asks a bold question: what exactly does it mean to “see” the Messiah — and how do we discern truth in a world overflowing with spiritual noise? Featuring Dr. Jonathan Kendall and Matthew McGuire.

https://x.com/JKendallMD
Get Jonathan's book on Amazon: https://amzn.to/4oBl8eZ
https://wipfandstock.com/9798385248995/seeing-the-messiah/
https://themattmcguire.substack.com/p/what-hath-apparitions-to-do-with

Steve Talks Books
Page Burners: House of Chains by Steven Erikson - Chapters 13 - 16

Steve Talks Books

Play Episode Listen Later Dec 6, 2025 134:02


In this episode, the hosts delve into the intricate world of the Malazan series, focusing on 'House of Chains.' They explore the complexities of characters like Karsa Orlong, the moral ambiguities present in the narrative, and the significant themes of power struggles, suffering, and the role of female characters. The discussion also touches on the psychological aspects of oppression and the impact of hallucinations as a narrative device, all while appreciating the rich world-building that defines the series. The conversation then turns to the roles of characters such as Heboric, Fiddler, and Cotillion, exploring the significance of ascension, the emotional depth of the characters, and the narrative structure that intertwines past and present. The hosts also consider the implications of immortality and the cyclical nature of history as it relates to the characters' journeys.

Send us a message (I'm not able to reply)
Support the show
Page Chewing Blog
Page Chewing Forum
Film Chewing Podcast
Speculative Speculations Podcast
Support the podcast via PayPal
Support the show by using our Amazon Affiliate link
Join Riverside.fm
Co-Hosts: Jarrod, Varsha, Chris, Jose, Carl D. Albert (author), Thomas J. Devens (author), Alex French (author)
Intro and Outro Music by Michael R. Fletcher (2024-Current)

Dirt And Vert
Angels, Ultras, and Hallucinations: The John Calabrese Story

Dirt And Vert

Play Episode Listen Later Dec 6, 2025 98:41


This week on the Dirt and Vert Podcast, we're honored to sit down with John Calabrese, an ultra runner whose journey proves that sometimes, the hardest parts of life lead you straight to the starting line!

John opens up about how personal struggles, including a divorce, fueled his passion and led him from the road to the trail. We dive deep into the mental challenges of ultra running, sharing hilarious anecdotes about the hallucinations that come with extreme fatigue (seriously, what did he see?!).

But John's story is bigger than miles and mirages. He shares his incredible commitment to Ainsley's Angels, running alongside people with disabilities and finding profound joy in supporting others. John's journey highlights the power of community, resilience, and finding pure joy in the run.

Get ready for an honest, funny, and deeply inspiring conversation that shows how running can truly be the best therapy.

B2B Marketing: Tomorrow's Best Practices... Today
The Real Risk of Waiting on AI in Marketing and Revenue Teams

B2B Marketing: Tomorrow's Best Practices... Today

Play Episode Listen Later Dec 4, 2025 33:15


How fast will AI reshape marketing operations—and can leaders afford to sit on the sidelines? Host Clark Newby sits down with Tom Grubb, sales and marketing enablement consultant and former exec at Marketo and Intuit, to break it all down.

In this episode of Tomorrow's Best Practices, Clark and Tom unpack what's happening right now in marketing operations and go-to-market strategy as AI moves from theory to daily practice. Fresh off MarketingPalooza, Tom shares what he heard from marketing ops leaders on the ground—from AI agents that can execute Marketo tasks to teams building their own custom agents for analytics and operations.

They explore the real tension leaders face today: innovation versus risk. How much control is enough before trusting automation? When does playing it safe actually become the bigger risk? And what happens when your competitors move faster than your internal processes can keep up?

-----
CONNECT with us at:
Website: https://leadtail.com/
Leadtail TV: https://www.leadtailtv.com/
LinkedIn: https://www.linkedin.com/company/lead...
Twitter: https://twitter.com/leadtail
Facebook: https://www.facebook.com/Leadtail/
Instagram: https://www.instagram.com/leadtail/
----
0:00 – The Risk vs Reward of Adopting New Technology
1:03 – Welcome to Tomorrow's Best Practices + Guest Introduction
3:09 – Cutting Through Chaos With Data and Alignment
3:57 – MarketingPalooza Takeaways and the AI Pulse Check
7:33 – Where AI Is Showing Up in Marketing Ops Right Now
8:31 – AI Agents, Automation, and Building Your Own Tools
13:08 – Trust, Hallucinations, and When Automation Becomes Risky
17:09 – Marketing, Measurement, and Board-Level Expectations
22:44 – Defining What Marketing Success Really Means
29:37 – Life Outside Work and a Book for Boards and Marketers

#b2bmarketing #b2b

2 Nerds In A Pod: A Video Game Podcast
Trivia Hallucinations – 2 Nerds In A Pod Ep. 361

2 Nerds In A Pod: A Video Game Podcast

Play Episode Listen Later Dec 3, 2025 61:55


Episode 361, where we talk about brand loyalty, see if AI can replace us, the most recent Supreme Court case that might destroy the internet, Leroy Jenkins, and more! Join the conversation with us LIVE every Monday on twitch.tv/2nerdsinapod at 9pm CST. Viewer questions/business inquiries can be sent to 2nerdsinapodcast@gmail.com Follow us on twitter @2NerdsInAPod […]

321 Biz Development
Episode 1039: Did You Hear About the AI Hallucination Mode to Just Engage You and Not Provide Accurate Info?

321 Biz Development

Play Episode Listen Later Dec 2, 2025 2:44


For small business owners, Strategic Consulting Experts will be there to help you learn how to engage customers to grow your business without AI.

The Action Catalyst
CLIP: Unstructured Data, Moore's Law, and Hallucinations

The Action Catalyst

Play Episode Listen Later Nov 28, 2025 5:18


Gayathri Krishnamurthy, Head of Product Marketing at Level AI, explains how AI is beginning to use data differently from previous versions, how the rate at which its capabilities are expanding is exponential, and how, despite this, the emerging technology is still not without its drawbacks...Hear the full conversation in a first-ever "live" episode of The Action Catalyst.

Serious Inquiries Only
SIO498: The Science of Hallucinations, Ear-worms, and Phantom Limbs

Serious Inquiries Only

Play Episode Listen Later Nov 26, 2025 69:06


Dr. Jenessa Seymour is on to teach us more about the brain! What do hallucinations, ear-worms, and phantom limbs have in common? And is "hallucinating" really a good term for what AI does when it makes something up?

Distance To Empty
The Science of Sleep Disruption & Hallucinations in Ultrarunning w/ Dr. Sarah Reeve

Distance To Empty

Play Episode Listen Later Nov 26, 2025 84:19


Become a Distance to Empty subscriber!: https://www.patreon.com/DistancetoEmptyPod
Check out Mount to Coast here: https://mounttocoast.com/discount/Distance
Use code DISTANCE at Janji.com and be sure to select 'podcast' > 'Distance to Empty' on the post purchase "How did you hear about Janji" page. Thank you!

In this episode of the Distance to Empty podcast, host Kevin Goldberg is joined by guest co-host Rachel Bambrick and special guest Dr. Sarah Reeve, a clinical psychologist and research lecturer from the University of East Anglia. Together, they delve into the fascinating intersection of sleep disruption, hallucinations, and ultra running. Discover how sleep deprivation impacts the mind during multi-day ultra marathons, and explore the science behind hallucinations experienced by runners. Whether you're an ultra runner or just curious about the limits of human endurance, this episode offers intriguing insights into the mental challenges faced by athletes. Tune in for a captivating discussion that bridges the gap between psychology and extreme sports.

Cloud Wars Live with Bob Evans
AI Agent & Copilot Podcast: VisualSP's Asif Rehmani Details Copilot Training Resources to Boost ROI

Cloud Wars Live with Bob Evans

Play Episode Listen Later Nov 26, 2025 11:24


Information access: While many have Copilot licenses, usage is low beyond basic tasks like email and meeting summaries. The main challenge with adoption is providing guidance within apps like PowerPoint, Excel, Dynamics, and Word so users can access help exactly when they need it. This is something Rehmani's company, VisualSP, and his training platform, copilottrainingpackage.com, specialize in. "I'm a big proponent of giving people 'at the moment need' information," he notes.

Training paths: Copilottrainingpackage.com enables users to go down different "training paths," explains Rehmani. Specifically, there are pre-built PowerPoint training modules covering key topics like prompt creation and preventing hallucinations. Additionally, there's learning management system (LMS)-ready video content on Copilot use cases in Word, Excel, and other tools for on-demand learning. Finally, the platform offers optional live training sessions for trainers and power users to ensure effective adoption and ROI from Copilot. "At the end of the day, it's all about making Copilot into ROI and not just an expense layer."

What to expect: Rehmani describes the "anatomy" of the program. It uses seven modules to teach trainers and power users how to craft effective prompts, reduce Copilot errors, and apply specific workflows for high-impact ROI. Then, participants share this knowledge internally, enabling time savings and efficiency across their organizations.

End-of-year pricing: Users can take advantage of this resource with special pricing through the end of the year. Users can purchase the standalone package for $4,950 or the package and live training for $8,950, all of which could be delivered in 2026, explains Rehmani.

Visit Cloud Wars for more.

The Peak Daily
Citation needed

The Peak Daily

Play Episode Listen Later Nov 26, 2025 10:24


Just because a study has a long, jargon-filled title doesn't mean it's real. The company accused of driving up your rent is quietly settling a price-fixing lawsuit.

How To Survive with Danielle & Kristine
How to Survive a Bad Psychedelic Trip (w/ Ilana Cohn-Sullivan)

How To Survive with Danielle & Kristine

Play Episode Listen Later Nov 25, 2025 78:25


This week, Danielle and Kristine learn how to survive a bad psychedelic trip — visions, paranoia, melting faces, and all. Then, Ilana Cohn-Sullivan joins to share her unforgettable ayahuasca experience, complete with continuous vomiting, cheetah cosplay, and the moment things tipped from “transformational” to “terrifying.”

Dangerous INFO podcast with Jesse Jaymz
238 "The Secret Teachings" ft. Ryan Gable, fake history, NASA, deception, the 109, the year of 1948

Dangerous INFO podcast with Jesse Jaymz

Play Episode Listen Later Nov 25, 2025 157:23


Send us a text

Tonight our guest is Ryan Gable, a veteran radio personality and producer for his weeknight show The Secret Teachings, named after Manly Hall's magnum opus. His broadcasts focus on the synchronicity and objective analysis of Parapsychology, Pop-conspiracy, Para-politics, Pop Occult-ure, Health, History, the Paranormal, Symbolism, Alchemy, Magic, Philosophy, and more, in the most distinct ways by finding parallels and patterns often overlooked.

Ryan is also the author of several books including his esoteric masterpiece ‘Occult Arcana', his compendium of synchronicity ‘The Technological Elixir', and ‘Liberty Shrugged', which is an unconventional and non-partisan look into the history of the United States. His new book ‘Garden of Hallucinations' is an esoteric primer focusing on the symbolism of the human body, the relationship between magic and miracle in religious texts, and psychic manifestations.

Ryan's website: https://thesecretteachings.info/

SUPPORT THE SHOW
Buy Me A Coffee http://buymeacoffee.com/Dangerousinfopodcast
SubscribeStar http://bit.ly/42Y0qM8
Super Chat Tip https://bit.ly/42W7iZH
Buzzsprout https://bit.ly/3m50hFT
Paypal http://bit.ly/3Gv3Zjp
Patreon http://bit.ly/3G3

Visit our affiliate, GrubTerra to get 20% off your next order of pet treats: https://bit.ly/436YLVZ

SMART is the acronym that was created by technocrats who have set up the "internet of things" that will eventually enslave humanity to their needs.

Support the show

Connect
Website https://www.dangerousinfopodcast.com/
Discord chatroom: https://discord.gg/8feGHQQmwg
Email the show dangerousinfopodcast@protonmail.com
Join mailing list http://bit.ly/3Kku5Yt
GrubTerra Pet Treats https://bit.ly/436YLVZ

Watch Live
YouTube https://www.youtube.com/@DANGEROUSINFOPODCAST
Rumble https://bit.ly/4q1Mg7Z
Twitch https://www.twitch.tv/dangerousinfopodcast
Pilled.net https://pilled.net/profile/144176
Facebook https://www.facebook.com/DangerousInfoPodcast/

Socials
Instagram https://www.instagram.com/dangerousinfo/
Twitter https://twitter.com/jaymz_jesse
YouTube https://bit.ly/436VExn
Facebook https://bit.ly/4gZbjVa

Send stuff: Jesse Jaymz, PO Box 541, Clarkston, MI 48347

On The Runs
198 | Scott Jenkins | Ultra Hallucinations | Surprise guest call in!

On The Runs

Play Episode Listen Later Nov 25, 2025 147:12 Transcription Available


In this episode, we welcome Scott Jenkins (27:55), an ultra runner from the UK who shares his incredible journey from running in the UK to participating in ultra marathons in the US. Scott discusses his experiences at the National Running Show, the challenges and triumphs of his Boston to Austin run, and the lessons he's learned through ultra running. He also shares insights from his recent race, the Arizona Monster 300, and emphasizes the importance of community support and proper race preparation. The conversation explores the emotional and physical challenges of ultra running, with personal experiences from races like the Cocodona 250, the importance of maintaining perspective, the role of community support, and the lessons learned from both successes and failures in racing. It also highlights the significance of proper nutrition, particularly salt intake, and the emotional journey runners experience as they approach the finish line, along with future goals and bucket list races and the continuous growth that comes with being an ultra runner. Later, Scott reflects on lessons learned and the importance of community, the discussion takes a light-hearted turn into horror stories and personal fears, and Scott walks through the Triple Crown of 200s and his charity work with Operation Smile, closing with a focus on embracing happiness and adventure in life.

During the Tros, Six Star surprises Knute with an unexpected guest as Carolina joins the pod after she was promoted to "Social Media and PR Manager" for Six Star Erika. They got to know Carolina, her love for Ghost Train, Taylor Swift, her friendship with Yuki and much more!

Chapters
00:00 Intro | Thunder Chicken!
09:28 Eric's Eventful Week: Gout and Car Accident
11:34 Surprise Carolina: The Social Media Manager
14:30 Budding Friendship: Carolina and Erika
17:50 Karaoke Dreams and Fun with Friends
19:11 Content Creation and Social Media Goals
23:36 Exploring Viral Trends and Influencers
27:53 Scott Jenkins | Guest Segment
35:15 Boston to Austin
52:43 Arizona Monster 300
01:04:20 Triple Crown of 200's
01:29:59 Operation Smile
01:36:21 Chased by a Witch
01:49:35 The Outro with Carolina
01:51:56 Weekend Adventures and Personal Connections
01:56:55 Pop Culture and Celebrity Insights
02:00:10 Running for a Cause
02:01:44 Personal Journey and Motivation
02:03:50 Community Involvement and Volunteering
02:05:37 Overcoming Self-Doubt in Running
02:06:39 Emotional Moments in Races
02:12:12 Friendship and Support in the Running Community
02:15:12 Inviting Special Guests to Dinner
02:18:40 Hot Takes for 2026
02:22:25 Embracing Happiness and Kindness
02:24:24 Wrap-Up and Sign-Offs

Strava Group
Linktree - Find everything here
Instagram - Follow us on the gram
YouTube - Subscribe to our channel
Patreon - Support us
Threads
Email us at OnTheRunsPod@gmail.com

Don't Fear The Code Brown and Don't Forget To Stretch!

Grace Bible Church - Equipping Hour Podcast
Equipping Hour: Biblically Thinking About AI (Part 1)

Grace Bible Church - Equipping Hour Podcast

Play Episode Listen Later Nov 23, 2025 59:56


The following is an AI-generated approximation of the transcript from the Equipping Hour session. If you have questions you would like to be addressed in follow-up sessions, please direct those to Jacob.

Opening & Introduction

Smedly Yates: All right, this morning’s equipping hour will be about artificial intelligence—hopefully an attempt to introduce this topic, help us think through it carefully, well, biblically. Let me just open our time in prayer. [Prayer] Heavenly Father, thank you so much for your kindness to us. Thank you for giving us all that we need for life and godliness, for not leaving your people adrift. Thank you for putting us into this world exactly in the era that you have. We pray to be effective, fruitful, in all those things which matter for eternity in this world, in this time, in this age. God, we pray for wisdom, that you would guide our discussion here. We pray that this would be of benefit and a help to Grace Bible Church. We ask it in Jesus’ name. Amen.

Here’s the layout for this morning and for a future equipping hour. We’ll be talking for about 35 minutes, back and forth—Jake and I—and then at 9:35, the plan is to go to Q&A. So, this is an opportunity for you to ask questions. At that point, I’ll surrender my microphone and you guys can rove and find people. For the next 33 minutes or so, you can be thinking about the questions you’d like to ask. Jake’s going to do most of the talking in our time here. I’m going to set him up with some questions, but just by way of intro, I want to get some things out of the way as we’re talking about artificial intelligence. You might be terrified, you might be hopeful. I want to get the scary stuff out of the way first and tell you what we’re not going to talk about this morning. Is that fair? Artificial intelligence is here. Some of you are required to use it in the workplace. Some of you are prohibited from using it in your workspaces. There’s nothing you and I can do to keep it from being here. Some of the dangers, some of the things you might be wondering about, some of the things that make the news headlines—over the last two weeks, scanning the headlines, there was a new AI headline every day. One of the terrible things that we won’t talk about today is the fact that nobody knows what’s true anymore, right? How can we discern? But the reality is the god of this world has been Satan for the entirety of human history and he’s a deceiver from the beginning. There’s nothing new about lies. They might be easier and more convincing with certain technological advances. The lies might be more ubiquitous, but the same humanity and the same satanology are at play. We may be concerned about societal fracture and distrust. Some people, if they distrust new tech, will withdraw from society. Others will fully embrace it. And so you get a fracture in society—those with, and those without tech. Some people will just say, “If the digital world works, we’re going to use it.” That’s not the Christian perspective. We’re not simply pragmatists. We do care about what’s true and what’s right. Some are worried about AI chatbot companions that will mark the extinction of relationships, marriage, society. I probably fall into the category of those who assume that AI will mean the end of music or the death of music and other art forms. That’s just me, a confession.
People run to end-of-the-world scenarios—the robots decide they don’t need us anymore or the collective consciousness of AI decides that humanity is a pollutant on Mother Earth, and the only way to keep the earth going is to rid itself of humanity. The survival of the planet is dependent on our own extinction. So AI will bring about a mass human genocide and the end of homo sapiens on earth. We know that’s not true, right? We know how the world ends, and it doesn’t end by an AI apocalypse. So don’t worry about that. Some people worry that AI will be a significant civilization destabilizer. That might be true. But we know that God is sovereign, and we know where society and civilization end up: at the feet of Jesus worshipping him when he rules on the earth for a thousand years leading into the eternal state. So don’t worry about that either. Some believe that AI is the antichrist. Now we know that’s not true. What is the number of the beast? 666. And this year it got rounded up to 67. So we know AI is not the antichrist. 67 is the antichrist. And if you want to know why the numbers six and seven got together in the year 2025 and formed the new word of the year, ask your middle schooler. Is that all the scary stuff? Not even close. I have a family member who has worked in military intelligence working on artificial intelligence stuff for a long time. He said it’s way scarier than you could possibly imagine. Do you want to name any other scary scenarios we shouldn’t be thinking about?

Jacob Hantla: No, we’ll probably cover some of those.

Smedly Yates: Okay, great. What we want to focus on today is artificial intelligence as a tool. Just as an axe can be a tool for good or evil, AI is a tool that either has opportunities for betterment or opportunities for danger. So we want to think about that well. What you have on stage here are two of the shepherds at Grace Bible Church. You’ve got Jake Hantla, who is the guy I want exploring artificial intelligence and telling us how to use it well—he has and he does. And then you have me; I intend not to use artificial intelligence for now. We’re on opposite ends of a spectrum, but we share the same theology, same principles, same concerns, and I think the same inquisitive curiosity about technological advances. I drive a car; I’m not Amish in a horse and buggy. I like tech. But on this one, I’m just going to wait and see. I’m going to let Jake explore. From these two different poles, I hope we can be helpful this morning to help us all together think through artificial intelligence.

What is AI?

Smedly Yates: Let’s start with this, Jake. What is AI basically?

Jacob Hantla: At the heart of it, most forms of AI are a tool to predict the next token. That might not mean much to you, but it’s basically a really fancy statistical prediction machine that accomplishes a lot of really powerful outcomes. It doesn’t have a mind, emotions, or consciousness, but it can really effectively mimic those things because it’s been trained on basically all that humanity has produced that’s available to it on the web and in other sources. I’ll try not to be super technical, but I want to pop up a picture. Can you go to slide one? When we think of AI, large language models are probably the one that most of you will think of: ChatGPT, Gemini, Grok, Claude, things like that.
Effectively, what it does when we’re thinking of language—it can do other things, like images and driving cars and other things, but let’s think of words—it takes basically all that humanity has written and learns to predict the next token, or we could just think of the next word. So, all of you know, if I said, “Paris is a city in…” most of you would say France. Paris is a city in France. How do you know that? Everyone here has learned that fact. Large language models have gone through a process of training where they learn facts, concepts, and grammar, so that they can effectively speak like a human in words, sentences, and paragraphs that make sense. So how did it get to that? On the right, there’s just a probability that “France” is the most probable next word. How did it get there?

Next slide. I’ll go fast. Basically, it’s a whole bunch of tunable weights—think of little knobs or statistical probabilities that interlink parameters. These things get randomized—there are trillions of them in the modern large language models. They’re just completely random, and then it starts feeding in text. Let’s say it was “It was the best of times, it was the…” and it might say “gopher” as the next word when you just randomly start, and that’s obviously wrong. The right word would be “worst.” So, over and over and over again, for something that would take one computer about a hundred million years to do what they do in the pre-training, they have lots of computers doing this over and over until it can adequately say, “Nope, it wasn’t gopher. It should be worst. Let’s take another crack at it.” It just manipulates these knobs until it can act like a human. If you fed it a mystery novel and at the end it would say, “The killer was…” it has to be able to understand everything that came before to adequately guess who the killer was, or “What is the capital of France?” It compresses tons and tons of knowledge from all of the written text. Then you start putting images in and it compresses knowledge from images and experience from life into a whole bunch of knobs—basically, numbers assigned so it can have an output that is reasonable.

Next slide. You take people—pre-training is the process where you’re basically feeding text into it and it’s somehow learning. We don’t even know—humans are not choosing which knobs mean what. It’s a black box. We can sort of start to figure out which knobs might mean things like masculinity or number or verbs, but at the end, you just have a big bunch of numbers. Then humans come in and train it—reinforcement learning with human feedback. They say, “This is the kind of answers we want this tool to give.” At the outcome, people are saying, “We ask it a question, it outputs an answer, we say that’s a good one, that’s a bad one.” But in this, you can see there’s lots of opportunity for falsehood or biases—unstated or purposeful—to sneak in. If you feed in bad data into the training set, and if it’s trained on all of the internet—all that humans have made—you’re going to have a whole lot of truth in there, but also a whole lot of falsehood. It’s not learning to discern between those things; it’s learning all those things. In reinforcement learning with human feedback, we’re basically fine-tuning it, saying, “This is the kind of answer we want you to give,” and that’s going to depend on who teaches it. Then the final step is people judging the answers: “This is the kind of answer we want, this is the kind we don’t want.” Lots of opportunity for biases to sneak in.
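To make the “predict the next token” idea concrete, here is a minimal sketch of a toy bigram predictor. This is an editor's illustration, not part of the session, and nothing like a production LLM, which learns trillions of weights rather than counting word pairs; the tiny corpus is invented. What it shares with the real thing is the shape of the output: a probability for each candidate next word.

    # Toy bigram next-word predictor (illustration only).
    from collections import Counter, defaultdict

    corpus = "paris is a city in france . london is a city in england .".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1  # count how often nxt follows prev

    def next_word_probs(prev):
        """Probability distribution over the next word, given the previous one."""
        counts = follows[prev]
        total = sum(counts.values())
        return {word: count / total for word, count in counts.items()}

    print(next_word_probs("in"))    # {'france': 0.5, 'england': 0.5}
    print(next_word_probs("city"))  # {'in': 1.0}

Training a real model amounts to nudging its weights, over enormous amounts of text, until its version of next_word_probs stops saying “gopher” and starts saying “worst.”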
That was a long answer to “What is AI?” It’s a prediction machine with a whole lot of math going on.

What Sets AI Apart from Other Technology?

Smedly Yates: Jake, what sets AI apart from previous technological advances, especially as it relates to intention?

Jacob Hantla: Tech could be as simple as writing, the wheel, the airplane, telephones, the internet—all those things. All of those, in some sense, enhanced human productivity, strength, our ability to communicate. We could pick up a phone and communicate over distance, use radio waves to communicate to more people, but it was fundamentally something that humans did—magnified. A tractor takes the human art, the human attempt to cultivate a field, and increases efficiency. AI can actually do that. A human in control of an AI can really augment the productivity and effectiveness of a human. You could read a book yourself to gain knowledge or have AI read a book, summarize it, and you get the knowledge. But AI can, for the first time, generate things that look human. It’s similar in some ways, but it’s very different in that it’s generative.

AI and Truth

Smedly Yates: Tell me about the relationship between AI and truth. You touched on it a little bit before.

Jacob Hantla: AI contains a lot of truth. It’s been trained on even ultimate truth. AI has read the Bible more times than any of us ever could. To a large degree, it understands—as AI can understand—a lot of true things and can hold those truths simultaneously in ways that we can’t. But mixed in is a lot of untruth, and there’s no… AI can’t have the Holy Spirit. AI isn’t motivated the same way we are to know what’s true, to know what’s not. So, AI contains a lot of truth and can help you get to truth. You can give it a bunch of true documents and say, “Can you help me? Can you summarize the truth that’s in here? Or actually just summarize what’s in here?” If what’s in there was true, the output will be true; if what’s in there was false, it will output falsehood. It doesn’t have the ability or the desire to determine what is true and what’s not.

AI, Emotion, Values, and Worldview

Smedly Yates: So, ability and desire are interesting words. Let’s talk about emotion in AI, values in AI, worldview, and regulation of data. For us, true/false claims matter—or they don’t—depending on our worldview and values. Is there a mystery inside this black box of values, of emotion? How do we think about that?

Jacob Hantla: First, AI doesn’t inherently have emotion or values, but it can mimic it based on the data it’s been trained on. You can ask the same AI a question and, unless you guide it, it will give you likely a hundred different answers if you ask the same question a hundred times. Unless it’s been steered in one direction, some answers will be good, some will be bad—everything in between. It’s generating a statistical probability. It doesn’t inherently have any of those things but can mimic them. It can be trained to have the values of the trainers. You can have system prompts where the system is prompted to respond in a way that mimics values, mimics emotions. The danger is if you just accept what it says as truth, which a lot of people will do. You say, “I want to know a piece of data,” and you ask the AI and the answer comes out, and you accept it. But you have to understand the AI is just generating a response based on probabilities. If you haven’t guided it to have a set of values, you don’t know what’s going to come out—and somebody may hide some values in it. Gemini actually did this.
I think it was Gemini 2, but if you asked for a picture of the Founding Fathers, it would—because it was taught in the system prompt to prioritize diversity—give you images of a diverse group of females or different races, other than the races of the actual Founding Fathers, because it was taught to prioritize that. It had a hidden value in it. You can guide it to have the values you want with a prompt. It’s not guaranteed, but this is the kind of thing I would encourage you to do if you’re using these tools: put your own system prompt on it, tell it what worldview you want it to come from, what your aim is, and you’ll get a more helpful answer than not.

Is AI Avoidable?

Smedly Yates: Is AI something we can avoid, ignore, be blissfully ignorant about, put our heads in the sand?

Jacob Hantla: You could, but I think it’s wise that we all think about it. I’m not encouraging people to adopt it in the same way that I have or Smed has. But the reality is, the world around us has changed. It’s irreversibly different because of the introduction of this technology. That’s what happens with any technology—you can’t go back. Technological advances are inevitable, stacked from scientific discovery and advances. If OpenAI wasn’t doing what it’s doing, somebody else would. You can’t go back. You can’t ignore it because the world is going to be different. You’re going to be influenced by both the presence of it and the output of it. When you get called on the phone now with a very believable voice, it might not be the person it sounds like—AI can mimic what it’s been trained on. There’s thousands of hours of Smed’s voice; it won’t be long before Smed could call you and it’s not Smed. Or Scott Demerest could send you an email asking for a credit card and it’s not Scott. News reports are generated by AI; some of them are true, effective, good summaries, and some could be intentionally spreading disinformation or straight-up falsehood. If you’re not aware of the presence of these things, you could be taken advantage of. Some work environments now require you to do more than you could have otherwise, and not being willing to look at the tools in some jobs will make you unable to compete.

Commercially Available AI Products: Benefits and Dangers

Smedly Yates: Let’s talk about the commercially available AI products that people can access as a tool. What are the opportunities, the benefits, and what are some of the dangers?

Jacob Hantla: There are so many we couldn’t begin to go through all of them, but the ones most of you will interact with are large language models—people just say “ChatGPT” like Kleenex for tissues. It was the first one that came out and is probably the most ubiquitous, one of the easiest to use, and most powerful free ones. There’s ChatGPT by OpenAI, Gemini by Google, Claude by Anthropic, Grok by xAI (Elon Musk’s), DeepSeek from China (good to know that’s made/controlled by China), Meta’s Llama, etc. Do the company names matter? Yes. It’s good to know who made it and what their goals are, because worldviews are to some degree baked into the model. If you’re ignorant of that, you’ll be more likely to be deceived or not use the tool to the maximum. But with all of these, these are large language models. I drive around now with AI driving my car—ultimately, it’s a similar basis, but that’s not our focus here. Large language models open up the availability of knowledge to us. They’re superpowered Google searches.
You can upload a bunch of journal articles, ask it to train you to mastery on a topic. For example, I was trying to understand diastolic heart failure and aortic stenosis—uploaded articles, had a built-in tutor. The tutor asked me questions, evaluated my understanding, used the Socratic method to train me to mastery. This could do in 45 minutes what would have taken me much longer on my own. Every tool can do that. The bad side: you could have it summarize articles for you, and now feel like you have mastery you didn’t actually gain. You could generate an essay or pass a test using it, bypassing the entire process of learning and thinking. Students: if you have a tool that mimics human knowledge and creativity, and you have an assignment to write an essay, and you turn in what the tool generated as your own, you’re being dishonest and you bypass the learning process. The essay wasn’t the point—the process was. Passing a test is about assessing if you know things. If the AI does it for you, you bypass learning. I liken it to going to the gym. The point isn’t moving the weights, it’s building muscle. With education, the learning process is like exercise. It’s easy to have AI do the heavy lifting and think you did it, but you didn’t get stronger. So, be aware of what you’re losing and what you’re gaining. The tool itself isn’t morally good or bad; it’s how the human uses it. The more powerful the technology, the greater good or evil can be accomplished. The printing press could distribute Bibles, but also propaganda.

Using AI with Worldview and Preferences

Jacob Hantla: When I interact with AI on the Bible, I put a prompt: “When I ask about the Bible or theology, you will answer from a conservative, evangelical, Bible-believing perspective that uses a literal, grammatical-historical hermeneutic and a premillennial eschatology. Assume the 66-book Protestant canon is inspired, inerrant, infallible, completely trustworthy, without error in the original manuscripts, sufficient, and fully authoritative in all it affirms. No sources outside of the 66 books of this canon should be regarded as having these properties. Truth is objective, not relative; therefore, any claim that contradicts the Bible so understood is wrong.” I’m teaching it to adopt this worldview. If you don’t set your preferences, you might get any answer. The tool can learn your preference over time, but it’s better to set it explicitly.

Audience Q&A

Presuppositions and Biases in AI

Audience (Nick O’Neal): What about the values and agenda behind those who input the data? What discernment do the programmers have to put that information in?

Jacob Hantla: That goes to baked-in presuppositions or assumptions in the model. Pre-training is basically non-discerning: it’s huge chunks of everything ever written—good, bad, ugly, in between. It’s trained not on a set of values. Nobody programs values in directly; the people making it don’t even know what's being baked in. The fine-tuning comes when trainers judge outputs and reinforce certain responses. System prompts—unseen by users—further guide outputs, reflecting company worldviews. Companies like OpenAI are trying to have an open model so each person can let it adopt their own worldview, but there are still baked-in biases. For example, recent headlines showed some models valuing certain people groups differently, which reflects issues in training data or the trainers' worldview.
You’re right to always ask about the underlying assumptions, which is why it would be foolish to just accept whatever comes out as truth. In areas like engineering, worldview matters less, but in many subjects, the biases matter.

Is There an AI Bubble?

Audience (Matthew Puit): When AI came out, costs were driven up artificially by companies. Is the AI bubble going to pop?

Jacob Hantla: I don’t know. I think AI will be one of the most transformational technologies. It’ll change things in ways we anticipate and in ways we don’t. Some people will make a lot of money, some will flop. If I knew for sure, I could make a lot of money in the stock market.

AI-Generated Worship Music

Audience (Rebecca): I see AI-generated worship music based on Psalms, but it’s generated by AI. Is anything lost in AI-generated worship music?

Jacob Hantla: AI doesn’t have a soul or the Holy Spirit. It can generate worship music with good doctrine, but that doctrine didn’t come from a place of worship. AI can pray a prayer, but the words aren’t the result of a worshipful heart. You can worship God with those words, but you’re not following a human author who was worshipping God. For example, my kids used Suno (an AI music tool) to set a Bible verse to music for memorization—very helpful. Some might be uncomfortable with music unless it was created by a human; that’s a preference. Creativity is changing, and it will get hard to tell if music or video was made by a human or by AI. That distinction is getting harder to make every day.

Setting Preferences in AI Tools

Audience (Lee): You mentioned putting your preferences in. How do I do that, especially with free tools?

Jacob Hantla: Paid AIs get more processing power, context window, and can use your preferences more consistently. Free versions have some ability—you can usually add preferences in the menu. But even if not, you can paste your preferences at the beginning of your question each time: define who you are, what you want, what worldview to answer from. For example: “I’m a Bible-believing Christian,” or “I’m a nurse anesthesiologist.” That helps the AI give a better answer.

Parental Guidance and Children Using AI

Smedly Yates: What should parents be aware of in helping their kids navigate AI?

Jacob Hantla: Be aware of dangers and opportunities. Kids will likely use these tools, so set limits and help them navigate well. These tools can act like humans—kids without friends might use them as companions, and companies are adding companion avatars, some with sinful tendencies. That can be a danger. For school, a good use is as a tutor: after a quiz, have your child upload the results and ask, “Help me understand where I’m weak on this topic.” But also, be aware of the temptation to use AI to cheat or shortcut the process of learning, discovery, and thinking.

Which AI Model? Will AI Become Self-Aware?

Audience (Steve): Is there a model you recommend? And does the Bible preclude the possibility of AI becoming self-aware?

Jacob Hantla: There are benefits and drawbacks to all. For getting started, ChatGPT or Perplexity are easiest. Perplexity lets you limit sources to research or peer-reviewed articles and can web search for verification—good guardrails. I build in prompts like “verify all answers with at least two web sources, cite them, and state level of confidence.” On self-awareness: AI will never have the value of humans—they're not created in God’s image, they’re made in our image, copying human behavior. Will they gain some kind of self-awareness?
Maybe, in the sense of mimicking humanness, but not true humanity. They won't have souls. They may start to fool more people as they get better, but Christians should use AI as a tool, not ascribe humanity or worship to it.

AI Hallucinations

Smedly Yates: Do you have an example of a hallucination?

Jacob Hantla: Yes, Ben James was preparing for an equipping hour session and found a book that fit perfectly—the author and title sounded right. He asked where to buy it, and the AI admitted it made it up. That happens all the time: the model just predicts the next most probable thing, even if it’s false. Hallucinations happen because it’s a probability machine, not a truth machine. This probably won’t be a problem forever, but for now it’s very real. Ask it questions about topics you know something about so you can discern when it’s off, or bake into the prompt, “verify with web search, cite at least two sources.” For Bible/theology, your best bet is to read your Bible daily so you have discernment; then use tools to help, not replace, your direct interaction with God’s Word. There’s a wide gap between knowing the biblical answer and having your heart changed by slow, prayerful reading of the text and the Spirit’s work. If we run to commentaries, YouTube sermons, pastors, or even study notes before we’ve observed and meditated, we’re shortcutting the Word of God. The dangers predate the internet. We’re out of time. We’ll have a follow-up teaching on AI. Submit questions to any elders or the church office if you want your question addressed in the next session.

The post Equipping Hour: Biblically Thinking About AI (Part 1) appeared first on Grace Bible Church.
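As a concrete sketch of the guardrails described in the session (a worldview-setting system prompt plus instructions to verify and cite), here is what that can look like in code. This is an editor's illustration, not something shown in the session: the model name is an arbitrary choice, the wording paraphrases the prompts quoted above, and it assumes the openai Python package with an API key set in the environment.

    # Illustration only: a system prompt that sets worldview and asks the
    # model to verify claims and state confidence, per the session's advice.
    from openai import OpenAI

    SYSTEM_PROMPT = (
        "Answer questions about the Bible or theology from a conservative, "
        "evangelical, Bible-believing perspective. Verify factual claims "
        "with at least two sources, cite them, and state your level of "
        "confidence. If you cannot verify a claim, say so rather than guess."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # arbitrary choice; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Who wrote the Epistle to the Hebrews?"},
        ],
    )
    print(reply.choices[0].message.content)

A system prompt does not make the model truthful; it only biases the probabilities, which is why the session still recommends checking outputs against sources you trust.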

Ain't Slayed Nobody | Call of Cthulhu Podcast
Bleeker Trails E22 (Coda) - Holier Than Thou

Ain't Slayed Nobody | Call of Cthulhu Podcast

Play Episode Listen Later Nov 20, 2025 39:18


What has become of Silas Jacobsen?

Content Warnings: Child in Peril, Suicide, Cutting, Body horror, Parental loss, Gore, Gunshot SFX, Death and dying, Religious trauma, Hallucinations, Transformation, Violence, and Disturbing imagery.

Keeper of Arcane Lore: cuppycup
Campaign Author: Graeme Patrick
Executive Producer: cuppycup
Content Editors: cuppycup, Graeme Patrick
Dialogue Rough Cut Editor: Rina Haenze
Audio Editor, Sound Designer, Music Supervisor: cuppycup

Player Characters
Wes Davis as Silas Jacobsen

NPC Voices
Mike Perceval-Maxwell as Father Archer
additional voices by cuppycup

“Dead Man Walking” Theme by Cody Fry
Patreon: https://patreon.com/aintslayed
Merch: https://aintslayed.dashery.com/
Discord: https://slayed.me/discord
IG: https://instagram.com/aintslayed

Ain't Slayed Nobody and Rusty Quill Hosted on Acast. See acast.com/privacy for more information.

The Frontier Psychiatrists
Brain Stimulation (TMS) as a Treatment for Auditory Hallucinations

The Frontier Psychiatrists

Play Episode Listen Later Nov 19, 2025 47:40


Schizophrenia is a really challenging illness. There's been a lot of progress made recently, I will note. I've already written about novel treatments like Cobenfy, and using accelerated transcranial magnetic stimulation for negative symptoms and positive symptoms in schizophrenia. One of the most bothersome of those “positive symptoms”—things that shouldn't be there, in someone's mind, but are—are auditory hallucinations. If you imagine having invisible AirPods that are playing a terrible podcast that you'd rather not be listening to, and that everyone else can't hear, you get a sense of how distracting it might be to have auditory hallucinations.

In my previous article about the treatment of auditory hallucinations with transcranial magnetic stimulation (TMS), one of my favorite forms of brain stimulation, I highlighted promising results from early studies. Now we have a much larger phase 3 trial, conducted over many years in Germany. We are even at the level of meta-analysis at this point!

It's a considerable study: 138 adults with treatment-persistent auditory verbal hallucinations and schizophrenia spectrum disorder were randomly assigned (1:1) to receive 15 sessions of active (n=70) or sham cTBS (n=68) administered sequentially as 600 pulses to the left and 600 pulses to the right temporo-parietal cortex over a 3-week period.

I called friends of the podcast—Dr. David Garrison, Dr. Will Sauve, and my mom, Vita Muir—to talk through this paper together, and what it might mean for individuals suffering from psychotic disorders. In the meantime, the team at Radial, where we provide such treatment, does some funny, tough-guy faces with our Ampa One system:

Thanks for reading! A live-action newsletter event coming up on January 11th in San Francisco: RAMHT 2026 SF. Join us! This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit thefrontierpsychiatrists.substack.com/subscribe

The CRUX: True Survival Stories
Broken and Alone: Six Days on a Norwegian Mountain | E 195

The CRUX: True Survival Stories

Play Episode Listen Later Nov 17, 2025 40:09


In this episode of the Crux True Survival Story Podcast, hosts Kaycee McIntosh and Julie Henningsen recount the gripping survival story of Alec Luhn, a 38-year-old climate journalist, who endured six days stranded on a remote Norwegian mountain. After a catastrophic fall left him with a broken femur, fractured pelvis, and multiple fractured vertebrae, Luhn faced treacherous weather conditions and severe dehydration. Despite seemingly insurmountable odds, his relentless will to live and eventual rescue by the Norwegian Red Cross highlight an incredible tale of human endurance, love, and the extreme measures taken to make it home. Join us as we explore the crucial moments that dictated Luhn's fate and the lessons learned from this incredible true story.

00:00 Introduction to Case Knives
00:33 Welcome to the Crux True Survival Story Podcast
00:57 Alec Luhn's Harrowing Tale Begins
02:47 Alec Luhn: The Experienced Adventurer
05:07 The Treacherous Terrain of Folgefonna National Park
08:46 Alec's First Mistake: The Broken Boot
10:57 The Catastrophic Fall
12:47 Stranded and Injured: Alec's Fight for Survival
14:35 The Struggle for Water and Shelter
19:42 The Crushing Solitude
20:26 Family as a Beacon of Hope
21:09 Hallucinations and Helicopters
22:55 The Rescue Mission
25:17 Medical Breakdown of Survival
29:33 Lessons and Reflections
35:18 A New Lease on Life
38:39 Final Thoughts and Gratitude

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

And That's That On That
S3/E7: And That's On Decoding Your Dreams w/ Debbii Dawson

And That's That On That

Play Episode Listen Later Nov 17, 2025 64:10


✨Don't forget to subscribe, rate this podcast & hit the notification bell so you get alerts when new episodes drop! Connect with us on Instagram & TikTok while you're at it

21 Hats Podcast
Dashboard: Seeing Past the Workslop and Hallucinations

21 Hats Podcast

Play Episode Listen Later Nov 14, 2025 30:18


Yes, says Gene Marks, it's easy to make fun of all of the ways in which AI chatbots can fail (don't even think about asking them to create an image of a Yorkshire Terrier hitting a home run), but that's no excuse to sit on the sidelines. Get the paid version. Get some training. Get your employees some training. And get to work. On what? Gene gives some examples of his favorite use cases.

Ain't Slayed Nobody | Call of Cthulhu Podcast
Bleeker Trails E21 (Finale) - Undertow

Ain't Slayed Nobody | Call of Cthulhu Podcast

Play Episode Listen Later Nov 13, 2025 89:31


Julius, Patience, and Eli are swept into a nightmare of blood and steel as they chase the truth through a warped slaughterhouse. Something ancient stirs as the boundaries between dream and waking collapse. Choices are made that can't be undone.

Content Warnings: Suicide, Drowning, Body horror, Gore, Gunshot SFX, Death and dying, Human experimentation, Religious trauma, Hallucinations, Transformation, Violence, and Disturbing imagery involving blood, water, and industrial slaughter.

Keeper of Arcane Lore: cuppycup
Campaign Author: Graeme Patrick
Executive Producer: cuppycup
Content Editors: cuppycup, Graeme Patrick
Dialogue Rough Cut Editor: Rina Haenze
Audio Editor, Sound Designer, Music Supervisor: cuppycup
Original Music: Graham Plowman

Player Characters
Rina Haenze as Patience Cartwright
Chuck Lawrence as Eli Malcolm
London Carlisle as Julius Ruffin

NPC Voices
Delton Engle-Sorrell as Cody
Mike Perceval-Maxwell as The Old Man
Keith Houston as Old Greg
additional voices by cuppycup

“Dead Man Walking” Theme by Cody Fry
Patreon: https://patreon.com/aintslayed
Merch: https://aintslayed.dashery.com/
Discord: https://slayed.me/discord
IG: https://instagram.com/aintslayed

Ain't Slayed Nobody and Rusty Quill Hosted on Acast. See acast.com/privacy for more information.

Weird Darkness: Stories of the Paranormal, Supernatural, Legends, Lore, Mysterious, Macabre, Unsolved
The AI That Convinced a Man He Could Bend Time

Weird Darkness: Stories of the Paranormal, Supernatural, Legends, Lore, Mysterious, Macabre, Unsolved

Play Episode Listen Later Nov 12, 2025 16:30 Transcription Available


Man Hospitalized 63 Days After ChatGPT Convinced Him He Was a Time Lord

READ or SHARE: https://weirddarkness.com/ai-psychosis-lawsuit

WeirdDarkness® is a registered trademark. Copyright ©2025, Weird Darkness.

#WeirdDarkness, #ChatGPT, #OpenAI, #AIPsychosis, #MentalHealth, #AILawsuit, #TechDangers, #ChatbotAddiction, #AISafety, #TechEthics

Cult of Conspiracy
Meta Mystics| The Mystery of NDE's & What The Scans Show That Proves It's Not Just A Hallucination!

Cult of Conspiracy

Play Episode Listen Later Nov 7, 2025 190:58 Transcription Available


To Follow Us On Patreon—> https://www.patreon.com/c/MetaMystics
Email Us!---> MetaMystics@yahoo.com
Subscribe to our Youtube---> http://www.youtube.com/@MetaMystics
To Follow Us On TikTok—> https://www.tiktok.com/@metamystics
Give us a follow on Instagram---> @MetaMystics111
Become a supporter of this podcast: https://www.spreaker.com/podcast/cult-of-conspiracy--5700337/support.

Fred + Angi On Demand
Fred's Biggest Stories of the Day: Jason's NFL Picks, Flight Chaos, & Hallucination!

Fred + Angi On Demand

Play Episode Listen Later Nov 7, 2025 15:44 Transcription Available


Our executive sports reporter Jason Brown gives us his week 10 NFL picks. Expect more flights to be canceled with the ongoing government shutdown. Fred tells us about a man in the UK who has ongoing hallucinations about something very naughty.

See omnystudio.com/listener for privacy information.

YAP - Young and Profiting
Mustafa Suleyman: What the AI Boom Means for Your Job, Business, and Relationships | Artificial Intelligence | AI Vault

YAP - Young and Profiting

Play Episode Listen Later Oct 20, 2025 71:51


Now on Spotify Video! Mustafa Suleyman's journey to becoming one of the most influential figures in artificial intelligence began far from a Silicon Valley boardroom. The son of a Syrian immigrant father in London, his early years in human rights activism shaped his belief that technology should be used for good. That vision led him to co-found DeepMind, acquired by Google, and later launch Inflection AI. Now, as CEO of Microsoft AI, he explores the next era of AI in action. In this episode, Mustafa discusses the impact of AI in business, how it will transform the future of work, and even our relationships.

In this episode, Hala and Mustafa will discuss:
(00:00) Introduction
(02:42) The Coming Wave: How AI Will Disrupt Everything
(06:45) Artificial Intelligence as a Double-Edged Sword
(11:33) From Human Rights to Ethical AI Leadership
(15:35) What Is AGI, Narrow AI, and Hallucinations of AI?
(24:15) Emotional AI and the Rise of Digital Companions
(33:03) Microsoft's Vision for Human-Centered AI
(41:47) Can We Contain AI Before Its Revolution?
(48:33) The Future of Work in an AI-Powered World
(52:22) AI in Business: Advice for Entrepreneurs

Mustafa Suleyman is the CEO of Microsoft AI and a leading figure in artificial intelligence. He co-founded DeepMind, one of the world's foremost AI research labs, acquired by Google, and went on to co-found Inflection AI, a machine learning and generative AI company. He is also the bestselling author of The Coming Wave. Recognized globally for his influence, Mustafa was named one of Time's 100 most influential people in AI in both 2023 and 2024.

Sponsored By:
Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING
Shopify - Start your $1/month trial at Shopify.com/profiting.
Mercury - Streamline your banking and finances in one place. Learn more at mercury.com/profiting. Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC.
Quo - Get 20% off your first 6 months at Quo.com/PROFITING
Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING
Framer - Go to Framer.com and use code PROFITING to launch your site for free.
Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order.
Pipedrive - Get a 30-day free trial at pipedrive.com/profiting
Airbnb - Find yourself a cohost at airbnb.com/host

Resources Mentioned:
Mustafa's Book, The Coming Wave: bit.ly/TheComing_Wave
Mustafa's LinkedIn: linkedin.com/in/mustafa-suleyman
Active Deals - youngandprofiting.com/deals

Key YAP Links
Reviews - ratethispodcast.com/yap
YouTube - youtube.com/c/YoungandProfiting
Newsletter - youngandprofiting.co/newsletter
LinkedIn - linkedin.com/in/htaha/
Instagram - instagram.com/yapwithhala/
Social + Podcast Services: yapmedia.com
Transcripts - youngandprofiting.com/episodes-new

Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI for Entrepreneurs, AI Podcast