Perception in the absence of external stimulation that has the qualities of real perception
While I'm out chasing a self-supported FKT on the Appalachian Trail, we're throwing it back to one of the weirdest solo episodes yet. This one's all about hallucinations—those sleep-deprived visions that show up when the miles get long and the sleep gets short. I break down my top five all-time hallucinations from FKTs and multi-day races like Nolan's 14, Cocodona, the Arizona Trail, and the Colorado Trail.
Chapters
00:00 Introduction and Context of the Podcast
02:11 Hallucinations from Sleep Deprivation
07:22 First Hallucination: The Safari Guy
11:59 Second Hallucination: Witches in the Woods
19:32 Third Hallucination: The Robot in the Cliff
23:13 Fourth Hallucination: Black Bears and Stumps
27:17 Conclusion and Future Plans
You'll meet:
The Safari Man who's been following me for years
Witches in the forest waiting to sweep me up
A three-story robot stuck in a rock wall
A pirate ship in the desert for some reason
And chicken feet sprouting from the Earth
We also touch on why hallucinations happen, what triggers them, and how I'm turning them into AI-generated images—because if I have to see these things, so do you.
Sponsors: Janji • Garage Grown Gear • CS Coffee • Mount to Coast
Support the Appalachian Trail FKT attempt + the show: Patreon.com/freeoutside
Subscribe to Substack: http://freeoutside.substack.com
Support this content on patreon: HTTP://patreon.com/freeoutside
Buy my book "Free Outside" on Amazon: https://amzn.to/39LpoSF
Email me to buy a signed copy of my book, "Free Outside" at jeff@freeoutside.com
Watch the movie about setting the record on the Colorado Trail: https://tubitv.com/movies/100019916/free-outside
Website: www.Freeoutside.com
Instagram: thefreeoutside
Facebook: www.facebook.com/freeoutside
"Hallucination" refers to a large language model (LLM) producing information that sounds convincing but is fabricated out of thin air when it answers a question. Why do hallucinations happen? OpenAI's research report points out that hallucination is a structural issue of language models themselves. They work by repeatedly predicting the next most plausible word to generate content. When the training data contains no fixed answer for a question, the model, in order to earn a "reward" (that is, to produce a response), makes up an answer, much like guessing on a multiple-choice test. Newer models such as GPT-5 have reduced hallucinations by improving their scoring mechanisms, but because hallucination is a structural problem inherent to language models, it cannot be eliminated completely. How can users reduce hallucinations? Users can lower the likelihood of hallucinations in the following ways:
* Limit the data scope: give the model specific documents (such as a paper) and ask it to answer only within that material.
* Ask clearly: describe the background and the question explicitly, and state the kind of answer you expect, for example "don't make things up" or "I need a precise answer."
* Request fact-checking: ask the model to fact-check its own output, or to provide its sources.
How can you get a large language model to be genuinely creative?
* Combine specific contexts: connect two seemingly unrelated concepts or fields (for example, Buddhism and artificial intelligence) and guide the model to analyze them from a novel angle.
* Ask collaboratively: when the question isn't clear enough, let the model ask questions back to help narrow the scope and clarify the problem together.
* Organize and analyze information: use the model to synthesize and summarize information from multiple sources (such as company financial reports and online forum comments) for a more concise and comprehensive answer.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit leesimon.substack.com/subscribe
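To make the "limit the data scope" tip concrete, here is a minimal sketch assuming the OpenAI Python SDK (openai >= 1.0); the model name, file name, and question are placeholders rather than anything the episode prescribes.

```python
# Minimal sketch of scope-restricted prompting, assuming the OpenAI Python
# SDK (openai >= 1.0). Model name, file name, and question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paper = open("paper.txt", encoding="utf-8").read()  # the document to answer from

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder; any chat model works here
    temperature=0,    # a lower temperature discourages improvisation
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the document provided by the user. "
                "If the document does not contain the answer, say so "
                "instead of guessing, and cite the passage you relied on."
            ),
        },
        {
            "role": "user",
            "content": f"Document:\n{paper}\n\nQuestion: What method does the paper propose?",
        },
    ],
)
print(response.choices[0].message.content)
```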
For individuals living with schizophrenia, the world can be populated by voices, visions, or other sensory experiences that are intensely real yet imperceptible to others. These hallucinations, which are a defining feature of schizophrenia, are not mere figments of imagination but rather vivid, often distressing, and deeply influential aspects of their daily reality. Today's exploration aims to shed light on the phenomenon of hallucinations in schizophrenia, going beyond a diagnostic perspective to consider the actual experience, and the profound impact on an individual's sense of self and connection to the world around them. Guest host, Rachel Star Withers, a diagnosed schizophrenic, and guest Christopher Grant have a real, raw discussion about what experiencing schizophrenia hallucinations is like. Please Note: Our regular host, Gabe Howard, will be back next week! "Just because something is in the uncanny valley, it doesn't always mean it's terrifying or deserves demonization. I'm not blind to the fact that it's off-putting to people through the perspective of a neurotypical person. I'm glad to say today most of my hallucinations are pretty, I don't even want to say friendly. They're just content, like a neutral energy. But, for a long time, I was constantly in a state of like terror, like fight or flight terror when I'd experience it." ~Christopher Grant aka XO RAD Magical Our guest, Christopher Grant, is a Mi'gmaw filmmaker from Pabineau First Nation, New Brunswick. Inspired by the intense absurdity of life and death, Chris animates to express the humour and terror of existence. He has worked with the NFB's Hothouse project, and his work has been exhibited at galleries, museums and film festivals internationally. He is also known for his role on TikTok as XORADMAGICAL, where he creates art and answers questions all relating to his experiences with schizophrenia in daily life. Our guest host, Rachel Star Withers, (www.rachelstarlive.com) is an entertainer, international speaker, video producer, and schizophrenic. She has appeared on MTV's Ridiculousness, TruTV, NBC's America's Got Talent, Marvel's Black Panther, TUBI's #shockfight, Goliath: Playing with Reality, and is the host of the Healthline podcast "Inside Schizophrenia". She grew up seeing monsters, hearing people in the walls, and having intense urges to hurt herself. Rachel creates videos documenting her schizophrenia, ways to manage, and letting others like her know they are not alone and can still live an amazing life. She has created a kids' mental health comic line, The Adventures of ____. (Learn more at this link: https://www.amazon.com/Adventures-Fearless-Unstoppable-Light-Ambitious/dp/B0FHWK4ZHS ) Fun Fact: She has wrestled alligators. The Inside Schizophrenia podcast is available on your favorite podcast player. Learn more about your ad choices. Visit megaphone.fm/adchoices
Can AI be used to improve patient experiences? This week, Technology Now explores how AI is being used to streamline data collection in the healthcare industry, how data should be treated to avoid bias in AI, and the benefits this brings to patients. Derek B. Howard, Programme Manager for the HPE Digital Health Foundry Programme, tells us more.
This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations.
About Derek B. Howard: https://www.linkedin.com/in/derek-howard1/
Sources:
https://www.britannica.com/technology/MYCIN
https://www.mghlcs.org/projects/dxplain
https://www.cedars-sinai.org/discoveries/ai-ascendance-in-medicine.html
Open Source bi-weekly convo w/ Bill Gurley and Brad Gerstner on all things tech, markets, investing & capitalism. This week, guest host Altimeter's Apoorv Agrawal explores how OpenAI is reshaping enterprise with Sherwin Wu, Head of Engineering, OpenAI Platform, and Olivier Godement, Head of Product, OpenAI Platform. From T-Mobile's AI voice support to Amgen's drug breakthroughs to Los Alamos' air-gapped supercomputer—this episode dives into the real world of AI at scale. Enjoy another episode of BG2!
Timestamps:
(00:00) Intro
(01:50) OpenAI's Enterprise Mission: Beyond ChatGPT
(06:00) Case Study: T-Mobile Voice & Support
(11:30) Case Study: Amgen Accelerating Drug Development
(13:45) Case Study: Los Alamos National Lab
(17:00) Why 95% of AI Deployments Fail?
(20:30) Physical vs Digital Autonomy: Scaffolding & Infrastructure
(26:00) GPT-5: Release, Benchmarks vs Behavior
(30:00) GPT-5 Feedback: Instruction Following, Hallucinations, Code Quality
(33:00) Multimodality: Text, Voice, and Video
(35:30) Audio: Realtime API vs Stitched Audio
(38:00) Model Customization & Reinforcement Fine-Tuning (RFT)
(43:00) Rapid Fire: Long/Short Picks
(1:03:00) Highlights and Lowlights @ OpenAI
Show Notes:
T-Mobile Partnership: https://www.t-mobile.com/news/business/t-mobile-launches-intentcx-with-openai
Amgen Partnership: https://openai.com/index/gpt-5-amgen/
Los Alamos Partnership: https://www.lanl.gov/media/news/0130-open-ai
MIT AI Report: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
Produced by Dan Shevchuk
Music by Yung Spielberg
Available on Apple, Spotify, www.bg2pod.com
Follow:
Brad Gerstner @altcap https://x.com/altcap
Bill Gurley @bgurley https://x.com/bgurley
BG2 Pod @bg2pod https://x.com/BG2Pod
Apoorv Agrawal @apoorv03 https://x.com/apoorv03
Sherwin Wu @sherwinwu https://x.com/sherwinwu
Olivier Godement @oliviergodement https://x.com/oliviergodement
Ultramarathoner Patrick Hallahan returns to the Co-Movement Gym Podcast after an epic finish at the Bigfoot 200. In Part 1 of this two-part episode, Patrick walks us through the grueling 200-mile race across Washington's Cascade Mountains, where bonks, hallucinations, wildlife encounters, and one hour of total sleep tested his limits. We dive deep into fueling struggles, mindset shifts, and the lessons only a 200-miler can teach. Patrick is an ultramarathon athlete, residing in Syracuse, NY, who recently completed the Bigfoot 200, a grueling 200-mile trail race in Washington state. He finished 22nd overall, in a time of 2d:17h:54m:56s – an incredible performance.
Chapters
00:00 Welcome back Patrick!
01:13 Bigfoot 200
05:04 Training Climbing vs Descents
06:21 GI Issues & Pacing Lessons
13:20 Goggins
21:24 Night On The Ridge
24:23 The Burger Strategy
32:42 Mountain Lions
37:07 Heat, Hallucinations & Grind
49:45 Final Push & Key Takeaways
About Us
The Co-Movement Gym Podcast is supported by Native Path Supplements and Lombardi Chiropractic.
From niggles to the big picture: we cover injury coping, recovery tools, and how to adjust training and goals—plus a look ahead to Hallucination 100.
Paul Watkins set out alone into the Arctic wilderness. No food. No shelter. No backup plan. What started as an adventure became a near-death fight to survive. Eight days. Frostbite. Starvation. Isolation. Hallucinations so vivid he couldn't tell what was real. He saw trucks and workers that weren't there. He heard fighter jets that never came. He believed a woman was following him: the polar explorer's "third man effect."
In 2024, Damien took on the first edition of an ultra-trail monster: the SwissPeak 660. The principle is simple: cover 330 kilometres to reach, and then take the start of, the SwissPeak 360, the historic "little" sister of the 660 format. The numbers are staggering:
660 km through the Valais, in Switzerland, from Lake Geneva to the Rhône glacier
roughly 49,000 to 50,000 m of elevation gain (six times Everest)
55 passages above 3,000 m of altitude
Since then, the 660 has grown even bigger: the latest course reaches 700 kilometres. It is hard to grasp the level of commitment that such a distance, such a long effort, demands. We talk about nutrition and hydration, race management, fatigue, hallucinations, and powerful human moments of the kind that only happen on events like this. An episode that will delight anyone fascinated by ultra-endurance (and who may still be hesitating to take on these gargantuan formats!). Happy listening.
A new research paper from OpenAI asks why large language models like GPT-5 and chatbots like ChatGPT still hallucinate, and whether anything can be done to reduce those hallucinations. In a blog post summarizing the paper, OpenAI defines hallucinations as plausible but false statements generated by language models, and it acknowledges that despite improvements, hallucinations remain a fundamental challenge for all large language models, one that will never be completely eliminated. Learn more about your ad choices. Visit podcastchoices.com/adchoices
For individuals living with schizophrenia, the world can be populated by voices, visions, or other sensory experiences that are intensely real yet imperceptible to others. These hallucinations, which are a defining feature of schizophrenia, are not mere figments of imagination but rather vivid, often distressing, and deeply influential aspects of their daily reality. Today's exploration aims to shed light on the phenomenon of hallucinations in schizophrenia, going beyond a diagnostic perspective to consider the actual experience, and the profound impact on an individual's sense of self and connection to the world around them. Host Rachel Star Withers, a diagnosed schizophrenic, and guest Christopher Grant have a real, raw discussion about what experiencing schizophrenia hallucinations is like. “By the time I was in university, I started hearing whispering in the vents of my apartment. Then it suddenly became more focused. Like Command, go take a walk right now or this will happen to you. It continued on and it intensified all the way until as recently as 2022 I've had a lot of ups and downs. I've had a lot of moments where I hit rock bottom personally and then had to find a way to swim back up from that.” — Christopher Grant “We would have classically called it spirits. My grandfather, before he died, when he heard that I was going through the symptoms of schizophrenia, he said, there's no word for it in Mi'kmaq. They just would have said, you're going through a test of the spirits. You're going on this experience where you have to, learn yourself, learn your strengths, learn your boundaries, and don't give up no matter how turbulent it gets.” — Christopher Grant Our Guest, Christopher Grant, is a Mi'gmaw filmmaker from Pabineau First Nation, New Brunswick. Inspired by the intense absurdity of life and death, Chris animates to express the humour and terror of existence. He has worked with the NFB's Hothouse project, and his work has been exhibited at galleries, museums and film festivals internationally. He is also known for his role on TikTok as XORADMAGICAL, where he creates art and answers questions all relating to his experiences with schizophrenia in daily life. Our host, Rachel Star Withers, (Link: www.rachelstarlive.com) is an entertainer, international speaker, video producer, and schizophrenic. She has appeared on MTV's Ridiculousness, TruTV, NBC's America's Got Talent, Marvel's Black Panther, TUBI's #shockfight, Goliath: Playing with Reality, and is the host of the HealthLine podcast “Inside Schizophrenia”. She grew up seeing monsters, hearing people in the walls, and having intense urges to hurt herself. Rachel creates videos documenting her schizophrenia, ways to manage, and letting others like her know they are not alone and can still live an amazing life. She has created a kid's mental health comic line, The Adventures of ____. (Learn more at this link: https://www.amazon.com/Adventures-Fearless-Unstoppable-Light-Ambitious/dp/B0FHWK4ZHS ) Fun Fact: She has wrestled alligators. Learn more about your ad choices. Visit megaphone.fm/adchoices
Ben Luria is the CEO of Hirundo, which recently raised $8 million in seed funding to advance its "machine unlearning" technology. This technology is designed to address the issue of AI models generating false or misleading information ("hallucinations") by enabling them to "forget" bad, biased, or malicious data. In this episode, Luria joins host Heather Engel to discuss the company's mission and technology. • For more on cybersecurity, visit us at https://cybersecurityventures.com
The Lost City of Un'Goro miniset "Day of Rebirth" has been announced! And I play Protoss Priest on the ladder. You can find the deck import code below the following contact links. You can follow me @blisterguy on Twitch, Bluesky, and Youtube. Join our Discord community here or at discord.me/blisterguy. You can support this podcast and my other Hearthstone work at Patreon here.
# 2x (1) Catch of the Day
# 2x (1) Hallucination
# 2x (1) Power Word: Shield
# 2x (2) Birdwatching
# 2x (2) Photon Cannon
# 2x (2) Sentry
# 1x (3) Chillin' Vol'jin
# 2x (3) Trusty Fishing Rod
# 2x (3) Void Ray
# 1x (4) Elise the Navigator
# 1x (4) Narain Soothfancy
# 1x (4) Nightmare Lord Xavius
# 2x (5) Chrono Boost
# 2x (5) Resuscitate
# 1x (6) Bob the Bartender
# 1x (7) Endbringer Umbra
# 1x (8) Artanis
# 1x (9) Fyrakk the Blazing
# 2x (12) Mothership
# AAECAcWaBgjZwQbX0gaT9AbDgwesiAfblweCmAf1mAcLzsAGjMEGi9YGi/QGkPQGmPQGs/QGxfgGyvgGtZYH0q8HAAA=
Be sure to download our "READY. AAIM. PHIRE!" takeaway sheet for all the key insights from this episode. Click the link below: https://tinyurl.com/mpbarn29 AI tools like ChatGPT are transforming how we work - but are they doing more harm than good? This week, Phil and Burt unpack the growing trend of overusing and over-trusting AI in the workplace. From fabricated facts to blind faith in chatbots, we'll explore the risks that come with leaning too heavily on artificial intelligence. Are your employees using AI as a helpful assistant - or hiding behind it? Tune in for real-world examples, red flags to watch for, and practical steps every employer should take to stay protected.
00:00 – Welcome & opening remarks
01:25 – Live from SHRM Conference in Orlando
03:20 – Why AI "hallucinates" and how Amy Hart helps
04:50 – Playing the Reliable, Reasonable, or Ridiculous AI game
07:10 – Unlimited PTO: pros and cons
08:34 – Asking older employees about retirement
11:21 – Should remote workers get more PTO?
13:20 – Lawyer on the Clock: THC in the workplace
18:19 – Cracker Barrel logo backlash and DEI debate
24:30 – Illinois AI discrimination law
31:12 – Ready, Aim, Fire key takeaways download
32:07 – Upcoming Employment Law Update events
In today’s deep dive, a Central Illinois attorney was sanctioned for using A.I. in a brief with hallucinated cases.
A predictive programming sci-fi novel/An endless void demands souls Patreon (Get ad-free episodes, Patreon Discord Access, and more!) https://www.patreon.com/user?u=18482113 PayPal Donation Link https://tinyurl.com/mrxe36ph MERCH STORE!!! https://tinyurl.com/y8zam4o2 Amazon Wish List https://www.amazon.com/hz/wishlist/ls/28CIOGSFRUXAD?ref_=wl_share Dead Rabbit Radio Archive Episodes https://deadrabbitradio.blogspot.com/2025/07/ episode-archive.html Archive https://archive.ph/UELip Help Promote Dead Rabbit! Dual Flyer https://i.imgur.com/OhuoI2v.jpg "As Above" Flyer https://i.imgur.com/yobMtUp.jpg “Alien Flyer” By TVP VT U https://imgur.com/gallery/aPN1Fnw “QR Code Flyer” by Finn https://imgur.com/a/aYYUMAh Links: Jeffrey Epstein Episodes https://deadrabbitradio.blogspot.com/2025/08/jeffrey-epstein-episodes.html Space Relations https://en.wikipedia.org/wiki/Space_Relations Donald Barr https://en.wikipedia.org/wiki/Donald_Barr Dalton School https://en.wikipedia.org/wiki/Dalton_School William Barr https://en.wikipedia.org/wiki/William_Barr Office of Strategic Services https://en.wikipedia.org/wiki/Office_of_Strategic_Services Space Relations: A Slightly Gothic Interplanetary Tale https://books.google.com/books?id=q1fQIQAACAAJ Space relations; a slightly Gothic interplanetary tale https://search.worldcat.org/title/856626 Jeffrey Epstein Taught at Dalton. His Behavior Was Noticed. https://www.nytimes.com/2019/07/12/nyregion/jeffrey-epstein-dalton-teacher.html Barr Quits Dalton School Post, Charging Trustees' Interference https://www.nytimes.com/1974/02/20/archives/barr-quits-dalton-school-post-charging-trustees-interference.html Epstein Truthers Are Obsessed With a Sci-Fi Book About Child Sex Slavery Written by Bill Barr's Dad https://www.vice.com/en/article/epstein-truthers-are-obsessed-with-a-sci-fi-book-about-child-sex-slavery-written-by-bill-barrs-dad/ Jeffrey Epstein Taught at Dalton. His Behavior Was Noticed. https://www.nytimes.com/2019/07/12/nyregion/jeffrey-epstein-dalton-teacher.html SHOULD HOLDEN CAULFIELD READ THESE BOOKS? By Donald Barr https://www.nytimes.com/1986/05/04/books/should-holden-caulfield-read-these-books.html As attorney general, William Barr personally investigated Jeffrey Epstein's death. Now Congress has questions. https://www.cbsnews.com/news/jeffrey-epstein-william-barr-deposition-congress/ Swirling Abyss By Man's Bed? https://forums.forteana.org/index.php?threads/swirling-abyss-by-mans-bed.60890/ Census of Hallucinations https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/census-hallucinations#: Donald J. 
West https://en.wikipedia.org/wiki/Donald_J._West Society for Psychical Research https://en.wikipedia.org/wiki/Society_for_Psychical_Research ------------------------------------------------ Logo Art By Ash Black Opening Song: "Atlantis Attacks" Closing Song: "Bella Royale" Music By Simple Rabbitron 3000 created by Eerbud Thanks to Chris K, Founder Of The Golden Rabbit Brigade Dead Rabbit Archivist Some Weirdo On Twitter AKA Jack YouTube Champ Stewart Meatball Reddit Champ: TheLast747 The Haunted Mic Arm provided by Chyme Chili Forever Fluffle: Cantillions, Samson, Gregory Gilbertson, Jenny the Cat Discord Mods: Mason http://www.DeadRabbit.com Email: DeadRabbitRadio@gmail.com Twitter: https://twitter.com/DeadRabbitRadio Facebook: www.Facebook.com/DeadRabbitRadio TikTok: https://www.tiktok.com/@deadrabbitradio Dead Rabbit Radio Subreddit: https://www.reddit.com/r/DeadRabbitRadio/ Paranormal News Subreddit: https://www.reddit.com/r/ParanormalNews/ Mailing Address Jason Carpenter PO Box 1363 Hood River, OR 97031 Paranormal, Conspiracy, and True Crime news as it happens! Jason Carpenter breaks the stories they'll be talking about tomorrow, assuming the world doesn't end today. All Contents Of This Podcast Copyright Jason Carpenter 2018 - 2025
Adrian Holovaty is the founder and CEO of Soundslice, a web-based application for learning music that combines audio, video, and music notation. We recorded this conversation in July of 2025, a few weeks after Soundslice was in the news because ChatGPT was directing its users to Soundslice, promising a feature that did not exist. What makes the story even more insane is that in response Adrian decided to quickly develop and implement that feature, making Soundslice arguably the first company in history forced to add a feature to its product because of an AI hallucination. Follow the work of the Music Notation Community Group
Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they discuss the transformative world of Generative AI. Together, they uncover the ways in which generative AI agents are changing the way we interact with technology, automating tasks and delivering new possibilities. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week was Part 2 of our conversation on core AI concepts, where we went over the basics of data science. In Part 3 today, we'll look at generative AI and gen AI agents in detail. To help us with that, we have Himanshu Raj, Principal AI/ML Instructor. Hi Himanshu, what's the difference between traditional AI and generative AI? 01:01 Himanshu: So until now, when we talked about artificial intelligence, we usually meant models that could analyze information and make decisions based on it, like a judge who looks at evidence and gives a verdict. And that's what we call traditional AI that's focused on analysis, classification, and prediction. But with generative AI, something remarkable happens. Generative AI does not just evaluate. It creates. It's more like a storyteller who uses knowledge from the past to imagine and build something brand new. For example, instead of just detecting if an email is spam, generative AI could write an entirely new email for you. Another example, traditional AI might predict what a photo contains. Generative AI, on the other hand, creates a brand-new photo based on description. Generative AI refers to artificial intelligence models that can create entirely new content, such as text, images, music, code, or video that resembles human-made work. Instead of simple analyzing or predicting, generative AI produces something original that resembles what a human might create. 02:16 Lois: How did traditional AI progress to the generative AI we know today? Himanshu: First, we will look at small supervised learning. So in early days, AI models were trained on small labeled data sets. For example, we could train a model with a few thousand emails labeled spam or not spam. The model would learn simple decision boundaries. If email contains, "congratulations," it might be spam. This was efficient for a straightforward task, but it struggled with anything more complex. Then, comes the large supervised learning. 
As the internet exploded, massive data sets became available, so millions of images, billions of text snippets, and models got better because they had much more data and stronger compute power and thanks to advances, like GPUs, and cloud computing, for example, training a model on millions of product reviews to predict customer sentiment, positive or negative, or to classify thousands of images in cars, dogs, planes, etc. Models became more sophisticated, capturing deeper patterns rather than simple rules. And then, generative AI came into the picture, and we eventually reached a point where instead of just classifying or predicting, models could generate entirely new content. Generative AI models like ChatGPT or GitHub Copilot are trained on enormous data sets, not to simply answer a yes or no, but to create outputs that look and feel like human made. Instead of judging the spam or sentiment, now the model can write an article, compose a song, or paint a picture, or generate new software code. 03:55 Nikita: Himanshu, what motivated this sort of progression? Himanshu: Because of the three reasons. First one, data, we had way more of it thanks to the internet, smartphones, and social media. Second is compute. Graphics cards, GPUs, parallel computing, and cloud systems made it cheap and fast to train giant models. And third, and most important is ambition. Humans always wanted machines not just to judge existing data, but to create new knowledge, art, and ideas. 04:25 Lois: So, what's happening behind the scenes? How is gen AI making these things happen? Himanshu: Generative AI is about creating entirely new things across different domains. On one side, we have large language models or LLMs. They are masters of generating text conversations, stories, emails, and even code. And on the other side, we have diffusion models. They are the creative artists of AI, turning text prompts into detailed images, paintings, or even videos. And these two together are like two different specialists. The LLM acts like a brain that understands and talks, and the diffusion model acts like an artist that paints based on the instructions. And when we connect these spaces together, we create something called multimodal AI, systems that can take in text and produce images, audio, or other media, opening a whole new range of possibilities. It can not only take the text, but also deal in different media options. So today when we say ChatGPT or Gemini, they can generate images, and it's not just one model doing everything. These are specialized systems working together behind the scenes. 05:38 Lois: You mentioned large language models and how they power text-based gen AI, so let's talk more about them. Himanshu, what is an LLM and how does it work? Himanshu: So it's a probabilistic model of text, which means, it tries to predict what word is most likely to come next based on what came before. This ability to predict one word at a time intelligently is what builds full sentences, paragraphs, and even stories. 06:06 Nikita: But what's large about this? Why's it called a large language model? Himanshu: It simply means the model has lots and lots of parameters. And think of parameters as adjustable dials the model fine tuned during learning. There is no strict rule, but today, large models can have billions or even trillions of these parameters. And the more the parameters, more complex patterns, the model can understand and can generate a language better, more like human. 
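Himanshu's description of an LLM as a model that "predicts the next most likely word" can be made concrete with a toy example. The sketch below is not how a transformer or any production LLM is implemented; it is a deliberately tiny bigram counter, included only to illustrate the generate-one-word-at-a-time loop he describes.

```python
# Toy illustration of "predict the next word, one word at a time".
# A real LLM replaces this bigram count table with billions of learned
# parameters, but the generation loop is conceptually the same.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the model".split()

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # nothing ever followed this word in the training text
        # Greedy choice: pick the single most likely next word.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```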
06:37 Nikita: Ok… and image-based generative AI is powered by diffusion models, right? How do they work? Himanshu: Diffusion models start with something that looks like pure random noise. Imagine static on an old TV screen. No meaningful image at all. From there, the model carefully removes noise step by step to create something more meaningful and think of it like sculpting a statue. You start with a rough block of stone and slowly, carefully you chisel away to reveal a beautiful sculpture hidden inside. And in each step of this process, the AI is making an educated guess based on everything it has learned from millions of real images. It's trying to predict. 07:24 Stay current by taking the 2025 Oracle Fusion Cloud Applications Delta Certifications. This is your chance to demonstrate your understanding of the latest features and prove your expertise by obtaining a globally recognized certification, all for free! Discover the certification paths, use the resources on MyLearn to prepare, and future-proof your skills. Get started now at mylearn.oracle.com. 07:53 Nikita: Welcome back! Himanshu, for most of us, our experience with generative AI is with text-based tools like ChatGPT. But I'm sure the uses go far beyond that, right? Can you walk us through some of them? Himanshu: First one is text generation. So we can talk about chatbots, which are now capable of handling nuanced customer queries in banking travel and retail, saving companies hours of support time. Think of a bank chatbot helping a customer understand mortgage options or virtual HR Assistant in a large company, handling leave request. You can have embedding models which powers smart search systems. Instead of searching by keywords, businesses can now search by meaning. For instance, a legal firm can search cases about contract violations in tech and get semantically relevant results, even if those exact words are not used in the documents. The third one, for example, code generation, tools like GitHub Copilot help developers write boilerplate or even functional code, accelerating software development, especially in routine or repetitive tasks. Imagine writing a waveform with just a few prompts. The second application, is image generation. So first obvious use is art. So designers and marketers can generate creative concepts instantly. Say, you need illustrations for a campaign on future cities. Generative AI can produce dozens of stylized visuals in minutes. For design, interior designers or architects use it to visualize room layouts or design ideas even before a blueprint is finalized. And realistic images, retail companies generate images of people wearing their clothing items without needing real models or photoshoots, and this reduces the cost and increase the personalization. Third application is multimodal systems, and these are combined systems that take one kind of input or a combination of different inputs and produce different kind of outputs, or can even combine various kinds, be it text image in both input and output. Text to image It's being used in e-commerce, movie concept art, and educational content creation. For text to video, this is still in early days, but imagine creating a product explainer video just by typing out the script. Marketing teams love this for quick turnarounds. And the last one is text to audio. Tools like ElevenLabs can convert text into realistic, human like voiceovers useful in training modules, audiobooks, and accessibility apps. So generative AI is no longer just a technical tool. 
It's becoming a creative copilot across departments, whether it's marketing, design, product support, and even operations. 10:42 Lois: That's great! So, we've established that generative AI is pretty powerful. But what kind of risks does it pose for businesses and society in general? Himanshu: The first one is deepfakes. Generative AI can create fake but highly realistic media, video, audios or even faces that look and sound authentic. Imagine a fake video of a political leader announcing a policy, they never approved. This could cause mass confusion or even impact elections. In case of business, deepfakes can be also used in scams where a CEO's voice is faked to approve fraudulent transactions. Number two, bias, if AI is trained on biased historical data, it can reinforce stereotypes even when unintended. For example, a hiring AI system that favors male candidates over equally qualified women because of historical data was biased. And this bias can expose companies to discrimination, lawsuits, brand damage and ethical concerns. Number three is hallucinations. So sometimes AI system confidently generate information that is completely wrong without realizing it. Sometimes you ask a chatbot for a legal case summary, and it gives you a very convincing but entirely made up court ruling. In case of business impact, sectors like health care, finance, or law hallucinations can or could have serious or even dangerous consequences if not caught. The fourth one is copyright and IP issues, generative AI creates new content, but often, based on material it was trained on. Who owns a new work? A real life example could be where an artist finds their unique style was copied by an AI that was trained on their paintings without permission. In case of a business impact, companies using AI-generated content for marketing, branding or product designs must watch for legal gray areas around copyright and intellectual properties. So generative AI is not just a technology conversation, it's a responsibility conversation. Businesses must innovate and protect. Creativity and caution must go together. 12:50 Nikita: Let's move on to generative AI agents. How is a generative AI agent different from just a chatbot or a basic AI tool? Himanshu: So think of it like a smart assistant, not just answering your questions, but also taking actions on your behalf. So you don't just ask, what's the best flight to Vegas? Instead, you tell the agent, book me a flight to Vegas and a room at the Hilton. And it goes ahead, understands that, finds the options, connects to the booking tools, and gets it done. So act on your behalf using goals, context, and tools, often with a degree of autonomy. Goals, are user defined outcomes. Example, I want to fly to Vegas and stay at Hilton. Context, this includes preferences history, constraints like economy class only or don't book for Mondays. Tools could be APIs, databases, or services it can call, such as a travel API or a company calendar. And together, they let the agent reason, plan, and act. 14:02 Nikita: How does a gen AI agent work under the hood? Himanshu: So usually, they go through four stages. First, one is understands and interprets your request like natural language understanding. Second, figure out what needs to be done, in this case flight booking plus hotel search. Third, retrieves data or connects to tools APIs if needed, such as Skyscanner, Expedia, or a Calendar. And fourth is takes action. 
That means confirming the booking and giving you a response like your travel is booked. Keep in mind not all gen AI agents are fully independent. 14:38 Lois: Himanshu, we've seen people use the terms generative AI agents and agentic AI interchangeably. What's the difference between the two? Himanshu: Agentic AI is a broad umbrella. It refers to any AI system that can perceive, reason, plan, and act toward a goal and may improve and adapt over time. Most gen AI agents are reactive, not proactive. On the other hand, agentic AI can plan ahead, anticipate problems, and can even adjust strategies. So gen AI agents are often semi-autonomous. They act in predefined ways or with human approval. Agentic systems can range from low to full autonomy. For example, auto-GPT runs loops without user prompts and autonomous car decides routes and reactions. Most gen AI agents can only make multiple steps if explicitly designed that way, like a step-by-step logic flows in LangChain. And in case of agentic AI, it can plan across multiple steps with evolving decisions. On the memory and goal persistence, gen AI agents are typically stateless. That means they forget their goal unless you remind them. In case of agentic AI, these systems remember, adapt, and refine based on goal progression. For example, a warehouse robot optimizing delivery based on changing layouts. Some generative AI agents are agentic, like auto GPT. They use LLMs to reason, plan, and act, but not all. And likewise not all agentic AIs are generative. For example, an autonomous car, which may use computer vision control systems and planning, but no generative models. So agentic AI is a design philosophy or system behavior, which could be goal-driven, autonomous, and decision making. They can overlap, but as I said, not all generative AI agents are agentic, and not all agentic AI systems are generative. 16:39 Lois: What makes a generative AI agent actually work? Himanshu: A gen AI agent isn't just about answering the question. It's about breaking down a user's goal, figuring out how to achieve it, and then executing that plan intelligently. These agents are built from five core components and each playing a critical role. The first one is goal. So what is this agent trying to achieve? Think of this as the mission or intent. For example, if I tell the agent, help me organized a team meeting for Friday. So the goal in that case would be schedule a meeting. Number 2, memory. What does it remember? So this is the agent's context awareness. Storing previous chats, preferences, or ongoing tasks. For example, if last week I said I prefer meetings in the afternoon or I have already shared my team's availability, the agent can reuse that. And without the memory, the agent behaves stateless like a typical chatbot that forgets context after every prompt. Third is tools. What can it access? Agents aren't just smart, they are also connected. They can be given access to tools like calendars, CRMs, web APIs, spreadsheets, and so on. The fourth one is planner. So how does it break down the goal? And this is where the reasoning happens. The planner breaks big goals into a step-by-step plans, for example checking team availability, drafting meeting invite, and then sending the invite. And then probably, will confirm the booking. Agents don't just guess. They reason and organize actions into a logical path. And the fifth and final one is executor, who gets it done. And this is where the action takes place. The executor performs what the planner lays out. 
For example, calling APIs, sending message, booking reservations, and if planner is the architect, executor is the builder. 18:36 Nikita: And where are generative AI agents being used? Himanshu: Generative AI agents aren't just abstract ideas, they are being used across business functions to eliminate repetitive work, improve consistency, and enable faster decision making. For marketing, a generative AI agent can search websites and social platforms to summarize competitor activity. They can draft content for newsletters or campaign briefs in your brand tone, and they can auto-generate email variations based on audience segment or engagement history. For finance, a generative AI agent can auto-generate financial summaries and dashboards by pulling from ERP spreadsheets and BI tools. They can also draft variance analysis and budget reports tailored for different departments. They can scan regulations or policy documents to flag potential compliance risks or changes. For sales, a generative AI agent can auto-draft personalized sales pitches based on customer behavior or past conversations. They can also log CRM entries automatically once submitting summary is generated. They can also generate battlecards or next-step recommendations based on the deal stage. For human resource, a generative AI agent can pre-screen resumes based on job requirements. They can send interview invites and coordinate calendars. A common theme here is that generative AI agents help you scale your teams without scaling the headcount. 20:02 Nikita: Himanshu, let's talk about the capabilities and benefits of generative AI agents. Himanshu: So generative AI agents are transforming how entire departments function. For example, in customer service, 24/7 AI agents handle first level queries, freeing humans for complex cases. They also enhance the decision making. Agents can quickly analyze reports, summarize lengthy documents, or spot trends across data sets. For example, a finance agent reviewing Excel data can highlight cash flow anomalies or forecast trends faster than a team of analysts. In case of personalization, the agents can deliver unique, tailored experiences without manual effort. For example, in marketing, agents generate personalized product emails based on each user's past behavior. For operational efficiency, they can reduce repetitive, low-value tasks. For example, an HR agent can screen hundreds of resumes, shortlist candidates, and auto-schedule interviews, saving HR team hours each week. 21:06 Lois: Ok. And what are the risks of using generative AI agents? Himanshu: The first one is job displacement. Let's be honest, automation raises concerns. Roles involving repetitive tasks such as data entry, content sorting are at risk. In case of ethics and accountability, when an AI agent makes a mistake, who is responsible? For example, if an AI makes a biased hiring decision or gives incorrect medical guidance, businesses must ensure accountability and fairness. For data privacy, agents often access sensitive data, for example employee records or customer history. If mishandled, it could lead to compliance violations. In case of hallucinations, agents may generate confident but incorrect outputs called hallucinations. This can often mislead users, especially in critical domains like health care, finance, or legal. So generative AI agents aren't just tools, they are a force multiplier. But they need to be deployed thoughtfully with a human lens and strong guardrails. 
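As a rough illustration of the five components described above (goal, memory, tools, planner, executor), here is a hypothetical skeleton in Python; every class, method, and tool name is invented for the example and does not correspond to Oracle's or any vendor's agent framework.

```python
# Hypothetical skeleton of the five agent components: goal, memory, tools,
# planner, executor. Names and steps are illustrative only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    goal: str                                   # what the user wants done
    memory: dict = field(default_factory=dict)  # preferences, prior context
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def plan(self) -> list[str]:
        # Planner: break the goal into ordered steps. A real agent would
        # ask an LLM to produce this plan; here it is hard-coded.
        return ["check_calendar", "draft_invite", "send_invite"]

    def execute(self) -> list[str]:
        # Executor: carry out each planned step with the matching tool.
        results = []
        for step in self.plan():
            tool = self.tools.get(step)
            results.append(tool(self.memory) if tool else f"no tool for {step}")
        return results

# Toy tools standing in for real calendar / email APIs.
tools = {
    "check_calendar": lambda mem: f"free slots found ({mem.get('preference', 'any time')})",
    "draft_invite":   lambda mem: "invite drafted",
    "send_invite":    lambda mem: "invite sent",
}

agent = Agent(goal="Organize a team meeting for Friday",
              memory={"preference": "afternoons"},
              tools=tools)
print(agent.execute())
```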
And that's how we ensure the benefits outweigh the risks. 22:10 Lois: Thank you so much, Himanshu, for educating us. We've had such a great time with you! If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on the AI for You course. Nikita: Join us next week as we chat about AI workflows and tools. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 22:32 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
What exactly is an AI agent, and how will agents change the way we work? In this episode, a16z general partners Erik Torenberg and Martin Casado sit down with Aaron Levie (CEO, Box) and Steven Sinofsky (a16z board partner; former Microsoft exec) to unpack one of the hottest debates in AI right now.
They cover:
Competing definitions of an "agent," from background tasks to autonomous interns
Why today's agents look less like a single AGI and more like networks of specialized sub-agents
The technical challenges of long-running, self-improving systems
How agent-driven workflows could reshape coding, productivity, and enterprise software
What history — from the early PC era to the rise of the internet — tells us about platform shifts like this one
The conversation moves from deep technical questions to big-picture implications for founders, enterprises, and the future of work.
Timecodes:
0:00 Introduction: The Evolution of AI Agents
0:36 Defining Agency and Autonomy
1:54 Long-Running Agents and Feedback Loops
4:49 Specialization and Task Division in AI
6:20 Human-AI Collaboration and Productivity
6:59 Anthropomorphizing AI and Economic Impact
9:10 Predictions, Progress, and Platform Shifts
11:31 Recursive Self-Improvement and Technical Challenges
13:20 Hallucinations, Verification, and Expert Productivity
16:20 The Role of Experts and Tool Adoption
22:14 Changing Workflows: Agents Reshaping Work Patterns
45:55 Division of Labor, Specialization, and New Roles
48:47 Verticalization, Applied AI, and the Future of Agents
54:44 Platform Competition and the Application Layer
55:29 Closing Thoughts and Takeaways
Resources:
Find Aaron on X: https://x.com/levie
Find Martin on X: https://x.com/martin_casado
Find Steven on X: https://x.com/stevesi
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Ghostly Encounters on Mount Everest Lisa Morton delves into the eerie tales and ghost stories surrounding Mount Everest in this week's Ghost Report. From shadow figures in the Death Zone to strange voices in snowstorms and mysterious flying creatures, she discusses the supernatural elements and hallucinations experienced by climbers. Despite the extreme conditions of the world's highest mountain, these spooky encounters make Everest both a dangerous and fascinating place. 00:00 Introduction and Previous Topic Recap 00:22 Mount Everest: The Death Zone 01:12 Ghostly Encounters on Everest 01:50 Strange Phenomena and Hallucinations 02:24 Conclusion and Upcoming Topic
Welcome to episode 317 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt, and an out-of-breath (from outrunning bears) Ryan are back in the studio to bring you another episode of everyone's favorite cloud and AI news wrap-up. This week we've got GPT-5, Oracle's newly minted AI conference, hallucinations (not the good kind), and even a Cloud Journey follow-up. Let's get into it!
Titles we almost went with this week:
Oracle Intelligence: Mission Las Vegas
AI World: Oracle's Excellent Adventure
AI Gets a Reality Check: Amazon's New Math Teacher for Hallucinating Models
Jules Verne's 20,000 Lines Under the C
GPT-5: The Empire Strikes Back at Computing Costs
5⃣Five Alive: OpenAI's Latest Language Model Drops
GPT-5 is Alive! (And Ready for Your API Calls)
From Kanban to Kan't-Ban: Alienate Your User Base in One Update
No More Console Hopping: ECS Logs Stay Put
Following the Paper Trail: ECS Logs Go Live
The Pull Request Whisperer
Five's Company: DigitalOcean Joins the GPT Party
WireGuard Your Kubernetes: The Mesh-iah Has Arrived
EKS-tending Your Reach: When Your Nodes Need a VPN Alternative
Buttercup Blooms: DARPA's Prize-Winning AI Security Tool Goes Public
From DARPA to Docker: How Buttercup Brings AI Bug-Hunting to Your Laptop
Agent 007: License to Query
Compliance Manager: Because Nobody Dreams of Filling Out Federal Paperwork
Do Compliance Managers dream of Public Sector sheep?
Blob's Your Uncle: Finding Lost Data in the Cloud
Wassette: Teaching Your AI Assistant to Go Shopping for Tools
Monitor, Monitor on the Wall, Who's the Most Secure of All?
Better Late Than IPv-Never
VPC Logs: Now with 100% Less Manual Labor
CloudWatch Catches All the Flows in Your Organization
The Organization-Wide Net: No VPC Left Behind
SQS Goes Super Size: Would You Like to Quadruple That?
One MiB to Rule Them All: SQS's Payload Growth Spurt
Microsoft Finally Merges with Its $7.5 Billion Side Piece
From Hub to Spoke: GitHub Loses Its Independence
Cloud Run Forest Run: Google's AI Workshop Marathon
From Zero to AI Hero: Google's Production Pipeline Workshop
The Fast and the Serverless: Cloud Run Drift
A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info.
General News
01:17 GitHub will be folded into Microsoft proper as CEO steps down – Ars Technica
GitHub will lose its operational independence and be integrated into Microsoft's CoreAI organization in 2025, ending its separate CEO structure that has existed since Microsoft's $7.5 billion acquisition in 2018. The reorganization eliminates the CEO position, with GitHub's leadership team reporting to multiple executives within CoreAI rather than a single leader, potentially impacting decision-making speed and product direction.
What if the words we use to describe AI are misleading us—and even shaping how we regulate, trust, and collaborate with technology itself? In this revelatory episode of the Tech Humanist Show, host Kate O'Neill sits down with Anna Mills and Nate Angell, co-authors of a provocative new paper arguing that the term “AI hallucination” […]
In this bonus episode, John and Matt preview their upcoming conversation about trends in AI, the evolution of GPT modes and the ongoing debate between data warehouses and agent-based approaches.The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
As artificial intelligence-generated writing becomes widespread in modern life, hallucinations are showing up with alarming frequency in legal writing. Professor Christina Frohock generates some content on recent kerfuffles in the courts. Recorded on July 21, 2025.
Here in Jeremiah, you can hear the understandable anger in God's words with respect to the harmful misinformation coming from the false prophets. In fact, in loving concern he flags for us the danger it presents. This continues to be true today. In love, He redirects us to His Word and the remarkable and rock solid assurances contained therein.
Part 3 of our Midwest Super Slam mini-series! Coaches Loretta and Lindsay sit down with athlete Lily Medina after Burning River 100—her third 100-miler of the Slam—where she remains first overall. We break down the highs, lows, and hard lessons from Ohio and look ahead to Hallucination 100 in six weeks. In this episode:
Burning River debrief: lows and highs, problem-solving on the fly
Fueling wins & fixes after three 100s
What's holding up vs. what needs TLC (body + brain)
Six-week rebuild: training, recovery, and taper priorities
Loop-course tactics for Hallucination 100 and managing heat/humidity
Competing while leading the Slam: pressure → performance
If you're chasing your first 100—or your fourth—this one's packed with practical takeaways.
On this Freedom Friday, we wrapped up our “Ask the Expert” week with our very own Karl Clauson and special guest, Dr. Mark Hitchcock. Karl discussed the importance of knowing where you stand with the Lord and how to break out of the “Hallucination of Salvation.” Karl also turned to the phone lines and texts to answer listener questions about salvation. Karl Clauson is the weekday morning host of the nationally syndicated radio program, Karl and Crew. He is also the lead Pastor at 180 Chicago Church. He has also authored several books including “The 7 Resolutions” and “Killing Sin: Conquer the One Thing That Is Defeating You.” You can hear the highlights of today's program on the Karl and Crew Showcast. Karl and Crew airs live weekday mornings from 5-9 a.m. Central Time. Click this link for ways to listen in your area! https://www.moodyradio.org/ways-to-listen/ Donate to Moody Radio: http://moodyradio.org/donateto/morningshow See omnystudio.com/listener for privacy information.
In today's Cloud Wars Minute, I explore AWS's bold new approach to eliminating AI hallucinations using automated reasoning and formal logic.
Highlights
00:04 — AWS has announced that automated reasoning checks, a new Amazon Bedrock guardrails policy, are now generally available. In a blog post, AWS's Chief Evangelist (EMEA), Danilo Poccia, said: "Automated reasoning checks help you validate the accuracy of content generated by foundation models against domain knowledge. This can help prevent factual errors due to AI hallucinations."
00:38 — The policy uses mathematical logic and formal verification techniques to validate accuracy. The biggest takeaway from this news is that AWS's approach differs dramatically from probabilistic reasoning methods. Instead, automated reasoning checks provide 99% verification accuracy.
01:10 — This means that the new policy is significantly more reliable in ensuring factual accuracy than traditional methods. The issue of hallucinations was a significant concern when generative AI first emerged. The problems associated with non-factual content are becoming increasingly damaging. This new approach represents an important leap forward. Visit Cloud Wars for more.
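The announcement itself contains no code, and the sketch below is not the Bedrock guardrails API. It is only a conceptual illustration, in plain Python, of the underlying idea: checking a claim extracted from a model's answer against explicit, hand-written domain rules rather than scoring it probabilistically. All rule names, fields, and thresholds are invented.

```python
# Conceptual illustration only (not the Bedrock API): validating a model's
# structured claim against explicit domain rules. Rules and fields are made up.
from dataclasses import dataclass

@dataclass
class Claim:
    employee_tenure_years: float
    requested_leave_days: int

# Domain "policy" expressed as hard rules rather than probabilities.
def rule_max_leave(claim: Claim) -> bool:
    # Hypothetical policy: at most 20 days of leave per request.
    return claim.requested_leave_days <= 20

def rule_min_tenure(claim: Claim) -> bool:
    # Hypothetical policy: more than 15 days requires 2+ years of tenure.
    return claim.requested_leave_days <= 15 or claim.employee_tenure_years >= 2

RULES = [rule_max_leave, rule_min_tenure]

def validate(claim: Claim) -> list[str]:
    """Return the names of rules the claim violates (empty list = consistent)."""
    return [rule.__name__ for rule in RULES if not rule(claim)]

if __name__ == "__main__":
    # A claim extracted from a model's answer; this one violates the tenure rule.
    print(validate(Claim(employee_tenure_years=1.0, requested_leave_days=18)))
```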
duration: 00:04:22 - La Chronique du Grand Continent - by Gilles Gressani - In Silicon Valley there are plenty of strange things, but one in particular leaves you stunned every time.
Prepare to be inspired as we dive into Ryan John's extraordinary journey of spiritual awakening and transformation. After a life-altering near-death experience (NDE), Ryan shares his vivid memories of visiting breathtaking crystal cities, exploring realms of limitless potential, and reconnecting with the essence of unconditional love. His personal tale reveals profound insights into the afterlife, the healing power of forgiveness, and the incredible resilience of the human spirit. In this heartfelt conversation hosted by Shannon Torrence on the Magic Is Real podcast, Ryan recounts how his traumatic head injury not only brought him to the brink of death but also opened the door to a transformative spiritual journey. From witnessing intricate crystal landscapes to experiencing unconditional peace and joy, Ryan's story is a powerful reminder of the beauty and possibilities that extend beyond our earthly existence. Along the way, he shares how his NDE heightened his intuition, deepened his compassion, and reignited his purpose to help others. This is more than a story of survival—it's a testament to the healing journey of the soul and the power of spiritual awakening. Whether you're curious about past life regression, spiritual healing, intuitive development, or dreamwork, Ryan's experiences will leave you in awe and inspire you to explore your own connection to the divine. Thank you for supporting Magic Is Real! Please like, comment, and share this video to help others discover these incredible stories. Follow us on Instagram, explore our Patreon for exclusive content, and email us at magicisrealshannon@gmail.com for mediumship readings. Don't forget to subscribe and join our mission to uncover and share transformative spiritual experiences. Together, we can spread hope, unity, and enlightenment. Thank you for being part of our community, and we look forward to connecting with you on this beautiful journey of discovery!
Dedicated to our dear Amigo Bill Hollweg, who loved the X Minus One series. Series: X Minus One Show: Hallucination Orbit Date: May 15, 1956 CAST: ANNOUNCER CAPTAIN, of the spaceship……………………….Lothar Tuppan MR. CHAKA, Mate of the spaceship……………..Jeff Billard DANBURY, a guest on the spaceship……………Jack Ward COLIN ORD, mostly sardonic……………………..Mark Kalita UNA, beautiful and smart…………………………..Angela Young ELSA, beautiful and tough…………………………Janet Didur MARILYN, beautiful and kind………………………Tanja Milojevic The Amigos and Audio Groove Cats present the X Minus One classic “Hallucination Orbit,” in which a man alone on a space station for far too long struggles to figure out whether what he's seeing is real or whether he is just hallucinating.
Some things don't want to be seen… but they still show up. In a flicker of motion at the edge of a stairwell. In the sound of knuckles tapping from inside the walls. In a face that looks familiar—until it doesn't. You tell yourself it's nothing. A draft. A dream. Old wiring. But what if it isn't? What if that shadow you all saw wasn't a trick of the light? What if the alarm that won't stop isn't broken—but trying to warn you? Tonight, the stories are quiet, but they don't stay that way. They reach. They follow. They watch. And after the intro… we listen. We're getting back to our roots with today's episode as we dive into some spooky firsthand paranormal horror stories! We'll see you on the inside... ----- TIME STAMPS: 0:00 - Hot Goss & Juicy Deets 1:08 - Intro 2:27 - Welcome Back 2:33 - Christian Always ALWAYS Dates Us.. 3:33 - Sticker Packs, SauceSquatch & New Merch 4:42 - STORY: Somewhat Alarming 8:03 - Not Christian's Favorite Sound Byte 10:23 - STORY: The 4-Way Shared Hallucination 10:58 - The Equality/Sexism Tangent 13:23 - Workin' Your Balls to the Bone (A Classic Saying) 15:43 - Heavy Metal Music vs. Christian Metal 17:23 - Potty Mouth Scott Needs to Get Back Under Control 20:23 - STORY: The Twisty Black Shadow 23:23 - Is There Still a Stigma Attached to Believers? 24:23 - Shoutout Christina at Fred Meyer? 24:43 - Canadian Listeners, Christian Might See You Soon. 25:23 - Go Listen to Nick Thune 27:34 - Is Scott Wildly Too Racist or Are You a Sensitive Little B*tch? 28:23 - Tangent Talk 29:49 - Christian Channels His Inner-Scott 31:13 - STORY: Knock Knock Knocking On This Guy's Wall 32:48 - Early Birds v. Night Owls, Which Are You? 35:03 - Was It a Woodpecker? 38:13 - STORY: The Protective Aunt 39:43 - Alien Ant Farm v. Michael Jackson: Smooth Criminal 43:03 - STORY: The Cranky Old Man 52:03 - The Problem with EVPs 54:03 - Sleep With The Lights On Returns This Fall 55:33 - Outro _____________________________________________ +PRODUCERS: Eric Long, Daniel Heng, Anthony M, Marlene Olmos +BECOME A PRODUCER: http://bit.ly/3WZ3xTg +BUY A $9 SHOUT-OUT: https://holler.baby/thefreakydeaky The Twilight Zone meets Mystery Science Theatre 3000 meets an uncomfortable Thanksgiving dinner conversation with your in-laws. TFD is a weekly paranormal comedy podcast featuring real ghost stories, cryptid lore discussions, and true paranormal experiences hosted by believer/skeptic in-laws. Recorded in an undisclosed location somewhere in the beautiful woods of Wasilla, Alaska. +SUBMIT YOUR (TRUE) STORY: —Email: thegang@thefreakydeaky.com —Voicemail: 801-997-0051 +WEBSITE & MERCH: —Website: www.thefreakydeaky.com —Merch: www.thefreakydeaky.com/store +JOIN THE DISCUSSION: —TFD Facebook Group: https://tinyurl.com/tfdfb —Instagram: https://bit.ly/2HOdleo —Facebook: https://bit.ly/3ebSde6
Dan begins with a story about a woman who is dealing with a family tragedy and the paranormal. Her brother is seemingly unaware, which may make it worse. Then, in Pennsylvania, the haunted lore associated with Brownsville's Nemacolin Castle brings forth an interesting haunted house tale. Lynze brings us three stories this week. Her first takes us to Idaho Falls, a long and dark drive that includes an unwanted passenger. Then, a woman declares she is no Darren but also won't be moving despite seeing her daughter and her not-daughter in her home. Lastly, a creepy mirror hung in an Airbnb has a group of friends on edge. STDP Fan Stories Book #6: CREEPS AND PEEPERS! IF YOU CAN EVEN BELIEVE IT, WE ARE READY TO START TALKING ABOUT BOOK #6. FIRST OF ALL, THANK YOU FOR YOUR CONTINUED LOVE AND SUPPORT OVER THESE PAST 6 YEARS. IT FEELS IMPOSSIBLE TO US THAT WE HAVE BEEN DOING THIS FOR THAT LONG! IN A WORLD WITH SO MUCH CONTENT TO CHOOSE FROM, WE REMAIN HUMBLED BY YOUR SUPPORT. LAST YEAR, WE HEARD WHEN YOU SAID THERE WAS TOO MUCH TIME BETWEEN THE PRE-SALE AND THE ACTUAL DELIVERY OF THE BOOK. WE ALSO HEARD WHEN YOU SAID THERE WERE TOO MANY WEEKS OF ANNOUNCEMENTS BEFORE THE PRE SALE HAPPENED. NO WORRIES! WE THINK WE HAVE A BETTER SYSTEM THIS YEAR! WE ARE DOING A LIMITED RUN OF 1,500 AUTOGRAPHED BOOKS- SAME AS LAST YEAR! 500 BOOKS WILL GO ON SALE JUST FOR ANNABELLE'S ON TUESDAY, AUGUST 19TH AT 12 NOON PT/3 PM ET. THE REMAINING 1,000 BOOKS WILL GO ON SALE WEDNESDAY, AUGUST 20TH, FOR ANYONE WHO WANTS ONE AT 12 NOON PT/3 PM ET. ALL BOOKS ARE PRE SALE AND WILL BE SHIPPED OUT TO YOU AS SOON AS THEY ARE READY. WE CAN GUARANTEE YOU WILL HAVE YOUR BOOKS ON OR BEFORE HALLOWEEN! YAY! ALL BOOKS WILL BE PURCHASED ON OUR MERCH WEBSITE, BADMAGICPRODUCTIONS.COM Want to be a Patron? Get episodes AD-FREE, listen and watch before they are released to anyone else, bonus episodes, a 20% merch discount, additional content, and more! Learn more by visiting: https://www.patreon.com/scaredtodeathpodcast. Send stories to mystory@scaredtodeathpodcast.com Send everything else to info@scaredtodeathpodcast.com Please rate, review, and subscribe anywhere you listen. Thank you for listening! Follow the show on social media: @scaredtodeathpodcast on Facebook and IG and TT. Website: https://www.badmagicproductions.com/ Facebook: https://www.facebook.com/scaredtodeathpodcast Instagram: https://bit.ly/2miPLf5 Mailing Address: Scared to Death c/o Timesuck Podcast PO Box 3891 Coeur d'Alene, ID 83816 Opening Sumerian protection spell (adapted): "Whether thou art a ghost that hath come from the earth, or a phantom of night that hath no home… or one that lieth dead in the desert… or a ghost unburied… or a demon or a ghoul… Whatever thou be until thou art removed… thou shalt find here no water to drink… Thou shalt not stretch forth thy hand to our own… Into our house enter thou not. Through our fence, breakthrough thou not… we are protected though we may be frightened. Our life you may not steal, though we may feel SCARED TO DEATH." Subscribe to SiriusXM Podcasts+ to listen to new episodes of Scared to Death ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus.
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! Intro to Bayes Course (first 2 lessons free) Advanced Regression Course (first 2 lessons free) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work! Visit our Patreon page to unlock exclusive Bayesian swag ;) Takeaways: Bayesian deep learning is a growing field with many challenges. Current research focuses on applying Bayesian methods to neural networks. Diffusion methods are emerging as a new approach for uncertainty quantification. The integration of machine learning tools into Bayesian models is a key area of research. The complexity of Bayesian neural networks poses significant computational challenges. Future research will focus on improving methods for uncertainty quantification. Generalized Bayesian inference offers a more robust approach to uncertainty. Uncertainty quantification is crucial in fields like medicine and epidemiology. Detecting out-of-distribution examples is essential for model reliability. Exploration-exploitation trade-off is vital in reinforcement learning. Marginal likelihood can be misleading for model selection. The integration of Bayesian methods in LLMs presents unique challenges. Chapters:00:00 Introduction to Bayesian Deep Learning03:12 Panelist Introductions and Backgrounds10:37 Current Research and Challenges in Bayesian Deep Learning18:04 Contrasting Approaches: Bayesian vs. Machine Learning26:09 Tools and Techniques for Bayesian Deep Learning31:18 Innovative Methods in Uncertainty Quantification36:23 Generalized Bayesian Inference and Its Implications41:38 Robust Bayesian Inference and Gaussian Processes44:24 Software Development in Bayesian Statistics46:51 Understanding Uncertainty in Language Models50:03 Hallucinations in Language Models53:48 Bayesian Neural Networks vs Traditional Neural Networks58:00 Challenges with Likelihood Assumptions01:01:22 Practical Applications of Uncertainty Quantification01:04:33 Meta Decision-Making with Uncertainty01:06:50 Exploring Bayesian Priors in Neural Networks01:09:17 Model Complexity and Data Signal01:12:10 Marginal Likelihood and Model Selection01:15:03 Implementing Bayesian Methods in LLMs01:19:21 Out-of-Distribution Detection in LLMs Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer,...
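As a deliberately simple illustration of the uncertainty-quantification theme running through this episode, the sketch below uses a bootstrap ensemble of toy regressors and treats disagreement between ensemble members as a rough proxy for epistemic uncertainty, which is also one way to flag out-of-distribution inputs. This is a generic example, not code discussed on the show; the data, model class, and numbers are all made up.

```python
# Minimal ensemble-based uncertainty sketch: train several models on bootstrap
# resamples and use their disagreement as a rough epistemic-uncertainty signal.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=80)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)   # noisy training data

def fit_ensemble(x, y, n_members=20, degree=5):
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, x.size, size=x.size)    # bootstrap resample
        members.append(np.polyfit(x[idx], y[idx], degree))
    return members

def predict(members, x_new):
    preds = np.stack([np.polyval(coeffs, x_new) for coeffs in members])
    return preds.mean(axis=0), preds.std(axis=0)      # mean prediction and spread

members = fit_ensemble(x, y)
x_new = np.array([0.0, 2.5, 6.0])                     # 6.0 lies far outside the training data
mean, spread = predict(members, x_new)
for xi, m, s in zip(x_new, mean, spread):
    print(f"x={xi:4.1f}  prediction={m:7.2f}  uncertainty~{s:6.2f}")
# Expect the spread to blow up at x=6.0, flagging an out-of-distribution query.
```

Bayesian neural networks, deep ensembles, and the diffusion-based methods mentioned in the takeaways all aim at the same kind of "how much should I trust this prediction?" signal, just with very different machinery.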
Episode 134: In this episode of Critical Thinking - Bug Bounty Podcast, we're joined by Diego Djurado to give us the scoop on XBOW. We cover a little about its architecture and approach to hunting, the challenges with hallucinations, and the future of AI in the BB landscape. Diego also shares some of his own hacking journey and successes in the Ambassador World Cup. Follow us on Twitter at: https://x.com/ctbbpodcast Got any ideas and suggestions? Feel free to send us any feedback here: info@criticalthinkingpodcast.io Shoutout to YTCracker for the awesome intro music! ====== Links ====== Follow your hosts Rhynorater and Rez0 on Twitter: https://x.com/Rhynorater https://x.com/rez0__ ====== Ways to Support CTBBPodcast ====== Hop on the CTBB Discord at https://ctbb.show/discord! We also do Discord subs at $25, $10, and $5 - premium subscribers get access to private masterclasses, exploits, tools, scripts, un-redacted bug reports, etc. You can also find some hacker swag at https://ctbb.show/merch! Today's Sponsor - ThreatLocker User Store Today's Guest: https://x.com/djurado9 ====== This Week in Bug Bounty ====== Announcement of our upcoming live hacking event at Nullcon Berlin, taking place on September 4-5; Bug Bounty Village Speakers 2025; Talkie Pwnii Caido showcase; Caido Masterclass – From Setup to Exploits; Access Control vs Account Takeover: What Bug Bounty Hunters Need to Know ====== Resources ====== CVE-2025-49493: XML External Entity (XXE) Injection in Akamai CloudTest ====== Timestamps ====== (00:00:00) Introduction (00:05:56) Diego's ATO Bug (00:12:01) H1 Ambassador World Cup and work with XBOW (00:20:57) XBOW's CloudTest XXE Bug (00:49:59) Freedom, Hallucinations, & Validation (01:07:24) XBOW's Architecture (01:23:50) Humans in the Loop, Harnesses, and XBOW's Reception (01:44:21) Ambassador World Cup plans for the future
“There's a word for this brain rot, right? I think that's very real. There are studies coming out now that are showing that the more and more of our cognitive labor we offload to AI systems, the less creative we become, the less critical we become, and the less of our human faculties for reason we use. There's something sad about that, but there's also something dangerous about it because that leaves us very open to being manipulated. The surveillance capitalism kind of economy of extracting data from every possible moment of everyday life in order to extract value. The sensor systems that we're surrounded with are not simply passive devices that are recording us; they are increasingly becoming active sculptors of our experience of reality. Playing dirty. If we look at the entire history of the human experience, if you saw some text or you heard some spoken language, you could 100 percent reliably infer that there was a human who created that. Our experience of having that text or that image generated for us is very akin to the experience of a magic trick, and we sort of pre-subconsciously want to attribute some kind of intelligence to what's going on on the other side. I'm trying to think about these other media strategies, whether that's UFO photography, psychological operations, magic, or neuroscience, and take them seriously as contributing factors to the changing visual culture. With the collection at Pace Gallery (New York, Jun 26–Aug 15), it was really just to put those images together and show them. They're all photographed on film. A lot of them are photographed on instant film. They're not images that are made with AI; they're not images that are photoshopped. What I really wanted to get at was thinking about the ways in which what we see in an image is very often what we're predisposed to see. I am playing with these mechanics of perception and proposing that as a way of thinking about images now. At the core of the work is that sense of curiosity, that sense of joy, that sense of beauty, and that sense of learning. I've been fortunate to have all kinds of strange and interesting experiences, whether that's seeing weird things in the sky over secret military bases in the middle of the Nevada desert, going scuba diving and finding internet cables on the bottom of the ocean, or tracking spy satellites in the sky and being able to predict when they'll appear in a flash against the backdrop of stars. The world around us is extraordinary and embodied, right? It is not on screens, and I'm very privileged to have that be so much a part of my process.” Trevor Paglen is an artist whose work makes the invisible visible, whether he's photographing secret government sites from miles away or revealing the hidden infrastructures of mass surveillance. He's a geographer by training, and he combines investigative journalism with his art practice to explore the hidden power structures of our time, including military technology and artificial intelligence. His work has been exhibited in major museums around the world, and he's also an award-winning author. His exhibitions, like 'The Black Sites' and 'Limit Tele-photography,' have earned him critical acclaim, including a MacArthur Fellowship. He's recently opened a new exhibition at Pace Gallery in New York called Cardinals, which delves into the intriguing connections between UFO sightings, AI, and the spread of disinformation. Episode Website: www.creativeprocess.info/pod Instagram: @creativeprocesspodcast
In the late 1970s, a quiet suburban home in Enfield, London, belonging to the Hodgson family, got a new roommate—a poltergeist. The entity seemed to latch on to their 11-year-old daughter, Janet, who experienced levitation and violent attacks and had strange voices speak through her. The case was heavily investigated by paranormal researchers who agree an evil spirit was present, but some think there's another explanation regarding Janet's supernatural abilities. For a full list of sources, please visit: sosupernaturalpodcast.com/haunted-enfield-poltergeist So Supernatural is an audiochuck and Crime House production. Find us on social! Instagram: @sosupernatualpod Twitter: @_sosupernatural Facebook: /sosupernaturalpod
In this episode of the FutureCraft GTM Podcast, hosts Ken Roden and Erin Mills reunite with returning favorite Liza Adams to discuss the current state of AI adoption in marketing teams. Liza shares insights on why organizations are still struggling with the same human change management challenges from a year ago, despite significant advances in AI technology. The conversation covers practical frameworks for AI implementation, the power of digital twins, and Liza's approach to building hybrid human-AI marketing teams. The episode features Liza's live demonstration in our new Gladiator segment, where she transforms a dense marketing report into an interactive Jeopardy game using Claude Artifacts. Unpacking AI's Human Challenge Liza returns with a reality check: while AI tools have dramatically improved, the fundamental challenge remains human adoption and change management. She reveals how one marketing team successfully built a 45-person organization with 25 humans and 20 AI teammates, starting with simple custom GPTs and evolving into sophisticated cross-functional workflows. The Digital Twin Strategy: Liza demonstrates how creating AI versions of yourself and key executives can improve preparation, challenge thinking, and overcome unconscious bias while providing a safe learning environment for teams. The 80% Rule for Practical Implementation: Why "good enough" AI outputs that achieve 80-85% accuracy can transform productivity when combined with human oversight, as demonstrated by real-world examples like translation and localization workflows. Prompt Strategy Over Prompt Engineering: Liza explains why following prompt frameworks isn't enough—you need strategic thinking about what questions to ask and how to challenge AI outputs for better results. 00:00 Introduction and Balance Quote 00:22 Welcome Back to FutureCraft 01:28 Introducing Liza Adams 03:58 The Unchanged AI Adoption Challenge 06:30 Building Teams of 45 (25 Humans, 20 AI) 09:06 Digital Twin Framework and Implementation 17:34 The 80% Rule and Real ROI Examples 25:31 Prompt Strategy vs Prompt Engineering 26:02 Measuring AI Impact and ROI 28:21 Handling Hallucinations and Quality Control 32:50 Gladiator Segment: Live Jeopardy Game Creation 40:00 The Future of Marketing Jobs 47:49 Why Balance Beats EQ as the Critical Skill 51:09 Rapid Fire Questions and Wrap-Up Edited Transcript: Introduction: The Balance Between AI and Human Skills As AI democratizes IQ, EQ becomes increasingly important. Critical thinking and empathy are important, but I believe as marketers, balance is actually more important. Host Updates: Leveraging AI Workflows Ken Roden shares his approach to building better AI prompts by having full conversations with ChatGPT, exporting them to Word documents, then using that content to create more comprehensive prompts. This method resulted in more thorough market analysis with fewer edits required. Erin Mills discusses implementing agentic workflows using n8n to connect different APIs and build systems where AI tools communicate with each other. The key insight: break workflows down into steps rather than having one agent handle multiple complex tasks. Guest Introduction: Liza Adams on AI Adoption Challenges Liza Adams, the AI MarketBlazer, returns to discuss the current state of AI adoption in marketing teams. Despite significant technological advances, organizations still struggle with the same human change management challenges from a year ago. 
The Core Problem: Change Management Over Technology The main issue isn't about AI tools or innovation - teams can't simply be given ChatGPT, Claude, Gemini, and Perplexity and be expected to maximize their potential. Marketing teams are being handed tools while leaders expect employees to figure out implementation themselves. People need to see themselves in AI use cases that apply to their specific jobs. Joint learning sessions where teams share what works and what doesn't are essential. The focus has over-pivoted to "what's the right tool" when it should be on helping people understand, leverage, and make real impact with AI. The AI Adoption Plateau Many organizations face an AI adoption plateau where early adopters have already implemented AI, but a large group struggles with implementation. Companies attempting to "go fully agentic" or completely redo workflows in AI are taking on too much at once. Success Story: The 45-Person Hybrid Team Liza shares a case study of a marketing team with 45 members: 25 humans and 20 AI teammates that humans built, trained, and now manage. They started with simple custom GPTs, beginning with digital twins. Digital Twin Strategy for AI Implementation Digital twins are custom GPTs trained on frameworks, thinking patterns, publicly available content, and personality assessments like Myers-Briggs. These aren't designed to mimic humans but to learn about them and find blind spots, challenge thinking patterns, and overcome unconscious bias. For executive preparation, team members use digital twins of leadership to anticipate questions, identify gaps in presentations, and prepare responses before important meetings. The progression: Simple digital twins → Function-specific GPTs (pitch deck builders, content ideators, campaign analyzers) → Chained workflows across multiple departments (marketing, sales, customer success). Prompt Strategy vs. Prompt Engineering Following prompt frameworks (GRACE: Goals, Role, Action, Context, Examples) isn't enough if the underlying thinking is basic. AI magnifies existing thinking quality - good or bad. Example: Instead of asking "How do I reduce churn?" ask "Can you challenge my assumption that this is a churn problem? Could this data indicate an upsell opportunity instead?" This transforms churn problems into potential revenue opportunities through different strategic thinking. The 80% Rule for Practical AI Implementation AI outputs achieving 80-85% accuracy can transform productivity when combined with human oversight. Example: A team reduced translation and localization costs from tens of thousands of dollars monthly to $20/month using custom GPTs for eight languages, with human review for the final 15-20%. Measuring AI ROI: Three Strategic Approaches Align with Strategic Initiatives: Connect AI projects to existing company strategic initiatives that already have budgets, resources, and executive attention. Focus on Biggest Pain Points: Target areas where teams will invest resources to solve problems - excessive agency costs, overworked teams, or poor quality processes. Leverage Trailblazers: Identify curious team members already building AI solutions and scale their successful implementations. Handling AI Hallucinations and Quality Control AI models hallucinate 30-80% of the time when used as question-and-answer machines for factual queries. Hallucinations are less common with strategic questions, scenario analysis, and brainstorming. 
Prevention strategies: limit conversation length and dataset size to avoid context-window limitations; use multiple AI models to cross-check outputs; and implement confidence checking by asking the AI to rate its confidence (low/medium/high), explain its assumptions, and identify what additional information would increase its confidence. Live Demo: Claude Artifacts for Interactive Content. Liza demonstrates transforming the 2025 State of Marketing AI report into an interactive Jeopardy game using Claude Artifacts. The process involves uploading a PDF, providing specific prompts for game creation, and generating functional code without technical skills. This "vibe coding" approach allows users to describe desired outcomes and have AI build interactive tools, calculators, dashboards, and training materials. Future of Marketing Jobs and Skills. Emerging roles: AI guides, workflow orchestrators, human-AI team managers. Disappearing roles: language editors, basic researchers, repetitive design tasks. Transforming roles: most existing positions adapting to include AI collaboration. The critical skill for the future is balance: innovation with ethics, automation with human touch, personalization with transparency. Balance may be more important than emotional intelligence as AI democratizes cognitive capabilities. Key Takeaways: The Gladiator segment demonstrates how dense research reports can become engaging, interactive content without engineering resources. Making AI implementation fun helps teams stay balanced and avoid overwhelm. Success comes from starting with tiny AI wins rather than comprehensive strategies, focusing on human change management over tool selection, and building systems that augment rather than replace human creativity. Stay tuned for more insightful episodes from the FutureCraft podcast, where we continue to explore the evolving intersection of AI and GTM. Check out the full episode for in-depth discussions and much more. ----more---- To listen to the full episode and stay updated on future episodes, visit the FutureCraft GTM website. Disclaimer: This podcast is for informational and entertainment purposes only and should not be considered advice. The views and opinions expressed in this podcast are our own and do not represent those of any company or business we currently work for/with or have worked for/with in the past.
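As a rough illustration of two of the prevention habits described above (cross-checking the same question across multiple models, and asking each model to rate its own confidence and state its assumptions), here is a small sketch. It is not from the episode; `ask_model` is a placeholder for whichever LLM API your team uses, the model names are invented, and the exact-string answer comparison is deliberately crude; a human reviewer or a judge model would normally compare meaning rather than wording.

```python
# Illustrative sketch only: cross-check a question across several models and
# ask each one to rate its own confidence. `ask_model` is a placeholder for
# whatever LLM API you actually use; model names and the JSON schema are invented.

import json

CONFIDENCE_PROMPT = """Answer the question below. Then, on a final line, output a JSON
object with keys "confidence" ("low" | "medium" | "high"), "assumptions" (list of
strings), and "needed_to_raise_confidence" (string).

Question: {question}"""

def ask_model(model_name: str, prompt: str) -> str:
    # Placeholder: call your provider's chat/completions endpoint here and
    # return the raw text of the reply.
    raise NotImplementedError

def cross_check(question: str, models=("model-a", "model-b")) -> dict:
    replies = {}
    for name in models:
        raw = ask_model(name, CONFIDENCE_PROMPT.format(question=question))
        answer, _, meta_line = raw.rpartition("\n")       # split off the JSON line
        try:
            meta = json.loads(meta_line)
        except json.JSONDecodeError:
            meta = {"confidence": "unknown", "assumptions": []}
        replies[name] = {"answer": answer.strip(), **meta}

    answers = {r["answer"] for r in replies.values()}
    confidences = {r["confidence"] for r in replies.values()}
    # Route to a human whenever the models disagree or any of them is unsure.
    # (Exact-string comparison is crude; in practice compare meaning, not wording.)
    needs_review = len(answers) > 1 or bool(confidences & {"low", "unknown"})
    return {"replies": replies, "route_to_human": needs_review}
```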
Get it together, people! ----- It was a very bad week for lawyers and hallucinations. A federal judge had to withdraw an opinion with fake cites. One Biglaw firm fired a partner over an invented case, while another firm got tossed off a case over AI shenanigans. And the scribe of Ashurbanipal got mercilessly trolled by a judge pointing out that his fake AI cite apology included... another fake cite. Why does it seem like this is all getting worse? A Biglaw firm pushes its start date, leaving incoming associates in the lurch, and Alina Habba might be the U.S. Attorney for New Jersey. Or maybe not. Or maybe yes.
Have you ever felt like you're stuck in a cycle, trying hard to live right, but falling into the same old traps? In this powerful episode, we sit down with Karl Clauson, and trust us—you'll want to lean in for this one. Karl's story is raw, honest, and deeply hopeful. From hitting rock bottom in a haze of cocaine and alcohol to experiencing a radical transformation through Christ, his testimony reminds us that true healing starts with true surrender. But it doesn't stop at his story. Karl drops truth bombs about how so many of us are living with what he calls the “hallucination of salvation”—busy with spiritual activity but missing authentic transformation. Whether you've been in church your whole life or are just starting to explore your faith, this message will resonate with you. Together, we unpack what it truly means to “kill sin”—not through willpower, but by walking in humility, staying close to Jesus, and relying on the Holy Spirit. You'll learn why healing is impossible without addressing sin, how to identify what's truly holding you back, and practical ways to step into the freedom God has already made available. If you're craving spiritual breakthrough, tired of the shame cycle, or just ready to be real with God, this conversation will encourage and equip you. Healing begins where honesty and the Holy Spirit meet. Chapters:[00:00] Podcast Preview[01:16] Topic and Guest Introduction[04:05] Karl's Journey to Breakthrough[06:40] From Good Church Kid to Cocaine Crash[09:50] The Hallucination of Salvation[12:29] The Call to Spiritual Revolution[16:00] “Killing Sin” – What the Book is Really About[19:12] The Marriage Crisis That Changed Everything[21:26] Killing Sin is Only Possible by the Spirit[27:45] What Self-Control Means[29:53] Discerning Sin from Bad Habits[32:04] The Power of Rhema and Speaking the Word[35:21] Why We're Afraid to Linger with God[36:25] Encouragement for the One Who Feels Stuck[37:55] How to Get Karl's Book Resources mentioned:
Jump in with Carlos Juico and Gavin Ruta on episode 245 of Jumpers Jump. This episode we discuss: Hallucinations, Gavin's Final Destination moment, Fever Dreams, Sleep Paralysis theory, Frequency theories, Drinking theories, Tech changing lifestyle, Neuralink theories, Dark tech predictions, Severance, AI Crimes, Heist stories, AI superhero, The Old West, Changing your avatar, Evil case, Chicken fighting, Cenotes & Human sacrifices, Chichen Itza, Mayan temples theory, Humans & Gold theory, El Dorado, Frontal Lobe, Signs you're getting old, Real vs Fakes, Jinn mythical creatures, Ghost stories, Evil Eye stories, Faith & Religion, Monk vlogs, Sinners theory, Hidden Spells and much more! Follow the podcast: @JumpersPodcast Follow Carlos: @CarlosJuico Follow Gavin: @GavinRutaa Check out the podcast on YouTube: https://bit.ly/JumpersJumpYT Thanks to our Sponsors: Sign up for your one-dollar-per-month trial and start selling today at https://SHOPIFY.COM/jumpers Learn more about your ad choices. Visit podcastchoices.com/adchoices
A mysterious stranger appears on a doctor's doorstep, arriving aboard the 5:16 train—a train that supposedly no longer stops in town. Only he and his daughter have ever seen the man. Despite the train's schedule, he continues to stop in, even when the train doesn't. | #RetroRadio EP0472 Join the DARKNESS SYNDICATE: https://weirddarkness.com/syndicate CHAPTERS & TIME STAMPS (All Times Approximate)…00:00:00.000 = Show Open00:01:30.028 = CBS Radio Mystery Theater, “The Train Stops” (August 23, 1976)00:45:02.836 = The Croupier, “The Roman” (September 21, 1949) ***WD01:14:03.471 = Ripley's Believe It or Not, “Dancing Corpses” (1930) ***WD01:15:05.432 = Sam Spade, “2512353679 Caper” (December 15, 1950) ***WD01:43:10.233 = The Sealed Book, “King of the World” (March 25, 1945)02:13:06.805 = The Shadow, “Death On The Bridge” (March 03, 1940) ***WD (LQ)02:37:41.473 = Sleep No More, “Mr. Mergenthwirker's Lobblies” and “August Heat” (November 28, 1956) ***WD03:01:58.463 = BBC Radio 4 Spine Chillers, “The Boy Who Cried Wolf” (1984)03:15:53.075 = Stay Tuned for Terror, “Lizzie Borden Took an Axe” (July 12, 1945) ***WD03:28:50.563 = Strange Wills, “Emeralds Come High” (August 24, 1946) ***WD03:58:33.655 = Strange, “Greenwood Acres” (October 10, 1955) ***WD04:11:06.498 = Suspense, “King's Birthday” (August 28, 1943) ***WD04:39:13.512 = Tales of the Frightened, “Story of Wood” (December 18, 1957)04:43:33.169 = Tales of Tomorrow, “The Old Die Rich” (March 26, 1953) ***WD05:15:20.133 = The Saint, “Fight” (December 17, 1952) 05:44:15.872 = Theater Five, “I'm Sorry Mom” (October 09, 1964)06:04:11.491 = 2000 Plus, “The Giant Walks” (November 05, 1950) ***WD (LQ)06:32:05.818 = The Unexpected, “Find The Man” (1948)06:46:52.621 = Nick Carter, “Death After Dark” (February 19, 1944) ***WD07:16:08.317 = Dark Venture, “Ten Dollar Bill” (August 14, 1945)07:45:06.181 = The Weird Circle, “Queer Client” (October 15, 1944)08:12:29.361 = The Whistler, “Last of Devereaux” (July 23, 1944)08:41:54.325 = Witch's Tale, “Graveyard Mansion” (March 06, 1933) ***WD09:12:00.571 = X Minus One, “Sea Legs” (May 01, 1956) 09:40:13.075 = Show Close(ADU) = Air Date Unknown(LQ) = Low Quality***WD = Remastered, edited, or cleaned up by Weird Darkness to make the episode more listenable. Audio may not be pristine, but it will be better than the original file which may have been unusable or more difficult to hear without editing.Weird Darkness theme by Alibi Music Library ABOUT WEIRD DARKNESS: Weird Darkness is a true crime and paranormal podcast narrated by professional award-winning voice actor, Darren Marlar. Seven days per week, Weird Darkness focuses on all things strange and macabre such as haunted locations, unsolved mysteries, true ghost stories, supernatural manifestations, urban legends, unsolved or cold case murders, conspiracy theories, and more. On Thursdays, this scary stories podcast features horror fiction along with the occasional creepypasta. Weird Darkness has been named one of the “Best 20 Storytellers in Podcasting” by Podcast Business Journal. Listeners have described the show as a cross between “Coast to Coast” with Art Bell, “The Twilight Zone” with Rod Serling, “Unsolved Mysteries” with Robert Stack, and “In Search Of” with Leonard Nimoy.= = = = ="I have come into the world as a light, so that no one who believes in me should stay in darkness." — John 12:46= = = = =WeirdDarkness® is a registered trademark. 
Copyright ©2025, Weird Darkness.= = = = =#TrueCrime #Paranormal #ScienceFiction #OldTimeRadio #OTR #OTRHorror #ClassicRadioShows #HorrorRadioShows #VintageRadioDramas #SuspenseRadioClassics #1940sRadioHorror #OldRadioMysteryShows #CreepyOldRadioShows #TrueCrimeRadio #SupernaturalRadioPlays #GoldenAgeRadio #EerieRadioMysteries #MacabreOldTimeRadio #NostalgicThrillers #ClassicCrimePodcast #RetroHorrorPodcast #WeirdDarkness #WeirdDarknessPodcast #RetroRadio #ClassicRadioCUSTOM WEBPAGE: https://weirddarkness.com/WDRR0472
What if the most powerful psychedelic on Earth isn't just a hallucination—but a message? Neuroscientist Dr. Andrew Gallimore joins us to explore the mind-bending world of DMT, a molecule found in both nature and the human body that seems to reliably transport users to an alien-like realm filled with intelligent beings. In this episode, we dive into the science behind DMT, the mysterious commonalities in user experiences, and why Dr. Gallimore believes it is so important to try to understand. Is this a glimpse of the afterlife, a forgotten technology, or even contact with a creator? Strap in—this conversation will challenge everything you think you know about consciousness. Dr. Gallimore's book, Death by Astonishment: Confronting the Mystery of the World's Strangest Drug, is available wherever books are sold. Learn more about Dr. Andrew Gallimore at noonautics.org/personnel/dr-andrew-gallimore-dmt. Support the Show - Become a Patron! Help us grow and become a Patron today: https://www.patreon.com/smartpeoplepodcast Learn more about your ad choices. Visit megaphone.fm/adchoices