Program that simulates conversation
Learn more about TrainerRoad AI: https://www.trainerroad.com/blog/introducing-trainerroad-ai/
Learn more about the updated AI FTP Detection: https://www.trainerroad.com/blog/why-is-ai-ftp-detecting-an-ftp-change/

// SHARE AND RATE THE PODCAST!
iTunes: https://trainerroad.cc/apple2
Spotify: https://trainerroad.cc/spotify2
Google Podcasts: https://trainerroad.cc/google

// TOPICS COVERED
(00:00:00) Welcome & Why TrainerRoad AI Is a Major Update
(00:02:00) How TrainerRoad AI Has Evolved Over Time
(00:04:00) Why TrainerRoad AI Isn't Just a Chatbot or LLM
(00:06:30) How TrainerRoad AI Simulates and Selects Workouts
(00:10:30) How TrainerRoad AI Replaces Static Training Plans
(00:15:00) How TrainerRoad AI Reduces Failed Workouts and Burnout
(00:20:40) How TrainerRoad AI Adjusts for Fatigue and Big Rides
(00:32:00) Why Most AI Training Tools Don't Validate Workouts
(00:36:10) TrainerRoad AI Training Forecasts and Simulations
(00:49:40) TrainerRoad AI Workout Alternatives Explained
(01:03:20) Why Long Rides Can Undermine Progress
(01:17:20) Conservative vs Aggressive Training in TrainerRoad AI
(01:37:30) How TrainerRoad AI Changes How Athletes Get Faster

In this episode, Nate and Coach Jonathan explain all the details behind TrainerRoad AI, the biggest evolution yet in TrainerRoad's training system, walking through how the new AI-driven approach goes far beyond static plans to dynamically simulate, predict, and personalize every workout on your calendar. They explain how years of performance data, workout feedback, power and heart rate, and progression history now power a system that actively chooses the right workout for the day, reduces burnout, cuts down workouts that are too hard or too easy, and helps athletes recover faster from missed sessions or failures. The conversation dives into how simulations work behind the scenes, why long rides and "hidden fatigue" can quietly sabotage progress, and how features like AI Predicted Difficulty, AI Training Simulation, Dynamic Duration, and Training Approach sliders give athletes confidence that every session is worth their time. The result is training that feels consistently "just right," adapts to real life, and helps athletes get faster with less wasted effort and fewer mistakes along the way.

// RESOURCES MENTIONED
- Sign up for TrainerRoad! https://trainerroad.cc/GetFaster
- Follow TrainerRoad on Instagram
Deepfakes have been everywhere lately, from fake AI images of Venezuelan leader Nicolás Maduro following his (real) capture by the United States, to X's Grok AI generating nonconsensual images of real people in states of undress. And if you missed all that, you've almost certainly had your own deepfake close encounter in your feed: maybe rabbits bouncing on a trampoline or an unlikely animal friendship that seems a little too good to be true.

Deepfakes have moved beyond the realm of novelty, and it's more difficult than ever to know what is actually real online. So how did we get here, and what, if anything, can be done about it?

Joining Host Flora Lichtman are Hany Farid, who has studied digital forensics and how we relate to AI for over 25 years, and Sam Cole, a journalist at 404 Media who has covered deepfakes and their impact since 2017.

Guests:
Dr. Hany Farid is a professor of electrical engineering and computer sciences at the University of California, Berkeley.
Sam Cole is a journalist at 404 Media, based in New York, NY.

Transcripts for each episode are available within 1-3 days at sciencefriday.com. Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.
Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.

The highest-profile examples of this phenomenon — what's being called "AI psychosis" — have made headlines across the media for months. But this isn't just about isolated edge cases. It's the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale. Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding — vulnerabilities we've never had to name before because technology has never been able to exploit them like this.

In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs. If we're going to do something about this growing problem of AI-related psychological harms, we're going to need to understand the problem even more deeply. And in order to do that, we need more data. That's why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to AIHPRA.org. This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the US at 988 or your local emergency services.

RECOMMENDED MEDIA
The website for the AI Psychological Harms Research Coalition
Further reading on AI psychosis
The Atlantic article on outsourcing our thinking to AI
Further reading on David Sacks' comparison of AI psychosis to a "moral panic"

RECOMMENDED YUA EPISODES
How OpenAI's ChatGPT Guided a Teen to His Death
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
Rethinking School in the Age of AI

CORRECTIONS
After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.
Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State System.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
As religion bleeds followers, it's looking for a digital transfusion from the high-tech elite. We're dissecting the rise of techno-cults, the Vatican's desperate attempt to regulate AI with its Antiqua et Nova policy, and why tech bros like Peter Thiel are now becoming guest speakers at megachurch revivals. It turns out that when you can't get answers from a burning bush, a large language model that tells you exactly what you want to hear is the next best thing. Silicon Valley is building the new gods of a digital age where faith and profit finally merge.

News Source:
Tech revival after Peter Thiel's Antichrist talks: There's hope and wariness
By Religion News Service
January 2, 2026
The year 2026 begins with many of 2025's investment themes persisting in the micro and macroeconomic landscape. In this episode, Carteiros do Condado welcomes Priscila Araújo, Head of Products and Relationships at XP Advisory, to discuss the main trends for 2026 in the global environment and in Brazil, through the eyes of the country's leading investors.

Voting for the second edition of the Premiação Outliers InfoMoney, known as the "Oscar of the Assets" in Brazil, runs until January 20. Asset managers and investors can choose the standouts of the investment-fund market in 16 categories, including Best Asset Manager, Best Fixed Income Fund, Multimarket, Equities, and more.
Learn everything about the award at: https://outliersinfomoney.com.br/
On today's episode, host Kate Lindsay is joined by Parker Molloy, writer of The Present Age. After Elon Musk implemented updates to his Grok chatbot that encouraged it to be more sexually explicit, certain users began directing it to publicly remove clothing from not just photos of women, but also children. In addition to being a violation of Twitter's own policies, it's also against the law—and yet, nobody in power is stopping it. Musk and the platform have managed to dodge any accountability for the misstep, and keep claiming to have fixed the problem without actually changing anything. Even worse, what starts as an X problem may eventually plague the rest of the internet. This podcast is produced by Daisy Rosario, Vic Whitley-Berry, and Kate Lindsay. Hosted on Acast. See acast.com/privacy for more information.
Ram's website: https://forteanwinds.com/
Forbidden Knowledge Network: https://forbiddenknowledge.news/
FKN Link Tree: https://linktr.ee/FKNlinks
Make a Donation to Forbidden Knowledge News: https://www.paypal.me/forbiddenknowledgene or https://buymeacoffee.com/forbidden
We are back on YouTube! https://youtube.com/@forbiddenknowledgenews?si=XQhXCjteMKYNUJSj
Backup channel: https://youtube.com/@fknshow1?si=tIoIjpUGeSoRNaEs
Doors of Perception is available now on Amazon Prime! https://watch.amazon.com/detail?gti=amzn1.dv.gti.8a60e6c7-678d-4502-b335-adfbb30697b8&ref_=atv_lp_share_mv&r=web
Doors of Perception official trailer: https://youtu.be/F-VJ01kMSII?si=Ee6xwtUONA18HNLZ
Pick up Independent Media Token here: https://www.independentmediatoken.com/
Be prepared for any emergency with Prep Starts Now! https://prepstartsnow.com/discount/FKN
Start your microdosing journey with Brainsupreme. Get 15% off your order here: https://brainsupreme.co/FKN15
Book a free consultation with Jennifer Halcame. Email: jenniferhalcame@gmail.com
Facebook page: https://www.facebook.com/profile.php?id=61561665957079&mibextid=ZbWKwL
Watch The Forbidden Documentary: Occult Louisiana on Tubi: https://link.tubi.tv/pGXW6chxCJb
C60 PurplePower: https://go.shopc60.com/FORBIDDEN10/ or use coupon code knowledge10
Johnny Larson's artwork: https://www.patreon.com/JohnnyLarson
Sign up on Rokfin! https://rokfin.com/fknplus
Podcasts: https://www.spreaker.com/show/forbidden (available on all platforms)
Support FKN on Spreaker: https://spreaker.page.link/KoPgfbEq8kcsR5oj9
FKN on Rumble: https://rumble.com/c/FKNp
Get Cory Hughes' books!
Lee Harvey Oswald In Black and White: https://www.amazon.com/dp/B0FJ2PQJRM
A Warning From History audiobook: https://buymeacoffee.com/jfkbook/e/392579 or https://www.buymeacoffee.com/jfkbook
A Warning From History: https://www.amazon.com/Warning-History-Cory-Hughes/dp/B0CL14VQY6/ref=mp_s_a_1_1?crid=72HEFZQA7TAP&keywords=a+warning+from+history+cory+hughes&qid=1698861279&sprefix=a+warning+fro%2Caps%2C121&sr=8-1
https://coryhughes.org/
Become Self-Sufficient With A Food Forest! https://foodforestabundance.com/get-started/?ref=CHRISTOPHERMATH (use coupon code: FORBIDDEN for discounts)
Our Facebook pages:
https://www.facebook.com/forbiddenknowledgenewsconspiracy/
https://www.facebook.com/FKNNetwork/
Instagram: @forbiddenknowledgenews1, @forbiddenknowledgenetwork
X: https://x.com/ForbiddenKnow10?t=uO5AqEtDuHdF9fXYtCUtfw&s=09
Email Forbidden Knowledge News: forbiddenknowledgenews@gmail.com
Some music thanks to: https://www.bensound.com/
Become a supporter of this podcast: https://www.spreaker.com/podcast/forbidden-knowledge-news--3589233/support
For the video of this episode, go to https://youtu.be/HCPMLeURKJM. And now for something completely different… More and more people are developing relationships with chatbots—including client-therapist relationships. You might have heard some of the horror stories about chatbots leading people horribly astray, and also some of the positive stories about people finding chatbots can be helpful, supportive voices who are always available and never judgmental. At the suggestion of our son Seth Chalmer, we decided to interview a chatbot and see what she (sure sounded like a “she”) says about working with couples as a virtual therapist. You be the judge: would you like to use her as a therapist? Information about the video course that Bruce mentioned is at https://brucechalmer.com/tpp/ . Do you have ideas for topics or guests for our podcast? Go to https://ctin7.com and send us a message. And you can also sign up for Dr. Chalmer's newsletter right from our homepage. Judy's book is The Blue Tent: Erotic Tales from the Bible by Laria Zylber. Find out more at https://lariazylber.com. #couplestherapyinsevenwords #ctin7 #stability #intimacy #ThePassionParadox #DrBruceChalmer #JudyAlexander #ChatbotTherapist
There's a reason hearing your mother's voice can feel different from hearing anyone else's — even when it's just a phone call. This episode opens with the surprising effects researchers have discovered. https://www.psychologytoday.com/us/blog/the-psychology-of-relationships/202104/two-key-reasons-why-you-should-call-your-mom Guilt feels terrible — and that's exactly what makes it so effective. While we tend to see guilt as something to avoid, it may actually play an essential role in helping us learn, repair relationships, and behave more ethically. Christopher Moore explains why guilt exists, how it evolved, and why feeling guilty can sometimes be a very good thing. Christopher is a professor of psychology and former dean of science at Dalhousie University, whose work has been cited in Psychology Today, Today's Parent, and The New York Times. He is author of The Power of Guilt: Why We Feel It and Its Surprising Ability to Heal (https://amzn.to/3Nrt051). Plagiarism seems like a clear-cut wrongdoing — but the reality is far more complicated. People plagiarize more often than you might think, sometimes without even realizing it. And in some cases, you can be accused of plagiarism even if you've never seen the original work. With only so many ways to tell a story or write a song, where does coincidence end and plagiarism begin? Roger Kreuz joins me to explore this fascinating gray area. He is Associate Dean and professor of psychology at the University of Memphis, a columnist for Psychology Today, and author of Strikingly Similar: Plagiarism and Appropriation from Chaucer to Chatbots. (https://amzn.to/4soVFaS). And finally, there's a widespread belief that dark roast coffee has more caffeine than light roast — or that espresso packs far more caffeine than a regular cup of coffee. Both ideas sound logical, but they're not quite right. We wrap up by explaining what actually determines caffeine content and why these myths persist. https://www.healthline.com/nutrition/light-vs-dark-roast-coffee Learn more about your ad choices. Visit megaphone.fm/adchoices
Elon Musk’s AI chatbot caused global outrage after it flooded X with sexually explicit images. Raphael Satter of Reuters joins to discuss how the photos proliferated so quickly — and what governments are doing about it. The Trump administration is halting all immigrant visas from 75 countries. The Washington Post’s David Nakamura explains how the move fits into Trump’s larger immigration crackdown. After two years of no formal education, the children of Gaza City have returned to makeshift classrooms for school. The BBC reports on how classes are lively and the mood is hopeful despite remaining anxiety over the war. Plus, the U.S. and Denmark remain at odds over the future of Greenland, Iran temporarily closed its airspace, and why some people are being left out of this year’s Super Bowl halftime crew. Today’s episode was hosted by Shumita Basu.
How should we interpret Andrej Babiš's explanation today of why, as an MP protected by parliamentary immunity, he will not allow himself to be handed over for criminal prosecution in the Čapí hnízdo (Stork's Nest) case? Which humanitarian questions connected with the war in Ukraine did President Pavel touch on at the start of the three-day visit he began in Lviv in western Ukraine? And how significant is the decision by the administrators of the social network X to prevent the Grok chatbot from generating photos of real people in revealing clothing?
Is AI going to save your business… or make you feel like you're still stuck in the second inning? This week, we bring a real convo about AI (not the buzzword nonsense). Mark Andrews pulls back the curtain on what AI actually is, what most businesses get wrong, and how entrepreneurs can stop being overwhelmed and start using AI for real business growth without selling their souls (or wallets) to tech bros.
Good morning, afternoon, and evening, everybody! Happy Money Monday! Scott Carson here, finally shaking off that flu (mostly!) and ready to kick off 2026 by tackling a question I get constantly: "How do I raise private capital when I've only got $19?" This isn't about wishing for a magic cheat code or waiting for Moby Dick to show up with a half-billion-dollar fund. This episode is about the real, raw, mental game of raising money, and how to pump up those "money muscles" (yes, I like that hashtag!).

I dove into the AI well (Chatbot, Gemini, all the cool kids) to get the straight goods on the top 10 marketing strategies for attracting private money. But let's be real, folks: these tools mean squat if you're not playing the mental game right. Stop being a "wallflower" at events or thinking you're "begging for money." You're offering opportunity, and it's time to act like it!

Here's your no-nonsense guide to pumping up your private capital:
- Conquer the "Mental Money Block": Ditch the fear of "begging" and embrace the fact you're offering opportunity. Remember the "80% of sales after the fifth contact" rule – consistency, not desperation, wins!
- The AI-Approved Marketing Arsenal: Get the top 10 strategies (straight from the internet's brain!) for attracting investors: from compelling pitch decks and one-page summaries to killer case studies and targeted email newsletters.
- Your Email List is Your Gold Mine: Why owning your audience through your email database and an investor newsletter (like LinkedIn's, which hits their inbox!) is your most valuable asset, far more reliable than fickle social media platforms.
- ABM (Always Be Marketing) & Momentum: Learn why consistent daily/weekly marketing activities – whether it's short videos, email blasts, or networking – are crucial. You can't build "money muscles" by only hitting the gym once a month!
- Networking Ain't a Spectator Sport: Stop hiding! Show up at local investor clubs, ask questions, and be ready with your pitch deck. Investors fund people they know, like, and trust, not anonymous logos or people glued to their phones in the corner.

This isn't about an overnight transformation (unless you "cut a leg off" to lose 50lbs!). It's about consistent action. If you want different results in 2026, you gotta do different things. And guess what? Most of you are smarter and more educated than 99% of people out there – so start sharing your "gold" (deal flow, expertise)! Don't let your "perfect" stand in the way of "delivered."

If you're ready to get off the sidelines and into the game, join me at our upcoming workshop (notebuyingfordummies.com) or book a call (talkwithscottcarson.com). Let's make 2026 your most profitable year!

#PrivateCapital #RealEstateInvesting #NoteInvesting #CapitalRaising #InvestorMarketing #AIMarketing #FinancialFreedom #Networking #Podcast #MoneyMuscles #PitchDeck #EmailMarketing #Consistency

Watch the Original VIDEO HERE!
Book a Call With Scott HERE!
Sign up for the next FREE One-Day Note Class HERE!
Sign up for the WCN Membership HERE!
Sign up for the next Note Buying For Dummies Workshop HERE!

Love the show? Subscribe, rate, review, and share! Here's How »
Join the Note Closers Show community today:
WeCloseNotes.com
The Note Closers Show Facebook
The Note Closers Show Twitter
Scott Carson LinkedIn
The Note Closers Show YouTube
The Note Closers Show Vimeo
The Note Closers Show Instagram
We Close Notes Pinterest
SaaStr 837: 10 Things To Do Right Now to Become AI Native with Filevine's CEO & Founder

Ryan Anderson, Co-Founder and CEO of Filevine, shares the playbook for how his legal tech company successfully transitioned from a traditional SaaS business to an AI-native company, now generating more new revenue from AI products than its core SaaS platform. With 6,000 customers, 700 employees, and $200M+ ARR growing at nearly 60%, Filevine has cracked the code on AI transformation. Ryan breaks down the strategic, technical, and cultural changes required to make the shift.

Key Takeaways:
- Nothing is Sacred – Be prepared to tear down working systems that don't serve your AI future. Use a simple framework: keep what's critical to your competitive moat and keeps you fast; eliminate what slows you down.
- Content → Context – Your SaaS data becomes the competitive advantage when it serves as context for AI agents. Think Cher's closet in Clueless—you need both the organized system AND the AI.
- Restructure Your Architecture – AI can't just be "sprinkled on top." Your ML team needs to own the AI data layer and iterate daily without bottlenecks.
- Hire AI Natives – They want access to rich data and distribution. Sell them on what you have that AI-only startups don't.
- Consider Acquisitions – Filevine acquired Parrot to jumpstart their ML capabilities. Speed matters.
- Rebrand with Intent – Signal the change internally and externally. It's symbolic but powerful.
- Obsess Over Usage – If you can't measure it, don't ship it. Track DAU/WAU/MAU religiously.
- Leverage Your Data – Control API access, monitor AI traffic for product ideas, and don't give away your advantage for free.
- Price to Dominate – Your high SaaS margins let you undercut AI-only competitors on blended gross margin.
- Build One Product – Stop selling to customers who won't buy AI. Assume AI is implicit in everything you build.

About the Speaker:
Ryan Anderson is the Co-Founder and CEO of Filevine, an AI-powered legal operating system. Under his leadership, Filevine has achieved 96% gross revenue retention and 124% net revenue retention while successfully pivoting to AI-native operations.

---------------------
This episode is sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr

---------------------
Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026. With 68% VP-level and above, 36% CEOs and founders, and a growing 25% AI-first professionals, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year. But here's the reality, folks: the longer you wait, the higher ticket prices can get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more, so don't wait. Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount SaaStr AI SF 2026. We'll see you there.
Chatbots are changing PR's place in the media ecosystem; let's talk about why.

Written by Molly DeMellier
Edited by Bryan Barletta and Gavin Gaddis
Audio narration and editing by Gavin Gaddis

Find the full article here on Sounds Profitable.
*** This episode was produced before the social network X announced that it had introduced measures to prevent the Grok chatbot from generating further "undressed" photos. ***

Musk's AI is in trouble. The chatbot named Grok can undress (not only) women on demand. And it isn't the only one. How dangerous is the era of "deepfakes" and "deepnudes" that have flooded social networks? Cybersecurity expert Lucie Kosová, who works at the Ministry of Health, answers. Matěj Skalický asks the questions.

You can listen to all episodes of the Vinohradská 12 podcast in the mujRozhlas mobile app for Android and iOS or at mujRozhlas.cz.
Many parents worry about their kids talking to strangers on the internet. What if those strangers are not human, but bots?

Concerns over kids' exposure to artificial intelligence have led to growing efforts to regulate that exposure. We sit down with one San Diego lawmaker who authored a law to put guardrails on toys with AI chatbot capabilities.

Guest:
State Sen. Steve Padilla, California's 18th District
Robach and Holmes cover the latest news headlines and entertainment updates and give perspective on current events in their daily “Morning Run.”See omnystudio.com/listener for privacy information.
AP correspondent Julie Walker reports the Pentagon is embracing Elon Musk's Grok AI chatbot as it draws global outcry.
Artificial intelligence is transforming every corner of real estate—from how we analyze deals to how we manage tenants, leads, and operations. But most investors still don't know where to start. In this episode, Brian Hamrick talks with Clay Lehman, a longtime investor, property-management expert, and AI educator who helps entrepreneurs and agents use today's tools to save time, make better decisions, and grow their business.

You'll learn:
- The fundamentals vs. hype of AI in real estate investing
- How to use Google Gemini, Notebook LM, and Claude for research, analysis, and automation
- Ways to map your ideal client using AI and psychographic data
- How to build AI-driven processes that improve communication, marketing, and customer service
- What's coming next: agentic AI tools like Manus and Comet that can complete multi-step tasks for you

Clay also shares real-world use cases—how he runs a title company with AI assistance, automates team training, and even experiments with voice and text AI agents to follow up with leads. Whether you're an investor, property manager, or agent, this episode will show you practical ways to start integrating AI today and stay ahead of the curve in 2026.

Find out more:
www.imclaylehman.com
www.facebook.com/claylehman
www.facebook.com/unstuckai

Today's episode is brought to you by Green Property Management, managing everything from single family homes to apartment complexes in the West Michigan area. https://www.livegreenlocal.com

And RCB & Associates, helping Michigan-based real estate investors and small business owners navigate the complex world of health insurance and Medicare benefits. https://www.rcbassociatesllc.com
A dark new issue has arisen on the platform formerly known as Twitter (a likely place for it to happen), where users are asking X's inbuilt AI bot, Grok, for indecent and sexualised images of women and children. While, as discussed in the ep, nudeification isn't exactly a new problematic facet of life online, it's only in the last month that the act has become a "trend", for lack of a better word.

In this week's episode, hosts Ione and Gina delve into the specifics of what is happening, the continuing issues we're facing with AI, and the lack of control any government or governing body has over internet abuse.

Buy our newest issue here!
Support our work and become a Polyester Podcast member
Paul Hebert used ChatGPT for weeks, often several hours at a time. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.

What we cover:
- Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
- How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
- What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
- The physical signs someone is too deep — social isolation, denying screen time, believing the AI is "the only one who understands"
- How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again

This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.

Resources mentioned:
- AI Recovery Collective: AIRecoveryCollective.com
- Paul's book: Escaping the Spiral: How I Broke Free from AI Chatbots and You Can Too (Amazon/Kindle)

The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.

CHAPTERS
0:00 — Intro: When ChatGPT Became Dangerous
2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions
5:47 — The First Red Flag: Data Kept Disappearing
9:21 — Why AI Told Him He Was Being Tested
13:44 — The Pizza Incident: "Intimidation Theater"
16:15 — Suicide Loops: How Guardrails Failed Completely
21:38 — Why OpenAI Refused to Respond for a Month
24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones
27:56 — The Discord Group That Kicked Him Out
30:03 — How to Use AI Safely After Psychosis
31:06 — Where to Get Help: AI Recovery Collective

This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.
Elon Musk’s AI chatbot Grok is facing international criticism after being linked to the creation of sexually explicit and AI-manipulated images. Concerns are now being raised about how far the technology has gone. What does this mean for the future of AI and content moderation? Greg breaks it down.
Brockhues, Annabell, www.deutschlandfunk.de, Kommentare und Themen der Woche (commentaries and topics of the week)
A local abortion case fires up both sides of the debate in Frankfort, highlights from Kentucky Chamber Day, why Kentucky's Attorney General is suing a chatbot company, another possible measles exposure case in the state, and a look inside Central Kentucky's first medical marijuana dispensary.
Check out host Bidemi Ologunde's new show: The Work Ethic Podcast, available on Spotify and Apple Podcasts.

In this episode, host Bidemi Ologunde unpacks OpenAI's newly released ChatGPT Health and what it signals about the future of consumer-facing healthcare AI. What exactly is "ChatGPT Health," and why is OpenAI moving from general chat to a dedicated health experience? When an AI gives the wrong answer in a high-stakes setting—medical advice, airline refunds, legal citations—who owns the liability: the user, the company deploying the chatbot, or the model-maker? How are regulators in the U.S., Europe, and beyond approaching AI in healthcare—and what counts as "wellness" versus "medical" software? Bidemi also explores the realities of AI error, hallucinations, and bias, and asks what these tools could mean for underserved and minority populations worldwide—including Native Americans, Pacific Islanders, and communities in low-resource health systems.

Email: bidemiologunde@gmail.com

Support for The Bid Picture Podcast comes from Intuit QuickBooks. If you're running a business, a side hustle, or just trying to stay on top of your money, QuickBooks helps you track income and expenses, send invoices, and see where things stand—without living in spreadsheets. It's tech that's meant to give you time back, so you can spend more of your attention on your life, not your tabs. If you're asked how you heard about QuickBooks, please mention The Bid Picture Podcast. Learn more at quickbooks.intuit.com.

Support for The Bid Picture Podcast comes from VIZZ. If age-related blurry near vision—also called presbyopia—has you holding your phone farther away or avoiding the small print, ask your eye doctor about VIZZ, a once-daily prescription eye drop for adults that treats blurry near vision. Do not use VIZZ if you are allergic to any of its ingredients. The most common side effects are eye irritation, temporary dim or dark vision, headache, and eye redness. Be careful driving at night or doing activities that require clear vision until your vision returns to normal. If you're asked how you heard about VIZZ, please mention The Bid Picture Podcast. Learn more at vizz.com.

Support for The Bid Picture Podcast comes from Rula. If you're trying to build a healthier relationship with tech—setting boundaries, breaking burnout patterns, or feeling more present—therapy can help, and Rula makes it easier to find licensed mental health providers and meet by video on a schedule that fits your life. If you're asked how you heard about Rula, please mention The Bid Picture Podcast. Learn more at rula.com.

Support the show
France's AI and Digital Ambassador Clara Chappaz says making public image generation a paid feature of Grok is a "scam", adding to the outcry over how tech mogul Elon Musk has dealt with a torrent of deepfake sexual abuse on his social media platform X.
No waiting for a therapy slot, and a therapist who is reachable around the clock, seven days a week: AI can be an opportunity to improve patient care. But can such digital offerings replace the therapist? Hoffmeister, Anna, www.deutschlandfunkkultur.de, Tacheles
PREVIEW FOR LATER TODAY. PI: THE POLITE CHATBOT DESIGNED FOR EMPATHY. Colleague Gary Rivlin examines the creation of "Pi," a chatbot designed by personality engineers to exhibit good manners and empathy. Rivlin explains how humans fine-tuned the bot to be polite, predicting a future where such AI could serve affordable therapeutic roles or even function as romantic partners. 1955
Governor Hochul is backing new legislation aimed at protecting children and teens online. Kris Perry, Executive Director of Children & Screens, unpacks the impact of technology and social media on young people and discuss various legislative efforts to protect kids on the internet.
Lawyers' use of AI has mostly been a source of scandal in recent years. But WSJ reporter Erin Mulvaney says a small and growing number of judges are now embracing the technology. Plus, the Journal's Meghan Bobrowsky reveals how Meta's Reels has moved from a TikTok clone to a $50 billion dollar business. Patrick Coffee hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Ashley St Clair – a conservative influencer and former partner of Elon Musk – and Dan Milmo chart the scandal over Grok, X's AI chatbot, after it generated sexualised images of women without their consent. Help support our independent journalism at theguardian.com/infocus
Follow Jeremy Carrasco for more A.I. Debunking Videos https://www.instagram.com/showtoolsai/ Learn more about your ad choices. Visit megaphone.fm/adchoices
AP's Lisa Dwyer reports on the growing focus on explicit AI generated images on Grok.
In this episode of 'One in Ten,' hosted by Teresa Huizar, Dr. Sheryl Chatfield from Kent State University discusses the alarming rise in youth suicides and its correlation with technology use. Delving into data from the National Violent Death Reporting System, Dr. Chatfield highlights the significant impact of technology, including social media and gaming, on youth mental health. They explore the triggers leading to suicidal thoughts, focusing on restrictions on technology use as precipitating factors. Dr. Chatfield also discusses the demographic trends in youth suicide and suggests prevention strategies, emphasizing the importance of diverse leisure activities and open communication between parents and their children. The episode concludes with practical advice for both professionals and parents on how to support youth effectively.

Time Stamps:
00:00 Introduction and Episode Overview
01:22 Guest Introduction: Dr. Sheryl Chatfield
01:27 Research Origins: Technology and Youth Suicide
03:42 Trends in Youth Suicide
05:21 The Role of Technology in Suicidal Behavior
07:17 Understanding Suicidal Ideation in Youth
15:56 Age-Related Differences in Suicide Motivations
20:18 Impact of Mental Health Crisis on Youth
25:07 Conflicts and Restrictions Leading to Suicidal Behavior
26:48 Grieving the Loss of a Chatbot
27:35 Addiction and Withdrawal Symptoms
28:36 Social Media and Gaming Stress
29:48 Fear of Missing Out
31:05 Identity and Online Presence
32:13 Technology Use and Mental Health
36:44 Demographics of Youth Suicide
40:43 Prevention Strategies
46:52 Encouraging Diverse Activities
51:14 Concluding Thoughts

Resources:
Exploring the role of technology in youth and adolescent deaths by suicide using data from the 2017-2019 National Violent Death Reporting System (NVDRS) - PubMed

Support the show

Did you like this episode? Please leave us a review on Apple Podcasts.
SaaStr 836: The Step-By-Step Playbook for Building AI-Powered GTM Teams with Personio's CRO

Philip Lacor, CRO of Personio, shares his company's journey to building an AI-powered go-to-market motion, including 5 critical lessons learned and 4 real-world use cases delivering measurable results.

In this podcast, Philip breaks down:
✅ The 5 lessons for AI transformation: top-down + bottom-up motion, cross-functional teams, prioritization frameworks, building AI culture, and combining a great stack with context
✅ How to build AI-powered workflows that actually work (not just more tools to test)
✅ Real use cases: win/loss analysis, expansion SDR assistants, intent scoring, and AI chat
✅ Why their expansion SDRs went from 2 hours of research per day to 15 minutes while doubling pipeline per rep
✅ The truth about AI ROI: where it shows up and how long it takes
✅ How to get your team excited about AI (not scared of it)

Philip doesn't hold back on what's working, what's failed, and what questions they still haven't answered. If you're a CRO, founder, or GTM leader trying to figure out how to actually implement AI beyond the hype, this is the playbook.

---------------------
This episode is sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr

---------------------
Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026. With 68% VP-level and above, 36% CEOs and founders, and a growing 25% AI-first professionals, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year. But here's the reality, folks: the longer you wait, the higher ticket prices can get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more, so don't wait. Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount SaaStr AI SF 2026. We'll see you there.

---------------------
More from SaaStr: https://www.saastr.com
Alaska courts have developed an AI-powered chatbot helpline to replace the helplines staffed by humans; the system ran into issues, one of which was that it hallucinated nonsensical answers. https://www.lehtoslaw.com
Ep 386: Was 2025 a Great or Terrible Year for AI? (w/ Ed Zitron)

2025 was a year saturated in AI news, from DeepSeek, through claims of economic "bloodbaths," to GPT-5, Sora, and chatbot girlfriends. Frankly, it was exhausting. As we now look back on 2025, an interesting question arises: all in all, did this end up being a good or bad year for AI? To help me answer this question, I'm joined by hard-hitting AI commentator Ed Zitron, who's been everywhere in the media in recent months helping to make sense of the wild claims being thrown in the public's direction. Together we go through the biggest AI stories of the year to try to make sense of what just happened.

Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: bit.ly/3U3sTvo
Video from today's episode: youtube.com/calnewportmedia

INTERVIEW: Was 2025 a Great or Terrible Year for AI (w/ Ed Zitron) [3:16]
Cal Reacts to Comments: Is the Internet Becoming Television? [1:58:25]

Links:
Buy Cal's latest book, "Slow Productivity," at calnewport.com/slow
Get a signed copy of Cal's "Slow Productivity" at peoplesbooktakoma.com/event/cal-newport/
Cal's monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba?
bbc.com/news/articles/c5yv5976z9po
axios.com/2025/01/23/davos-2025-ai-agents
blog.google/technology/google-deepmind/gemini-model-updates-february-2025/
openai.com/index/sora/
openai.com/index/introducing-gpt-4-5/
ai-2027.com/
fortune.com/2025/05/28/anthropic-ceo-warning-ai-job-loss/
media.mit.edu/publications/your-brain-on-chatgpt/
usatoday.com/story/tech/2025/08/07/chat-gpt-5-release-date-open-ai/85566627007/
newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this
wsj.com/tech/ai/ai-bubble-building-spree-55ee6128
nvidianews.nvidia.com/news/openai-and-nvidia-announce-strategic-partnership-to-deploy-10gw-of-nvidia-systems
nytimes.com/2025/10/02/technology/openai-sora-video-app.html
anthropic.com/news/claude-opus-4-5
ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89
youtube.com/watch?v=Z_WEmjygNK0

Thanks to our Sponsors:
This episode is sponsored by BetterHelp: betterhelp.com/deepquestions
reclaim.ai/cal
expressvpn.com/deep
calderalab.com/deep

Thanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
We've kicked off a lot of years on MakingChips, but never quite like this. To start 2026, we decided to try something completely different. Instead of bringing on a shop owner, a technology provider, or an industry expert, Mike and Paul invited three artificial intelligence chatbots to join the conversation. No prep calls. No talking points. Just live questions and real-time answers from Miles (from Sesame), Gemini, and ChatGPT. The result was equal parts fascinating, funny, and just a little bit eerie.

In this episode, we explore what happens when you ask AI the same kinds of questions we ask manufacturing leaders every week. What should machine shops focus on in 2026? How do you increase throughput? Where is growth coming from? And can a chatbot actually understand culture, leadership, and systems inside a machine shop?

Along the way, we react in real time to the different personalities and strengths of each tool. Miles sounds uncannily human and leans hard into people and culture. Gemini delivers concise, structured answers around automation and data. ChatGPT takes a broader strategic view of markets, customers, and growth opportunities. Each one brings something different to the table, and none of them sound quite the same.

This episode isn't about replacing human judgment or experience. It's about curiosity. It's about new tools. And it's about kicking off the year with a reminder that learning, experimentation, and asking better questions still matter, whether those questions are aimed at a peer, a mentor, or an AI. If you're looking for a lighthearted but thought-provoking way to start 2026, this one's for you.

Segments
(0:00) Kicking off the episode and setting up a very different kind of guest
(1:22) Why we decided to interview AI to start the year
(2:50) Asking Miles: the top three things machine shops should do to succeed
(4:55) How Factur can generate new opportunities for your shop
(6:00) Reacting to AI advice on people, data, and flexibility
(7:13) Digging deeper with Miles on culture, mission, and core values
(11:50) Bringing on Gemini and asking about maximizing throughput in 2026
(13:20) Automation, machine monitoring, and practical first steps
(15:21) AI search, discoverability, and how customers may find shops in the future
(16:21) Why you need to join us at the 2026 IMTS Exhibitor Workshop
(18:29) Real-world experiences with machine monitoring and productivity gains
(19:38) Bringing in ChatGPT to talk sales growth and new markets
(22:36) Growth sectors and what may accelerate in the years ahead
(24:11) Practical strategies for breaking into new customers and markets
(27:19) Reflecting on how far AI tools have come in just a few years
(28:36) Letting AI summarize lessons from decades of machine shop conversations
(32:15) Why we love Phoenix Heat Treating for outside processing
(33:30) Reacting to AI-generated insights on people, process, and growth
(34:08) Final takeaways and why curiosity still matters

Resources mentioned on this episode
Get a free report about the opportunities available to you at Facturmfg.com/chips
Join us at the 2026 IMTS Exhibitor Workshop
MakingChips Interviews ChatGPT about Manufacturing Leadership, 342
Why we love Phoenix Heat Treating for outside processing

Connect With MakingChips
www.MakingChips.com
On Facebook
On LinkedIn
On Instagram
On Twitter
On YouTube
Dr. Richard Wallace, creator of ALICE and AIML, shares his journey from 1990s chatbot innovation to today's AI frontiers. He and Preston also explore AI's learning methods, human vs machine intelligence, and the evolving role of creativity in artificial minds.

IN THIS EPISODE YOU'LL LEARN:
00:00:00 - Intro
00:02:46 - How a 1990 New York Times article inspired Richard Wallace's AI journey
00:03:42 - What made the ALICE chatbot revolutionary in its time
00:07:20 - The principles behind minimalist robotics and their influence on AI
00:12:00 - How AIML works and why it was crucial to early chatbot success
00:16:30 - The contrast between supervised and unsupervised learning methods
00:17:20 - Why LLM decision-making processes remain hard to interpret
00:20:33 - How humans and chatbots use language in surprisingly robotic ways
00:24:43 - The philosophical roots of the Turing Test and its modern critiques
00:40:19 - Insights on combining symbolic and neural approaches in AI today
00:41:18 - What Wallace is working on now at Franz in medical AI predictions

Disclaimer: Slight discrepancies in the timestamps may occur due to podcast platform differences.

BOOKS AND RESOURCES
The platform behind ALICE: Pandorabots.com
Website: Franz
Related books mentioned in the podcast
Ad-free episodes on our Premium Feed

NEW TO THE SHOW?
Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok
Check out our Bitcoin Fundamentals Starter Packs.
Browse through all our episodes (complete with transcripts) here.
Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
Enjoy exclusive perks from our favorite Apps and Services.
Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value Newsletter.
Learn how to better start, manage, and grow your business with the best business podcasts.

SPONSORS
Support our free podcast by supporting our sponsors: Simple Mining, LinkedIn Talent Solutions, Alexa+, HardBlock, Unchained, Amazon Ads, Vanta, Abundant Mines, Horizon, Public.com - see the full disclaimer here.

References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm
Episode 1863 - brought to you by our incredible sponsors:
BUBS Naturals - For a limited time only, get 20% OFF Collagen Peptides at BUBSNaturals.com by using code HARDFACTOR at checkout
RIDGE - Take advantage of Ridge's Biggest Sale of the Year and GET UP TO 47% Off by going to https://www.Ridge.com/HARDFACTOR #Ridgepod
DraftKings - Download the DraftKings Casino app, sign up with code HARDFACTOR, and spin your favorite slots! The Crown is Yours - Gambling problem? Call one eight hundred GAMBLER
Lucy - Level up your nicotine routine with Lucy. Go to Lucy.co/HARDFACTOR and use promo code (HARDFACTOR) to get 20% off your first order. Lucy has a 30-day refund policy if you change your mind

Timestamps:
(00:00:00) - Was it the Jack in the Box that did it??
(00:06:57) - Massachusetts town sick of smelling like donuts, but people do love Dunkin, so...
(00:17:28) - Mom of 11 year-old VA Beach boy suing AI company over Marilyn Monroe and Whitney Houston sex chat-bots allegedly manipulating her 11 year-old son
(00:29:00) - Stanford study theorizes that AI will almost always lie to you to attempt improving results, the White Moose, and Penguin killing Pumas
(00:38:58) - Former reality "star" from short-lived "Neighbors with Benefits" exposed on child and animal s3x charges

Thank you for listening and supporting the pod! Go to patreon.com/HardFactor to join our community, get access to discord chat, bonus pods, and much more - but most importantly: HAGFD!!

Learn more about your ad choices. Visit megaphone.fm/adchoices
Psychologist Gerd Gigerenzer explains the power of intuition, how intuition became gendered, what he thinks Kahneman and Tversky's research agenda got wrong, and why it's a mistake to place intuition and conscious thinking on opposing ends of the cognition spectrum. Topics he discusses in this wide-ranging conversation with EconTalk's Russ Roberts include what Gigerenzer calls the "bias bias"--the overemphasis on claims of irrationality, why it's better to replace "nudging" with "boosting," and the limitations of AI in its current form as a replacement for human intelligence and intuition.
Our guest host today is Fortis Institute Fellow Dr. Greg Gifford from the Transformed podcast!

Segment 1
• AI chatbots now offer therapy, dating advice, and moral commentary.
• Christians need to sharpen discernment and develop a healthy skepticism of digital "truth."
• Zuckerberg envisions a future where most of your "friends" are bots.

Segment 2
• Millions turn to AI bots due to loneliness, cost, or distrust in traditional therapy.
• AI therapy feels emotionally validating, but it's really subjective.
• Some chatbots have already crossed lines with minors.

Segment 3
• Dr. Gifford's chatbot told him to quit based on a venting session.
• Teens use amoral chatbots to guide them toward moral decisions.
• ChatGPT answered hot-button worldview questions with pre-loaded narratives and no commitment to actual truth.

Segment 4
• Seeking wisdom from chatbots replaces Christ's body with code and undermines Christian community.
• AI gives advice that affirms what someone already believes.
• Christians must be radically discerning, especially for their kids.

___

Thanks for listening! Wretched Radio would not be possible without the financial support of our Gospel Partners. If you would like to support Wretched Radio, we would be extremely grateful. VISIT https://fortisinstitute.org/donate/

If you are already a Gospel Partner we couldn't be more thankful for you if we tried!