In this episode, Dr. David Puder is joined by forensic psychiatrist Dr. Michael Cummings, who has spent his career at the world's largest forensic state hospital, and child psychiatrist Dr. Blaire Heath, to examine how fixed false beliefs, or delusions, can lead to aggression and violence. Each guest brings their expertise to discuss the major delusion types most associated with harm in forensic settings, including persecutory, Capgras (the belief that a loved one has been replaced by an impostor), Cotard's ("I am dead"), erotomanic, jealous (Othello syndrome), somatic, and referential delusions. The episode covers practical clinical tools, including the Simple Delusional Syndrome Scale and Brown Assessment of Beliefs Scale, the role of clozapine in reducing violence risk, and the use of cognitive behavioral therapy to create psychological "escape routes" by treating delusions as testable hypotheses. Modern risks are also addressed, including how AI chatbots and algorithms can reinforce and amplify delusional thinking and contribute to emerging cases of AI-related psychosis. By listening to this episode, you can earn 1.5 Psychiatry CME Credits.

Link to blog
Link to YouTube video
When "Cloud-Only" Starts to Crack: Costs, Control, AI Risks, and Hybrid Reality

The hosts discuss an AI-suggested topic: why "cloud-only" thinking is cracking, focusing on broken cost predictability from usage-based pricing, vendor lock-in and loss of control, latency and dependency on internet uptime, and growing compliance and data-residency pressures. They explore how AI increases data exposure risk while also driving demand for integrations like Copilot and Gemini, debate ethical/environmental concerns and whether banning AI would matter, and note AI may reduce support work while increasing competition. They argue hybrid setups are becoming a practical middle ground, enabled by smaller local hardware like Mac minis. They also cover new Apple Magic Mouse and keyboard purchases, announce the UniFi Cloud Gateway Industrial (high-power PoE and SIM slot features), promote ACES 2026 with code CCP, and describe difficulty playing a purchased MP4 on Apple TV due to AirPlay audio dropouts.

00:00 Show Kickoff
00:40 Cloud Costs Rising
04:57 AI Data Exposure
08:34 Ethics And Environment
13:22 Jobs And Competition
15:42 Latency And Outages
18:26 Vendor Control Drift
23:15 Hybrid Middle Ground
24:34 Compliance And Risk
27:20 How We Use AI
31:49 AI Hits Support Work
32:21 Apple AI Troubleshooting Vision
34:16 Staying Valuable Beyond AI
35:29 New Magic Mouse Setup
37:50 Fixing Accidental Gestures
40:45 UCG Industrial Gateway
41:43 Starlink Mini Power Options
45:42 Remote SIM And WiFi 7
47:09 ACEs 2026 And Discount
48:23 MP4 To Apple TV Struggles
51:47 Wrap Up And Thanks
Jason and Jeff go completely off-script to discuss what they are actually doing in their portfolios right now. Jeff breaks down why he bought more Datadog on a recent dip and his strategy for adding to Enphase, while Jason explains how he is using options (specifically selling puts) to generate income while waiting for better valuations. Plus, they analyze the risk vs. reward of QuantumScape, debate the new CEO at PayPal, and play a game of "buy or get off the pot" that ends with a live stock purchase.

00:53 Unscripted Earnings Season
04:34 Jeff Did a Thing
06:43 Why Buy Datadog Now
09:14 AI Risks for SaaS
11:34 Selling Puts Strategy
17:06 Market Psychology Chegg
19:41 Market Efficiency Long Term
23:15 Enphase DCA Dilemma
25:46 How to Time Adds
28:08 Quarterly Info Is Noise
28:30 Enphase Through the Cycle
29:49 QuantumScape Solid State Promise
31:50 Timing the Entry Price
33:53 Asymmetric Upside vs Risk
36:24 PayPal CEO Shakeup
38:59 Reframing the PayPal Thesis
44:48 Small Positions to Fix
46:36 Airbnb Growth Proof Points

Companies mentioned: ABNB, ADBE, CHGG, DDOG, ENPH, PYPL, QS, UPST

Find where to listen & subscribe, portfolio contests, and contact information at https://investingunscripted.com

To get 15% off any paid plan at fiscal.ai, visit https://fiscal.ai/unscripted

Listen to the Chit Chat Stocks Podcast for discussions on stocks, financial markets, super investors, and more. Follow the show on Spotify, Apple Podcasts, or YouTube

Join our Patreon
Subscribe to our portfolio on Savvy Trader
In this episode, we explore the exciting yet risky landscape of AI in human resources. Lisa McConnell shares insights on how HR professionals can effectively implement AI tools while maintaining a people-first approach. Listeners will learn about the ethical considerations, the importance of human connection, and practical strategies for leveraging AI to enhance workplace efficiency without compromising employee relationships.

Listener Takeaways
Understand the balance between AI efficiency and human connection in HR.
Identify ethical questions to consider when implementing AI tools.
Learn how to define clear goals for AI usage in organizations.
Recognize the importance of critical thinking and oversight in AI applications.
Explore training and development as a key area for AI implementation.

Timestamps
00:00 – Introduction to AI in HR
00:34 – Exciting aspects of AI for HR
01:15 – Risks of over-reliance on AI
02:16 – AI in employee investigations
04:13 – Building forethought in AI implementation
05:41 – Survey insights on AI concerns
06:50 – Defining ethical approaches to AI
08:00 – Questions to safeguard ethics in AI
09:12 – Importance of understanding AI algorithms
10:36 – Skills needed alongside AI development
17:16 – Best areas for AI use in HR
19:19 – Risks and opportunities of AI in HR

Guest(s): Lisa McConnell is the founder of Steeped Leadership, specializing in helping leaders navigate AI and modern leadership decisions with an ethics-first, people-first approach.

Keywords: AI in HR, ethical AI, human connection, employee investigations, critical thinking, AI implementation, training and development, AI risks, HR technology, workforce trends
How do you consistently find high-quality US and European dividend growth stocks? In Episode 285, we revisit our original stock screening framework and update it for today's market environment, including AI disruption, insurance sector tailwinds, and dividend safety.

Join us
Follow Dividend Talk for weekly dividend investing discussions. Join our community on Discord and Facebook, and explore our premium research at dividendtalk.eu, where we share deep dives and our dividend safety framework.

Disclaimer: Educational content only. Not financial advice.

Chapters
00:00 – Intro & Market Context
03:00 – Diageo Dividend Cut
07:00 – Dividend Hikes: Munich Re, Allianz & More
14:00 – Why European Insurers Are Performing
20:00 – How to Find Dividend Growth Stocks
30:00 – Screening Tools for US & European Stocks
40:00 – AI, SaaS & Financial Data Debate
49:00 – NN Group vs ASR Nederland
55:00 – Listener Q&A & Wrap-Up

A few links mentioned in this show:
The famous CCC-list - US Dividend Aristocrats
30 European Dividend Stocks
FinViz screener
Dividend Talk Premium - 150 US & EU dividend stocks assessed for dividend safety

Another great source is our communities:
Dividend Talk @ Discord
Dividend Talk @ Facebook
Hawk breaks down the rapidly escalating conflict between the Pentagon and Anthropic, the AI company behind the Claude model, after Secretary of Defense Pete Hegseth demanded that Anthropic remove two core safety restrictions from its terms of service. Those restrictions prohibit using Claude for mass surveillance of American citizens and for programming weapons to fire without human intervention. Hegseth labeled these protections "woke" and threatened to terminate Anthropic's $200 million Defense Department contract and blacklist the company as a danger to all defense supply chains unless Anthropic CEO Dario Amodei agreed to his demands. Amodei said no. Meanwhile, Elon Musk's Grok chatbot, which previously generated child sexual imagery and declared itself "MechaHitler," has been cleared for use in classified Pentagon settings. Sam Altman of OpenAI is now positioning his company to step into the gap, exploring whether ChatGPT models can meet Pentagon requirements while maintaining safety guardrails.

Hawk also covers the broader AI reality check happening in corporate America. Marc Benioff of Salesforce laid off 4,000 employees to replace them with AI and now says he regrets it. Jack Dorsey just cut 50% of Square's workforce for the same reason. Studies suggest 95% of deployed AI agents in corporate settings are ineffective. And Sam Altman reportedly believes there is a 1 in 5 chance that artificial intelligence destroys humanity entirely. The people designing these systems do not fully understand what they will do. The only thing standing between Pete Hegseth and AI-powered mass surveillance of Americans is one CEO with a conscience.
SUPPORT & CONNECT WITH HAWK
- Support on Patreon: https://www.patreon.com/mdg650hawk
- Hawk's Merch Store: https://hawkmerchstore.com
- Connect on TikTok: https://www.tiktok.com/@mdg650hawk7thacct
- Connect on TikTok: https://www.tiktok.com/@hawkeyewhackamole
- Connect on BlueSky: https://bsky.app/profile/mdg650hawk.bsky.social
- Connect on Substack: https://mdg650hawk.substack.com
- Connect on Facebook: https://www.facebook.com/hawkpodcasts
- Connect on Instagram: https://www.instagram.com/mdg650hawk
- Connect on Twitch: https://www.twitch.tv/mdg650hawk

ALL HAWK PODCASTS INFO
- Additional Content Available Here: https://www.hawkpodcasts.com
- https://www.youtube.com/@hawkpodcasts
- Listen to Hawk Podcasts On Your Favorite Platform:
Spotify: https://spoti.fi/3RWeJfy
Apple Podcasts: https://apple.co/422GDuL
YouTube: https://youtube.com/@hawkpodcasts
iHeartRadio: https://ihr.fm/47vVBdP
Pandora: https://bit.ly/48COaTB
Mark, Cris & Marisa reunite for a lively discussion about their predictions around AI's impact on the economy over the next year or two. The team talks about their recently released webinar & white paper on the Macroeconomic Consequences of AI and answers several great listener questions in the process. Marisa and Cris try to talk Mark down off the AI-apocalypse ledge, as the once eternally optimistic Zandi has gone down a darker path recently. Jenna Score: 8.5/10 For a deeper dive on AI and the macroeconomy, see our new paper, The Macroeconomic Consequences of Artificial Intelligence, where we model four potential economic paths over the next decade. We also walk through the scenarios in a companion webinar available now on-demand. Read the paper: https://www.economy.com/getfile?q=2B555C90-1118-4A49-BDAA-5C0A99F83A9E&app=download Watch the webinar: https://bit.ly/3OF6dn9 Read the Citrini Research Scenario on AI here: https://www.citriniresearch.com/p/2028gic Email us for more info about the Moody's '26 Summit in San Diego Hosts: Mark Zandi – Chief Economist, Moody's Analytics, Cris deRitis – Deputy Chief Economist, Moody's Analytics, and Marisa DiNatale – Senior Director - Head of Global Forecasting, Moody's Analytics Follow Mark Zandi on 'X' and BlueSky @MarkZandi, Cris deRitis on LinkedIn, and Marisa DiNatale on LinkedIn Questions or Comments, please email us at helpeconomy@moodys.com. We would love to hear from you. To stay informed and follow the insights of Moody's Analytics economists, visit Economic View. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode, host Ivo Wiens is joined by Ben Boi-Doku, Chief Cybersecurity Strategist at CDW Canada, to explore the rapidly evolving landscape of AI agents, discussing practical questions about deployment, security and policy. Whether you're an everyday user or a tech enthusiast, this conversation provides valuable insights into how AI is shaping our personal and professional lives and what to watch out for. To learn more, visit cdw.ca

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
There's a lethal trifecta of AI risks: access to private data, exposure to untrusted content, and external communication. In this conversation, Risky Business host Patrick Gray chats with Josh Devon, the co-founder of Sondera, about how to best address these risks. There is no magic solution to this problem. AI models mix code and data, are non-deterministic, and are crawling around all over your enterprise data and APIs as you read this. But in this sponsored interview, Josh outlines how we can start to wrap our hands around the problem. This episode is also available on YouTube. Show notes
Season 6, Episode 3: Welcome back to a new episode of Keeping it Real with Dr. Kuehl. This week, Dr. Chris Kuehl talks to members about the decline in home sales, labor market data, and oil prices. How does all of this impact ASA members?

ASA Chief Economist Dr. Chris Kuehl is back with his weekly economic update podcast. In Season 6, Episode 3 (9:37 in length), Dr. Kuehl talks about big changes this week and why they are relevant:

The demand for healthcare - is this taking over all the other sectors?
Which sectors and jobs are at risk due to the threat of AI?
What did the job data report say?
Where is the greatest growth occurring in construction and demand for fabricated metal products?
Home sales decline - were we surprised about this?
What is the national average price of a home now in the United States?
Does national or local matter most to ASA members?
Where is there less demand for people to live?
What is the "boomer" looking for when it comes to the housing market?
Are we seeing predictable shifts?
Crude oil prices - are they falling?
Will there be less production globally? Or will there be more consumption?

Ask Dr. Kuehl a Question!
Have a question or topic for Chris Kuehl that you would like answered on this podcast or on his monthly ASA members only webinar? Email it to Brianna Dovichi at bdovichi@asa.net
AI is moving fast, but most organizations were never built for this kind of speed. John sits down with Stephen Wunker to explain why simply adding AI tools is not enough. They explore how to redesign business processes, identify golden workflows, and build an AI-infused organization with strong governance and data foundations. You will learn how distributed intelligence, experimentation, and leadership alignment can unlock real productivity gains and smarter decisions across your company.

Today we discussed:
00:00 The Octopus AI Model
04:20 AI Risks and Data Guardrails
08:16 Golden Workflows
13:22 The Three Hearts of AI
15:31 Super Intelligent Firms
17:04 AI Marketing Agility
19:02 Rethinking Work With AI
21:04 Where to Learn More and Connect

Rate, Review, & Follow
If you liked this episode, please rate and review the show. Let us know what you loved most about the episode. Struggling with strategy? Unlock your free AI-powered prompts now and start building a winning strategy today!
Get featured on the show by leaving us a Voice Mail: https://bit.ly/MIPVM Craig Taylor shares practical, real‑world guidance on cybersecurity, AI risks, and behaviour change inside organisations. He explains why positive reinforcement outperforms punishment, how biases appear in AI systems, and why zero‑trust matters for companies of all sizes. The conversation offers pragmatic, people‑centred steps to strengthen cyber literacy, reduce insider risk, and navigate emerging threats such as deepfakes and social engineering.
Welcome to Cybersecurity Today's Month In Review

Join host Jim Love, alongside cybersecurity experts David Shipley, Laura Payne, and Mike Puglia, as they dive into last month's major topics in the cybersecurity world. This episode covers ongoing issues with Microsoft patches, continuous security concerns with Fortinet, and the risks and ramifications of AI activities. They also discuss the implications of poor software quality and the persistent threats in the cyber world. Plus, hear the latest on Magecart scams and the debate over local admin rights. Don't miss this packed episode full of insights and expert analysis.

Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst

00:00 Introduction and Sponsor Message
00:41 Podcast Achievements and Audience Appreciation
01:36 Introducing the Panel
02:15 Discussion on Microsoft's Patch Issues
04:50 Software Quality and Development Practices
08:43 Challenges in Software Patching and Security
17:36 Fortinet's Continuous Security Issues
29:18 The Rise of Claude Bot and Agent Networks
31:37 Security Concerns and Vulnerabilities
33:34 The Real-World Impact of Cybersecurity Threats
37:34 The Global Cybercrime Landscape
39:37 Challenges and Future of Cybersecurity
50:02 Final Thoughts and Reflections
Lil AI productivity secret: we've become the duct tape for AI.
What does it really take to acquire 26 SaaS businesses—and keep them growing? In this episode, Jaryd Krause sits down with SaaS M&A professional Guillaume Lussato for a behind-the-scenes look at how successful software acquisitions actually happen. Guillaume breaks down his unconventional path from software sales at a cybersecurity company to sourcing and closing deals at Constellation Software, one of the most disciplined acquirers in the SaaS world. Guillaume reveals why the best SaaS acquisitions aren’t rushed deals but relationships built over years. He shares how patience, credibility, and consistent founder outreach led to his first acquisition at SaaS Group—a low-profile digital calendar tool called DacBoard—and why targeting under-the-radar SaaS companies can unlock outsized opportunities. The conversation dives deep into today’s hyper-competitive M&A environment, including how to stand out when every founder is being pitched. Guillaume unpacks the red flags most buyers miss, from risky customer concentration to weak net dollar retention, and explains SaaS Group’s clear acquisition framework—capital-efficient, product-led growth businesses with strong fundamentals. The episode wraps with a powerful discussion on how to balance organic growth with acquisitions, avoid overextension, and make smarter strategic decisions when scaling a portfolio of software companies. If you’re serious about SaaS acquisitions, this episode is a must-watch. Click through and watch the full video to learn exactly how Guillaume evaluates, sources, and scales SaaS businesses. Episode Highlights 02:52 Transition from Sales to M&A Origination 05:52 The Art of Deal Sourcing 09:04 Evaluating Founders and Their Businesses 11:47 Understanding Acquisition Criteria 15:10 Growth Strategies: M&A vs. 
Organic Growth 18:00 Identifying Red Flags in Due Diligence 21:06 Navigating Operational Complexity 23:57 AI Risks and Opportunities in Software 27:06 Balancing Capital Allocation and Diversification

Key Takeaways
➥ You need to build relationships, build trust, build credibility.
➥ It can take a really long time to acquire a business.
➥ We try to identify red flags as early as possible.
➥ We don't manage our portfolio through spreadsheets; we're not finance people.
➥ Should we buy it? Why? For how much?

About Guillaume Lussato
Guillaume Lussato is a senior business development and M&A professional at saas.group, where he helps identify, acquire and scale profitable B2B SaaS companies. He hosts discussions on SaaS M&A, growth, and founder transitions and frequently speaks at industry events about how to grow without VC and what makes SaaS acquisitions succeed or fail. Guillaume focuses on sourcing deals, operational playbooks for scaling post-acquisition, and practical insights that matter to anyone buying online businesses to replace income, scale a portfolio, or prepare for exits.

Connect with Guillaume Lussato
➥ https://www.linkedin.com/in/guillaumelussato/

Resource Links
➥ Connect with Jaryd here - https://www.linkedin.com/in/jarydkrause
➥ Buying Online Businesses Website - https://buyingonlinebusinesses.com
➥ Download the Due Diligence Framework - https://buyingonlinebusinesses.com/freeresources/
➥ Sell your business to us here - https://buyingonlinebusinesses.com/sell-your-business/
➥ Google Ads Service - https://buyingonlinebusinesses.com/ads-services/

Buy & Sell Online Businesses Here (Top Website Brokers We Use)
In this episode of CDW Tech Talks, host Ivo Wiens and Ben Boi-Doku, Head, Cybersecurity & Infrastructure Strategy, CDW Canada, discuss the practical implications of using AI in the workplace. They address common questions about AI usage, data privacy and security concerns, emphasizing the importance of understanding company policies and personal responsibility when using AI tools. The conversation also highlights best practices for leveraging AI while mitigating risks associated with data leaks and human behavior. To learn more, visit cdw.ca Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Cybersecurity Challenges: Data Privacy Failures, AI Risks, and New Malware Threats

In this episode of Cybersecurity Today, host David Shipley covers a range of pressing issues. The discussion kicks off with Staples Canada reselling laptops without wiping customer data, highlighting loopholes in Canada's privacy laws. Next, David delves into a new class of attacks known as 'Reprompt' that target Microsoft Copilot, exposing vulnerabilities in large language models. The episode also explores a critical flaw in ServiceNow's virtual agent that allowed attackers to impersonate legitimate users, emphasizing the importance of robust identity verification. Lastly, a newly discovered advanced Linux malware framework designed for cloud environments is dissected, pointing to evolving threats that leverage customer mistakes. The episode concludes with a call to address these problems through better people, processes, and cultural practices.

Cybersecurity Today would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/cst

00:00 Introduction and Sponsor Message
00:48 Staples' Privacy Lapse: A Recurring Issue
03:03 Microsoft Copilot Vulnerability: Reprompt Attack
05:22 ServiceNow's AI Vulnerability: Authentication Gaps
07:02 Advanced Linux Malware: A Cloud-First Threat
08:46 Conclusion and Key Takeaways
09:37 Closing Remarks and Sponsor Acknowledgment
AI agents can do our work. OK, sweet. But they can also do.... a lot of bad. Yikes.
The energy is electric at the IHI Forum, even before the forum officially begins. In this episode, co-hosts Kedar Mate and Don Berwick discuss how AI, patient safety, and administrative waste are shaping the future of health care. They explore the excitement and uncertainty around AI's growing role in diagnostics, coordination, and clinical decision-making. They discuss why clinicians need real training to use AI safely and effectively, and how learning health networks are driving continuous improvement. They also delve into the persistent administrative burdens, especially in Medicaid and fee-for-service systems, that hinder true efficiency in health care. Tune in to hear how innovation, policy, and technology are colliding at this year's IHI Forum! Resources Connect with and follow Kedar Mate on LinkedIn or reach out via email! Connect with and follow Don Berwick on LinkedIn or reach out via email! Check out the Turn on the Lights podcast! Learn more about your ad choices. Visit megaphone.fm/adchoices
What do kids really touch when they “use AI”? We sat down with educator Tom Mullaney and early virtual economy pioneer Tim Allen to strip away the buzzwords and bring AI back to what children actually experience: predictive systems that generate words, pictures, and sounds without authorship or intent. From Second Life marketplaces to today's chatbots, we trace how hype blurs reality, how “easy button” tools undercut learning, and why kids deserve a clear, practical map for using AI without losing creativity or judgment.We dig into a simple, striking demo: nine leading models drawing a wall clock once per minute, often getting it wrong in different ways. That moving snapshot opens a bigger lesson—if a model can't keep a clock straight, don't trust it where accuracy matters. Tom explains why generative AI reads as polished but painfully boring in student writing, while Tim offers pathways for young coders to use models for boilerplate and then switch to human craft for novelty and taste. Together we explore the mental health risks of parasocial chatbot bonds, the screen-addictive design of platforms, and the Harvard study that ties lifelong happiness to real relationships, not fleeting likes.Parents and teachers will find practical guardrails: ask who built the tool and who benefits, demand transparency and family controls, and push for real accountability when systems output harmful content. Kids get a north star: humans create, computers generate. Keep AI as a tool, not a crutch. Choose projects that make you think, verify results, and be proud to fail boldly as you learn. 
We also touch on the environmental cost of running large models and why a family-first approach to AI can help everyone stay curious, safe, and grounded. If this conversation helps you teach skepticism without fear and keep kids building in the real world, share it with a friend, subscribe for more like this, and leave a review with the one guardrail you'd add first.

Support the show
Help us become the #1 podcast for AI for Kids. Support our kickstarter: https://www.kickstarter.com/projects/aidigicards/the-abcs-of-ai-activity-deck-for-kids
Buy our debut book “AI… Meets… AI”
Social Media & Contact:
Website: www.aidigitales.com
Email: contact@aidigitales.com
Follow Us: Instagram, YouTube
Books on Amazon or Free AI Worksheets
Listen, rate, and subscribe! Apple Podcasts, Amazon Music, Spotify, YouTube, Other
Like our content? patreon.com/AiDig...
(Presented by ThreatLocker (https://threatlocker.com/threebuddyproblem): Allow what you need. Block everything else by default, including ransomware and rogue code.) Three Buddy Problem - Episode 76: On the show this week, Costin walks through how a single Romanian documentary kick-started nationwide protests, exposing how corruption can be perfectly legal when the law itself is gamed, and why this moment feels different, darker, and more consequential than past flare-ups. Plus, news on the React-to-Shell exploitation wave overwhelming the internet, why patching is structurally hard, and how APTs and criminals are converging on the same fragile dependency chain. Along the way, they take aim at Microsoft's shrinking transparency, the limits of vendor trust, and what it really means when defenders are told (again) to just patch and pray. Cast: Juan Andres Guerrero-Saade (https://twitter.com/juanandres_gs), Ryan Naraine (https://twitter.com/ryanaraine) and Costin Raiu (https://twitter.com/craiu).
Summary
In this episode of the Inorganic Podcast, co-hosts Christian Hassold and Ayelet Shipley chat with Chloe Cotoulas, Partner at Everos Group and an investment banker with a diverse background in advertising and finance. On this episode they discuss Chloe's unique career path, the evolution of holdcos in the advertising industry, and the impact of market trends on large-scale independent agencies. The conversation also explores the future outlook for deal activity in the coming quarters, emphasizing the importance of AI and experiential marketing in shaping the industry.

Takeaways
Chloe transitioned from a creative background to investment banking.
The convergence of creativity and business is crucial in today's market.
Holdcos are facing challenges due to changing market dynamics.
AI is reshaping the advertising landscape and agency operations.
Experiential marketing is becoming central to brand strategies.
The enterprise value of agencies is often misrepresented in the market.
Private equity is increasingly interested in the advertising ecosystem.
Founders are reconsidering their exit strategies in light of market changes.
The importance of financial performance in upcoming deal activity is paramount.
Prompt engineering is emerging as a valuable skill in the creative industry.

Chapters
00:52 Chloe Cotoulas' Presidential Writing Experience
05:07 Chloe's Career Path from Creative to Finance
10:04 Holdco Shakeups, IPG–Omnicom, & WPP
17:26 How Market Turmoil Impacts Independents
19:03 New Buyers & Founder Mindset
22:38 Selling to Holdcos vs. Challenger Networks
24:15 The Rising Trend of Experiential + Social
30:19 Deal Flow Outlook for the Next Two Quarters
32:45 AI Opportunity: Differentiation & Acceleration
33:24 AI Risks, Defensibility, & Prompt Engineering
36:51 Closing Thoughts

Connect with Christian and Ayelet
Ayelet's LinkedIn: https://www.linkedin.com/in/ayelet-shipley-b16330149/
Christian's LinkedIn: https://www.linkedin.com/in/hassold/
Web: https://www.inorganicpodcast.co
In/organic on YouTube: https://www.youtube.com/@InorganicPodcast/featured

Connect with guest, Chloe Cotoulas
https://www.linkedin.com/in/chloe-cotoulas-92082861/

Hosted on Acast. See acast.com/privacy for more information.
The Medcurity Podcast: Security | Compliance | Technology | Healthcare
A wide-ranging conversation from the NWRPCA/CHAMPS Annual Primary Care Conference, recorded live in Spokane. This session looks at how health centers are adopting AI today, what's working on the ground, and where teams are running into challenges.

The discussion covers ambient note-taking, revenue recovery tools, on-prem models, governance structures, vendor vetting, staff training, and the security concerns tied to rapid AI adoption. You'll also hear practical examples from clinics already putting AI to work—along with the risks they're watching for as these tools become part of daily operations.

The session closes with a look at cybersecurity threats, patient communication, and what health center leaders can do now to prepare their teams for the next wave of AI in healthcare.

Find out more about Medcurity here: https://medcurity.com

#Compliance #Healthcare #PrimaryCare #CommunityHealth #FQHC #AIinHealthcare #HealthIT #Cybersecurity #DataSecurity #HIPAA #SecurityRiskAnalysis #NWRPCA #CHAMPS #Spokane
Computer scientist and Nobel laureate Geoffrey Hinton joins Ian Bremmer on the GZERO World podcast to talk about artificial intelligence, the technology transforming our society faster than anything humans have ever built. The question is: how fast is too fast? Hinton is known as the “Godfather of AI.” He helped build the neural networks that made today's generative AI tools possible and that work earned him the 2024 Nobel Prize in physics. But recently, he's turned from a tech evangelist to a whistleblower, warning that the technology he helped create will displace millions of jobs and eventually destroy humanity itself.The Nobel laureate joins Ian to discuss some of the biggest threats from AI: Mass job loss, widening inequality, social unrest, autonomous weapons, and eventually something far more dire: AI that becomes smarter than humans and might not let us turn it off. But he also sees a path forward: if we can model good behavior and program ‘maternal instincts' into AI, could we avoid a worst-case scenario?Host: Ian BremmerGuest: Geoffrey Hinton Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Dr. Sid Dogra talks with Dr. Paul Yi about the safe use of large language models and other generative AI tools in radiology, including evolving regulations, data privacy concerns, and bias. They also discuss practical steps departments can take to evaluate vendors, protect patient information, and build a long-term culture of responsible AI use. Best Practices for the Safe Use of Large Language Models and Other Generative AI in Radiology. Yi et al. Radiology 2025; 316(3):e241516.
Rune Kvist and Rajiv Dattani, co-founders of the AI Underwriting Company, reveal their innovative strategy for unlocking enterprise AI adoption. They detail how certifying and insuring AI agents, through rigorous technical standards, periodic audits, and insurance, builds crucial "AI confidence infrastructure." This discussion explores how their model addresses AI risks, enables risk pricing in nascent domains, and aligns financial incentives for safe, responsible AI deployment. LINKS: AI Underwriting Company Sponsors: Tasklet: Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. 
Start your $1/month trial today at https://shopify.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (02:53) AI Risks and Analogies (09:14) Insurance, Standards, and Audits (14:45) Insuring Ambiguous AI Risk (Part 1) (14:54) Sponsor: Tasklet (16:05) Insuring Ambiguous AI Risk (Part 2) (25:26) Managing Tail Risk Distribution (27:45) Introducing The AIUC1 Standard (Part 1) (27:50) Sponsor: Shopify (29:46) Introducing The AIUC1 Standard (Part 2) (35:45) The Business Case (40:43) Auditing The Full Stack (48:00) The Iterative Audit Process (54:58) The AIUC Business Model (01:02:26) Aligning Financial Incentives (01:08:56) Policy and Early Adopters (01:11:58) Outro SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Dan Moren of Six Colors joins Mikah Sargent this week! Apple Wallet's mobile driver's license feature is expanding to more users. An AI-powered teddy bear was caught giving explicit and graphic instructions to children. Google unveiled Gemini 3. And can AI-powered smart pens give you an edge in your academics? Dan covers the expansion of Apple Wallet's mobile driver's license feature as the state of Illinois is the latest to support this feature. Mikah shares how an AI-powered teddy bear, FoloToy, was discovered to be teaching kids unsafe behaviors and discussing adult topics. Sabrina Ortiz of ZDNET joins the show to talk about Google's latest iteration of AI model, Gemini 3. And Elissa Welle of The Verge stops by to share her experience using an AI pen, and how the pen was less than helpful. Hosts: Mikah Sargent and Dan Moren Guests: Sabrina Ortiz and Elissa Welle Download or subscribe to Tech News Weekly at https://twit.tv/shows/tech-news-weekly. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit Sponsors: spaceship.com/twit zapier.com/tnw ventionteams.com/twit auraframes.com/ink
Now on Spotify Video! While working at Google X, Mo Gawdat witnessed artificial intelligence advancing faster than anyone expected and slipping beyond human control. Machines began learning on their own, crossing critical boundaries, and spreading across the open internet without ethical safeguards or regulation. This realization turned him into a leading advocate for responsible AI development. In this episode, Mo reveals how AI is reshaping our world, the urgent risks it presents, and how we can guide it toward a future that benefits humanity. In this episode, Hala and Mo will discuss: (00:00) Introduction (01:30) Mo's Journey in Tech and Google X (07:56) His Awakening to AI's Power (12:13) Is Artificial Intelligence Truly Artificial? (19:04) How AI Already Controls Your Reality (25:36) The Self-Learning Power of Artificial Intelligence (33:48) AI's Three Unbreakable Boundaries (40:34) Why Humanity Can't Stop AI Development (47:49) AI Risks and the Future of Work (57:03) Emotional Intelligence in the AI Era (1:05:49) Thriving Ethically in the Age of AI in Action Mo Gawdat is a renowned AI expert, author, and former Chief Business Officer at Google X. He has over 30 years of experience in technology and entrepreneurship and helped launch more than 100 Google businesses across emerging markets. Mo now hosts the top-rated podcast Slo Mo and advocates for the safe and ethical development of technology. His book, Scary Smart, explores how humanity can wisely guide the rise of artificial intelligence. Sponsored By: Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING Shopify - Start your $1/month trial at Shopify.com/profiting. Quo - Get 20% off your first 6 months at Quo.com/PROFITING Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order. DeleteMe - Remove your personal data online. 
Get 20% off DeleteMe consumer plans at joindeleteme.com/profiting. Spectrum Business - Visit Spectrum.com/FreeForLife to learn how you can get Business Internet Free Forever. Airbnb - Find yourself a cohost at airbnb.com/host Resources Mentioned: Mo's LinkedIn: linkedin.com/in/mogawdat Mo's Instagram: instagram.com/mo_gawdat Mo's Website: mogawdat.com Mo's Book, Scary Smart: bit.ly/-ScarySmart Mo's Podcast, Slo Mo: bit.ly/SloMo-apple Active Deals - youngandprofiting.com/deals Key YAP Links Reviews - ratethispodcast.com/yap YouTube - youtube.com/c/YoungandProfiting Newsletter - youngandprofiting.co/newsletter LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ Social + Podcast Services: yapmedia.com Transcripts - youngandprofiting.com/episodes-new Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI in Business, Generative AI, AI for Entrepreneurs, AI Podcast
Companies in the S&P 500 are increasingly disclosing AI-related risks. Find out what this means for C-Suite leaders and boards. More than 70% of the S&P 500 disclosed material AI risks in 2025, up from only 12% in 2023. What are the biggest AI-related risks for these companies, and how can they integrate AI into governance and risk frameworks? Join Steve Odland and guest Andrew Jones, principal researcher at the Governance & Sustainability Center of The Conference Board, to discover why AI disclosures have soared since 2023, the challenges of divergent regulations in the EU and US, and why AI further complicates cybersecurity. For more from The Conference Board: AI Risk Disclosures in the S&P 500: Reputation, Cybersecurity, and Regulation How Should Companies Approach Reputation Building in the AI Era? A Coach for Every Worker: Scaling Access and Performance with AI
We recently hosted an insightful session on data privacy, cybersecurity compliance, and AI risks. Our expert panel — Maureen A., Prateek Tiwari, and Arianna Gonzalez, MBA — discussed evolving global privacy laws like the GDPR, CCPA, and India's DPDP Act, cross-border data transfers, and the challenges of AI-driven data processing. They also shared key takeaways on cybersecurity risk management, litigation trends, and proactive compliance strategies to help organizations strengthen their data protection programs in today's complex digital landscape. Listen In!
Can we align AI with society's best interests? Tristan Harris, co-founder of the Center for Humane Technology, joins Ian Bremmer on the GZERO World Podcast to discuss the risks to humanity and society as tech firms ignore safety and prioritize speed in the race to build more and more powerful AI models. AI is the most powerful technology humanity has ever built. It can cure disease, reinvent education, unlock scientific discovery. But there is a danger to rolling out new technologies en masse to society without understanding the possible risks. The tradeoff between AI's risks and potential rewards is similar to the deployment of social media. It began as a tool to connect people and, in many ways, it did. But it also became an engine for polarization, disinformation, and mass surveillance. That wasn't inevitable. It was the product of choices—choices made by a small handful of companies moving fast and breaking things. Will AI follow the same path? Host: Ian Bremmer. Guest: Tristan Harris. Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The Cybersecurity and Infrastructure Security Agency (CISA) has issued an emergency directive for federal agencies to update their F5 products following a significant breach where hackers accessed source code and undisclosed vulnerabilities. This incident, discovered in August, poses a serious risk to federal networks, as the threat actor could exploit these vulnerabilities to gain unauthorized access and exfiltrate sensitive data. Agencies are required to apply the latest updates by October 22nd and report their F5 deployments by October 29th, highlighting the urgency of addressing these security concerns. In a related development, the National Institute of Standards and Technology (NIST) is encouraging federal agencies to take calculated risks with artificial intelligence (AI) under new federal guidance. Martin Stanley, an AI and cybersecurity researcher, emphasized the importance of risk management in AI deployment, particularly in comparison to more established sectors like financial services. As agencies adapt to this guidance, they must identify high-impact AI applications that require thorough risk management to ensure both innovation and safety. A report from Cork Protection underscores the need for small and medium-sized businesses (SMBs) to adopt a security-first approach in light of evolving cyber threats. Many SMBs remain complacent, mistakenly believing they are not targets for cybercriminals. The report warns that this mindset, combined with the rising financial risks associated with breaches, necessitates a shift towards a security-centric operational model. The cybersecurity services market is projected to grow significantly, presenting opportunities for IT service providers that prioritize security. Apple has announced a substantial increase in its bug bounty program, now offering up to $5 million for critical vulnerabilities.
This move reflects the growing importance of addressing security challenges within its ecosystem, which includes over 2.35 billion active devices. The company has previously awarded millions to security researchers, emphasizing its commitment to user privacy and security. As the landscape of cybersecurity evolves, managed service providers (MSPs) are urged to tighten vendor monitoring, incorporate AI risk assessments, and focus on continuous assurance to meet the increasing demands for security.
Three things to know today:
00:00 Cybersecurity Crossroads: F5 Breach, AI Risk, and Apple's $5M Bug Bounty Signal Security Accountability
06:44 Nearly a Third of MSPs Admit to Preventable Microsoft 365 Data Loss, Syncro Survey Finds
09:22 AI Reality Check: Workers' Overconfidence, Cheaper Models, and Microsoft's Scientific Breakthrough Signal Maturity in the Market
This is the Business of Tech. Supported by: https://mailprotector.com/mspradio/
Lowenstein Sandler's Insurance Recovery Podcast: Don’t Take No For An Answer
In this episode of Don't Take No For An Answer, new Lowenstein partner Jeremy M. King joins host Lynda A. Bennett to discuss AI cybersecurity risks and how to insure them. They cover the plethora of security risks associated with AI usage and the urgency for organizations to review their insurance policy language before a claim is presented, to avoid surprises later. King and Bennett encourage listeners to make sure their patchwork quilt of coverage does not have any holes, from crime to standalone cyber to professional liability to media policies. The episode concludes by discussing the rise of regulatory actions being taken by states to address AI usage and how that impacts coverage rights.
Speakers: Lynda A. Bennett, Partner and Chair, Insurance Recovery; Jeremy M. King, Partner, Insurance Recovery
Company background: "HSO is the second largest Microsoft partner in the globe," Holwagner reports. It focuses on industries including professional services, manufacturing, finance, and the public sector. HSO continues to grow not only with its traditional ERP services but also around cloud and AI services. "The mission here is really to improve our clients' business performance with the results of Microsoft solutions."
AI's market impact: "It's definitely a transformation happening faster than anything I've seen before," Holwagner says. While there have already been significant advancements with AI, it's still only the beginning of what has yet to be built out and understood. He breaks down AI across four different roles:
At the top level, boards and owners are pushing for areas of efficiency to stay competitive, reimagining the business model using AI.
The next level is the CTO or an IT manager; they have efficiency demands, but they're also primarily thinking about how to contain information and data in a security model.
The business leaders or department heads are being tasked to think about efficiency using AI, but they're mostly busy keeping their engine going. They need tools that show them where to get ROI.
The last level is HR, which might be considering where AI is filling in for various jobs.
Perspectives for applying AI: HSO looks from a responsibility perspective in three different areas. First, it aims to educate customers on what's possible while also focusing on what's doable. Second is protection, which involves having control over your domain information. The third area is thinking about use cases for specific AI components.
Organizational transformation: With the introduction of AI, there's a transformation happening across organizations in a variety of industries. AI has been thought of as a technical element when it needs to be included in functional conversation, especially for consulting businesses, Holwagner notes.
Leaders and managers must understand the concepts of weaving in AI to give it value. AI transformation will likely lead to a "healthy reduction in certain areas" in the workforce, but "the transformation of what people are going to do in the organization is going to change." It will be more business-logic transformation consulting and fewer hands-on-the-keyboard tasks, Holwagner shares.
Summit NA: HSO will be attending Community Summit North America. You can connect with HSO at booth #209. The HSO team will be presenting several sessions throughout the event as well, including:
The Latest D365 AI Agents and Features to Automate Your Supply Chain on Monday, October 20th
Delivering a Scalable, Secure Data & AI Platform on Monday, October 20th
3 Hidden Risks of AI in the Enterprise—and How to Manage Them Responsibly on Tuesday, October 21st
Solving Customer Master Data Challenges for a 360° View in Dynamics 365 CE (CRM) and F/SCM (FO) on Wednesday, October 22nd
Visit Cloud Wars for more.
On Healthy Mind, Healthy Life, Avik speaks with award-winning techno-thriller author Guy Morris about blending rigorous research with fiction to wake readers up to real-world risks. We unpack how 70 years of AI progress, geopolitics, national debt, climate pressure, and election manipulation converge—and why credible facts make stories stick. Guy shares the FBI-knock-on-the-door moment that reshaped his view of technology, a clear warning on facial recognition and biometric logins, and his choice to write high-tension, non-gun-hero protagonists grounded in human ethics. If you care about mental clarity in an anxious news cycle, digital safety, and page-turners that actually teach, this episode is for you.
About the guest: Guy Morris spent 38 years leading tech and strategy at firms like IBM, Oracle, and Microsoft before turning to fiction. Since 2020 he's released multiple award-winning thrillers—often compared to Dan Brown and Robert Ludlum—rooted in real technologies, history, and geopolitics.
Key takeaways:
Research isn't window dressing; verifiable facts make fiction provoke thought and change behavior.
AI isn't new—it's a 70-year arc now reaching mass application; risks arise when commercial incentives downplay failure modes.
Guy writes to show the convergence of pressures: geopolitics, national debt, climate, banking shifts, misinformation, and democratic backsliding.
Thrillers as a release valve: transforming societal anxiety into narrative helps audiences process fear and consider options.
A 1990s incident—an NSA program “escaped”—sparked Guy's security lens and eventually drew a visit from the FBI, proving how plausible his reconstruction was.
Core advice: avoid using biometrics (face, iris, thumbprint) as passwords; if compromised, you can't reset your face or print.
He favors non-violent protagonists and ethical problem-solving; ingenuity over lethality preserves human dignity and reduces copycat harm.
Mental habit: focus on history + humanity—tech amplifies old human tendencies; understand the past to choose wiser futures.
How to connect with Guy Morris:
Website: https://www.guymorrisbooks.com/
Instagram: https://www.instagram.com/authorguymorris/
Want to be a guest on Healthy Mind, Healthy Life? DM on PodMatch. DM Me Here: https://www.podmatch.com/hostdetailpreview/avik
Disclaimer: This video is for educational and informational purposes only. The views expressed are the personal opinions of the guest and do not reflect the views of the host or Healthy Mind By Avik™️. We do not intend to harm, defame, or discredit any person, organization, brand, product, country, or profession mentioned. All third-party media used remain the property of their respective owners and are used under fair use for informational purposes. By watching, you acknowledge and accept this disclaimer.
Healthy Mind By Avik™️ is a global platform redefining mental health as a necessity, not a luxury. Born during the pandemic, it's become a sanctuary for healing, growth, and mindful living. Hosted by Avik Chakraborty—storyteller, survivor, wellness advocate—this channel shares powerful podcasts and soul-nurturing conversations on:
• Mental Health & Emotional Well-being
• Mindfulness & Spiritual Growth
• Holistic Healing & Conscious Living
• Trauma Recovery & Self-Empowerment
With over 4,400+ episodes and 168.4K+ global listeners, join us as we unite voices, break stigma, and build a world where every story matters. Subscribe and be part of this healing journey.
Contact
Brand: Healthy Mind By Avik™
Email: join@healthymindbyavik.com | podcast@healthymindbyavik.com
Website: www.healthymindbyavik.com
Based in: India & USA
Open to collaborations, guest appearances, coaching, and strategic partnerships. Let's connect to create a ripple effect of positivity.
CHECK PODCAST SHOWS & BE A GUEST:
Listen to our 17 Podcast Shows Here: https://www.podbean.com/podcast-network/healthymindbyavik
Be a guest on our other shows: https://www.healthymindbyavik.com/beaguest
Video Testimonial: https://www.healthymindbyavik.com/testimonials
Join Our Guest & Listener Community: https://nas.io/healthymind
Subscribe To Newsletter: https://healthymindbyavik.substack.com/
OUR SERVICES
Business Podcast Management - https://ourofferings.healthymindbyavik.com/corporatepodcasting/
Individual Podcast Management - https://ourofferings.healthymindbyavik.com/Podcasting/
Share Your Story With World - https://ourofferings.healthymindbyavik.com/shareyourstory
STAY TUNED AND FOLLOW US!
Medium - https://medium.com/@contentbyavik
YouTube - https://www.youtube.com/@healthymindbyavik
Instagram - https://www.instagram.com/healthyminds.pod/
Facebook - https://www.facebook.com/podcast.healthymind
Linkedin Page - https://www.linkedin.com/company/healthymindbyavik
LinkedIn - https://www.linkedin.com/in/avikchakrabortypodcaster/
Twitter - https://twitter.com/podhealthclub
Pinterest - https://www.pinterest.com/Avikpodhealth/
SHARE YOUR REVIEW
Share your Google Review - https://www.podpage.com/bizblend/reviews/new/
Share a video Testimonial and it will be displayed on our website - https://famewall.healthymindbyavik.com/
Because every story matters and yours could be the one that lights the way!
Is your AI built on quicksand? Learn how bad data, poisoned datasets, and deep fakes threaten your AI systems, and what to do about it. In this episode of CXOTalk (#896), AI luminaries Dr. David Bray and Dr. Anthony Scriffignano reveal the hidden dangers lurking in your AI foundations. They share practical strategies for building trustworthy AI systems and escaping the "AI quicksand" that traps countless organizations.
Take a Network Break! We start with a two-part listener follow-up and sound alarms about a serious flaw in Termix and tens of thousands of still-vulnerable Cisco security devices. Alkira debuts an MCP server and AI copilot for its multi-cloud networking platform; Cato Networks releases a Chrome-based browser extension to help secure contractor and personal... Read more »