Our 235th episode with a summary and discussion of last week's big AI news!
Recorded on 01/02/2026
Hosted by Andrey Kurenkov and Jeremie Harris
Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
Read out our text newsletter and comment on the podcast at https://lastweekin.ai/
In this episode:
* Major model launches include Anthropic's Opus 4.6 with a 1M-token context window and “agent teams,” OpenAI's GPT-5.3 Codex and faster Codex Spark via Cerebras, and Google's Gemini 3 Deep Think posting big jumps on ARC-AGI-2 and other STEM benchmarks amid criticism about missing safety documentation.
* Generative media advances feature ByteDance's Seedance 2.0 text-to-video with high realism and broad prompting inputs, new image models Seedream 5.0 and Alibaba's Qwen Image 2.0, plus xAI's Grok Imagine API for text/image-to-video.
* Open and competitive releases expand with Zhipu's GLM-5, DeepSeek's 1M-token context model, Cursor Composer 1.5, and open-weight Qwen3 Coder Next using hybrid attention aimed at efficient local/agentic coding.
* Business updates include ElevenLabs raising $500M at an $11B valuation, Runway raising $315M at a $5.3B valuation, humanoid robotics firm Apptronik raising $935M at a $5.3B valuation, Waymo announcing readiness for high-volume production of its 6th-gen hardware, plus industry drama around Anthropic's Super Bowl ad and departures from xAI.
Timestamps:
(00:00:10) Intro / Banter
(00:02:03) Sponsor Break
(00:05:33) Response to listener comments
Tools & Apps
(00:07:27) Anthropic releases Opus 4.6 with new 'agent teams' | TechCrunch
(00:11:28) OpenAI's new GPT-5.3-Codex is 25% faster and goes way beyond coding now - what's new | ZDNET
(00:25:30) OpenAI launches new macOS app for agentic coding | TechCrunch
(00:26:38) Google Unveils Gemini 3 Deep Think for Science & Engineering | The Tech Buzz
(00:31:26) ByteDance's Seedance 2.0 Might be the Best AI Video Generator Yet - TechEBlog
(00:35:14) China's ByteDance, Alibaba unveil AI image tools to rival Google's popular Nano Banana | South China Morning Post
(00:36:54) DeepSeek boosts AI model with 10-fold token addition as Zhipu AI unveils GLM-5 | South China Morning Post
(00:43:11) Cursor launches Composer 1.5 with upgrades for complex tasks
(00:44:03) xAI launches Grok Imagine API for text and image to video
Applications & Business
(00:45:47) Nvidia-backed AI voice startup ElevenLabs hits $11 billion valuation
(00:52:04) AI video startup Runway raises $315M at $5.3B valuation, eyes more capable world models | TechCrunch
(00:54:02) Humanoid robot startup Apptronik has now raised $935M at a $5B+ valuation | TechCrunch
(00:57:10) Anthropic says 'Claude will remain ad-free,' unlike an unnamed rival | The Verge
(01:00:18) Okay, now exactly half of xAI's founding team has left the company | TechCrunch
(01:04:03) Waymo's next-gen robotaxi is ready for passengers — and also 'high-volume production' | The Verge
Projects & Open Source
(01:04:59) Qwen3-Coder-Next: Pushing Small Hybrid Models on Agentic Coding
(01:08:38) OpenClaw's AI 'skill' extensions are a security nightmare | The Verge
Research & Advancements
(01:10:40) Learning to Reason in 13 Parameters
(01:16:01) Reinforcement World Model Learning for LLM-based Agents
(01:20:00) Opus 4.6 on Vending-Bench – Not Just a Helpful Assistant
Policy & Safety
(01:22:28) METR GPT-5.2
(01:26:59) The Hot Mess of AI: How Does Misalignment Scale with Model Intelligence and Task Complexity?
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Global technology spending is projected to reach $5.6 trillion by 2026, with nearly two-thirds of this investment directed toward software and computer equipment, particularly servers, according to Forrester. Generative AI is cited as a primary driver of this increase, shifting the balance of power toward cloud providers such as AWS and Azure. This escalation has implications for operational margins and the position of IT service providers, as businesses increasingly migrate complex workloads to cloud infrastructure ecosystems.
Supporting data shows a disconnect between tech employment trends and hiring activity. In January 2026, technology companies cut approximately 20,155 jobs, mainly in telecommunications, while job postings for tech positions rose by 13% compared to the prior month, based on CompTIA analysis. Dave Sobel interprets this as a shift away from permanent IT headcount to project-based, AI-focused engagements. This development places pressure on service providers, who must adapt to buyers reallocating spend from traditional staffing models to short-term, outcome-oriented contracts.
Adjacent discussion covered two press releases: VirtuaCare launched a support offering for Windows-based MSPs needing Apple expertise, delivering an externally verifiable, Apple-certified service. In contrast, Miso announced a roadmap for an autonomous AI L1 technician but did not substantiate claims with deliverables or customer data. Dave Sobel emphasized the need for MSPs to demand piloting, outcome metrics, and auditable product maturity, warning against reliance on unproven AI solutions and highlighting the risk of outsourcing as only a temporary solution.
The core implication for MSPs and IT providers is a need for tactical negotiation and operational risk management. Dave Sobel recommends using AI first to reduce internal labor costs before introducing it as a client offering, prioritizing outcome-based pricing and adjusting contracts to retain value from efficiency gains. Providers should avoid becoming displaced labor, rigorously test new technologies before adoption, and remain vigilant regarding vendor claims. The emphasis remains on capturing and defending margins through accountable operations and contract governance rather than chasing speculative innovation.
Three things to know today
00:00 Tech Spending Hits $5.6T but MSPs Face Margin Squeeze Without AI Pricing Reset
05:31 VirtuaCare Ships Apple Support; Mizo Announces Roadmap—One's Testable Today
08:17 MSPs Must Capture AI Efficiency Value or Face Margin Compression
This is the Business of Tech. Supported by: Small Biz Thought Community
Check out Killing IT
Professors Jeremy Bearer-Friend and Sarah Polcz discuss their recent paper, “Sharing the Algorithm: The Tax Solution to Generative AI,” which outlines their proposal for taxing generative AI companies. For more, read Bearer-Friend and Polcz's article.
***
Credits
Host: David D. Stewart
Executive Producers: Jeanne Rauch-Zender, Paige Jones
Producers: Jordan Parrish, Peyton Rhodes
Audio Engineers: Jordan Parrish, Peyton Rhodes
***
The submissions period for the Tax Notes Student Writing Competition is open! For more information or to submit, visit taxnotes.com/students. This episode is sponsored by the University of California Irvine School of Law Graduate Tax Program. For more information, visit law.uci.edu/gradtax. This episode is sponsored by Crux. For more information, visit cruxclimate.com/contact.
Peer mentoring accelerates skill-building, boosts collaboration, and fosters innovation, helping organizations embrace generative AI effectively while creating a culture of learning, confidence, and shared expertise. That's the key take-away message of this episode of the Wise Decision Maker Show, which talks about an approach to learning that makes sure generative AI is not intimidating.
This article forms the basis for this episode: https://disasteravoidanceexperts.com/generative-ai-isnt-intimidating-when-you-learn-it-this-way/
On Cloud Realities, the real insight rarely came from technology alone; it emerged at the intersection of People, Culture, Industry, and Technology. In the remix we bring back familiar voices and topics while going deeper into the wider impacts, influence, and potential of today's tech across society. The 2026 season trailer, arriving a little later than planned, opens with this renewed focus and sets the stage for Episode 1, launching on February 19. Here's a quick trailer to get you ready!
TLDR
00:11 The emergence of insight from Cloud Realities
01:00 Where the magic happens
01:42 The real impact on People, Culture, Industry and Tech
Hosts
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Esmee van de Giessen: https://www.linkedin.com/in/esmeevandegiessen/
Rob Kernahan: https://www.linkedin.com/in/rob-kernahan/
Production
Marcel van der Burg: https://www.linkedin.com/in/marcel-vd-burg/
Dave Chapman: https://www.linkedin.com/in/chapmandr/
Sound
Ben Corbett: https://www.linkedin.com/in/ben-corbett-3b6a11135/
Louis Corbett: https://www.linkedin.com/in/louis-corbett-087250264/
'Realities Remixed' is an original podcast from Capgemini
Phew, what a topic! Yes, AI chats like ChatGPT, Gemini, and co. have fundamentally changed how people search for information. Does that make SEO irrelevant? Far from it! GEO is our name for a new era in which we shape our offerings and information so that searchers can find them as easily as possible. The keyword is "Generative Engine Optimization": over the past two years our agency has gathered plenty of experience and delivered results for our clients with it, and I'd like to share those lessons with you in this video. Please keep in mind: GEO is a sub-discipline. SEO remains the framework, the foundation. What exactly GEO is, how it works, what LLMs, grounding, and much more are, and how you can become more visible in AI answers: you'll learn all of that in this long Generative Engine Optimization Guide (2026). #geo #generativeengineoptimization #seo #chatgptseo
Contents
00:00 Introduction
00:38 What is GEO?
02:55 Why we say GEO
07:23 What is an LLM?
08:10 How does an LLM work?
12:28 Different types of generative engines
14:20 User behavior & market share
19:38 How to get found in AI searches
20:07 Prompts
33:12 Content
48:05 Technical aspects
1:00:38 Entity & EEAT
01:14:40 Mentions & Backlinks
01:26:47 GEO "Black Hat"
01:30:03 Monitoring
01:35:52 Practical example
01:41:32 The Perplexity Leak
01:46:12 Summary
01:49:03 End
In this week's episode of Medicine: The Truth, hosts Jeremy Corr and Dr. Robert Pearl examine a sweeping set of developments shaping American healthcare. From the first state-approved use of generative AI to prescribe medications without human oversight to rising healthcare costs, from worsening vaccine misinformation to the stubborn persistence of preventable disease, this show focuses on the biggest stories in medicine today. The episode opens with a groundbreaking and controversial pilot program in Utah that allows a generative AI system to renew prescriptions for chronic disease without physician involvement. From there, the conversation turns to the relentless rise in healthcare spending. New federal data show Americans now spend more than $15,700 per person annually on medical care, with costs growing twice as fast as the economy. While insurance coverage remains high for now, Pearl warns that expiring subsidies, Medicaid restrictions and rising premiums are already pushing millions out of coverage. For many families, healthcare affordability has become a top issue and, increasingly, a political fault line heading into the midterm election cycle.
Here are more major storylines from MTT episode 103:
Exercise as medicine for depression: A large meta-analysis finds that regular exercise can be as effective as antidepressant medication for many patients.
Trump's healthcare plan fades quickly: Pearl explains why the president's proposal disappeared from the headlines.
Measles returns in force: Cases are nearing 1,000, with outbreaks concentrated in under-vaccinated communities.
Vaccine battles intensify under RFK Jr.: New appointments to federal advisory committees raise alarm among scientists, as anti-vaccine voices gain influence.
Chronic disease remains America's top killer: Cardiovascular disease continues to claim nearly one million lives annually.
Generative AI's biggest promise: Pearl makes the case that AI-driven, at-home monitoring could finally transform chronic disease management.
Cancer trends turn ominous: Colorectal cancer deaths among Americans under 50 are rising sharply, becoming the leading cancer killer in this age group.
Genetics vs. lifestyle revisited: New research suggests genetics may account for half of lifespan variation, but lifestyle still determines how many of those years are lived in good health.
High-deductible health plans: New data show cancer patients with high-deductible insurance have significantly higher mortality.
GLP-1 weight-loss pills arrive: The first oral GLP-1 drug launches to record demand.
A devastating flu season for children: Despite the availability of safe vaccines, pediatric flu deaths reach alarming levels among unvaccinated kids.
As the episode closes, Dr. Pearl delivers a stark warning about the resurgence of pseudoscience in medicine. Tune in for more fact-based coverage and analysis of healthcare's biggest stories.
* * *
Dr. Robert Pearl is the author of the new book “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine” about the impact of AI on the future of medicine. Fixing Healthcare is a co-production of Dr. Robert Pearl and Jeremy Corr. Subscribe to the show via Apple, Spotify, Stitcher or wherever you find podcasts. Join the conversation or suggest a guest by following the show on Twitter and LinkedIn.
The post MTT #103: Can generative AI safely prescribe medicine on its own? appeared first on Fixing Healthcare.
Businesses are increasingly considering the use of generative AI for work that historically relied on human creativity, including in the area of marketing and advertising. But can ads made with gen AI really be more effective than human-created ads? Professor Vilma Todri of Emory University Goizueta Business School joins Kathleen Hu and Jaclyn Phillips to discuss her recent research on the impact of visual generative AI on advertising effectiveness. Listen to this episode to learn more about how gen AI is being used in advertising and the implications for ad effectiveness and AI disclosure policies. With special guest: Vilma Todri, Associate Professor, Goizueta Business School of Emory University Related Links: The Impact of Visual Generative AI on Advertising Effectiveness Hosted by: Kathleen Hu, Cornerstone Research and Jaclyn Phillips, Proskauer Rose
Send us a text
AI is everywhere—and opinions about it are strong, especially within the Christian community. In this episode, Alyssa Avant breaks down what AI actually is (and isn't) and explains the key differences between generative and agentic AI in simple, practical terms. This conversation isn't about convincing you to use AI—it's about gaining understanding so you can make wise, prayerful decisions as a Christian business owner. Rooted in Proverbs 4:7, this episode will help you replace confusion and fear with clarity, discernment, and faithful stewardship.
On this episode, we have David Timm with us to discuss the misuse of Generative AI (GenAI) in reporting a protest. David, a bid protest attorney, walks us through how GenAI is impacting and changing the protest process, what issues arise when companies use GenAI and things to consider when using GenAI for a bid protest. Do not miss this conversation. To connect with David, find him on LinkedIn
Resetting the culture code is essential to unlock Gen AI's value — aligning people, ethics, and collaboration so AI becomes a trusted partner for innovation, not a source of fear or disruption. That's the key take-away message of this episode of the Wise Decision Maker Show, which discusses resetting the culture code for the generative AI era.
This article forms the basis for this episode: https://disasteravoidanceexperts.com/resetting-the-culture-code-for-the-generative-ai-era/
00:00 Opening
01:04 You can't put The Faith on the screen & do it well
04:23 Stop debating if Jesus was an undocumented immigrant or refugee
07:54 Don't use Christian images as a foil for ideological vision
10:42 I care if the vulnerable meet God, which means I have to feed them
18:40 Silencing the monstrous is the only righteous anger
22:33 Response videos & click bait breed arrogance & division
27:43 Our utopian vision tends to not include ourselves changing
31:38 It is too easy to convince ourselves our sin does not matter
34:08 Your anger is probably self righteous, which means it is sinful
38:27 The lesson of Ebenezer Scrooge is "fear not!"
40:44 God wants to look at us & have us look at Him the way husband & wife look at each other
42:25 It is hard to be angry at other people if we know we're an idiot
46:26 The utopian vision eliminates the possibility of forgiveness
50:49 Dungeons & Dragons plane of Nirvana teaches us how scary utopia would actually be if we tried to live in it
53:34 Closing
~~~
It Will All Fail - S7E19
~~~
We are nearing the end of season seven & Fr Symeon didn't want to miss the opportunity to talk about how the two dominant ideologies in our culture both rely on what we could call an unconstrained vision of humanity - the dream of The Enlightenment still alive.
In this third part of three, we delve into a bit more pop culture references than is typical for us as we continue to try to drive home the idea that the pursuit of utopia brings dystopia & the problem is human delusion. We must focus on dispelling delusion, not problem solving & system building.
~~~
Scripture citations for this episode:
Matthew 21:12-17, Mark 11:15-19, Luke 19:45-48, John 2:13-17 - Cleansing of the temple
Mark 5:1-20 - Gerasene Demoniac
The Christian Saints Podcast is a joint production of Generative sounds & Paradosis Pavilion. Our hosts are Father Symeon Kees of Iowa City & James John Marks of Chicago.
Paradosis Pavilion - https://youtube.com/@paradosispavilion9555
https://www.instagram.com/christiansaintspodcast
https://x.com/podcast_saints
https://www.facebook.com/christiansaintspodcast
https://www.threads.net/@christiansaintspodcast
https://bsky.app/profile/xtiansaintspodcast.bsky.social
Iconographic images used by kind permission of Nicholas Papas, who controls distribution rights of these images
Prints of all of Nick's work can be found at Saint Demetrius Press - http://www.saintdemetriuspress.com
All music in these episodes is a production of Generative Sounds
https://generativesoundsjjm.bandcamp.com
Distribution rights of this episode & all music contained in it are controlled by Generative Sounds
Copyright 2021 - 2026
Generative Artificial Intelligence is rapidly emerging as one of the most debated forces in education today. Tools such as ChatGPT and Claude are widely predicted to reshape how students learn and how teachers teach. Advocates argue that GenAI could democratise access to high quality education, offering personalised learning at scale while reducing administrative burdens for educators. Critics warn of significant risks, from undermining student learning to eroding teacher autonomy. In this episode of Top Class, we explore the latest emerging evidence shared in the OECD's Digital Education Outlook 2026. Senior OECD Analyst Stéphan Vincent Lancrin speaks to OECD Editor Duncan Crawford about the latest research, the potential and the risks of GenAI, and what this means for the future of teaching and learning.
The China Internet Network Information Center noted the broader adoption of generative AI technology across consumer and industrial sectors, with user numbers exceeding 600 million.
In this episode, Wesley Hartman, co‑author of the Journal of Accountancy's Technology Q&A column, discusses how AI is reshaping work for accounting firms. He explains the difference between generative and agentic AI and why both matter for firm workflows. Hartman also outlines the most pressing AI risks for CPAs, including hallucinations and emerging deepfake‑driven scams, which he wrote about in the February Tech Q&A. He closes the conversation with practical guidance for adopting AI tools methodically while avoiding common pitfalls. Also, here are a few Technology Q&A columns related to the discussion: "How CPAs Can Combat the Rising Threat of Deepfake Fraud," May 1, 2025 "AI-Powered Hacking in Accounting: 'No One Is Safe'," Oct. 1, 2025 "Creating an AI Agent in ChatGPT," Nov. 1, 2025 What you'll learn from this episode: The ways Hartman uses AI in his own work. The difference between agentic and generative AI. Why "confidently wrong" AI responses can present risks for firms. How inaction or "wait‑and‑see" thinking can create its own form of AI risk.
- The searches are part of an investigation that has been ongoing for nearly a year over the functioning of X's algorithms that are “likely to have distorted the operation of an automated data processing system,” investigators said at the time.
- On February 24, or possibly earlier, Mozilla will roll out Firefox 148, which will include an AI controls section in the desktop browser settings. From here, you'll be able to block current and future generative AI features, or only enable select tools.
- Developer Lyra Rebane created Xikipedia, a social media-style feed of Wikipedia entries. The web app algorithmically displays info from Simple Wikipedia.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
Starting with Firefox 148 arriving later this month, users will find a new AI controls section within the desktop browser settings. Also, Ring's Search Party feature for finding lost dogs is now available across the U.S. — even if you don't own a Ring camera. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Voices of Search // A Search Engine Optimization (SEO) & Content Marketing Podcast
Third-party sites capture traffic by explaining what brands actually do. John Vantine, Director of SEO at GoodRx, has built cross-functional generative search frameworks over seven years that power discovery across Google and ChatGPT. He reveals how About Us and FAQ pages become critical ranking assets when they proactively address common brand misconceptions in plain language. Vantine demonstrates how predictive search volume around questions like "how does [brand] make money" signals untapped content opportunities that competitors exploit when brands fail to clearly explain their value proposition.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode, Ricardo explains the difference between Generative AI, AI Agents, and Agentic AI—topics that are widely discussed but often misunderstood. He draws on a clear explanation by Filipa Peleja, presented during the O'Reilly Super Stream on Generative AI. Generative AI, based on large language models, responds to prompts and produces text, ideas, and analysis, but it has no initiative, goals, or independent decision-making. AI Agents, on the other hand, are given a goal and can plan tasks, use tools, interact with systems, and execute actions in sequence, with operational autonomy within defined rules. Finally, Agentic AI involves systems of agents working together, with memory, adaptability, and evolving strategies, raising major challenges around governance, ethics, and accountability. Catch the full episode to learn more!
When Oscar-winning filmmaker Guillermo del Toro was a kid growing up in Guadalajara, Mexico, he would draw monsters all day. His deeply Catholic grandmother even had him exorcised because of it. But when del Toro saw the 1931 film ‘Frankenstein,' his life changed. "I realized I understood my faith or my dogmas better through Frankenstein than through Sunday mass." His adaptation of Mary Shelley's classic book is nominated for nine Academy Awards, including Best Picture. Del Toro spoke with Terry Gross about getting over his fear of death, the design of Frankenstein's creature, and his opinion on generative AI.
Also, John Powers reviews the noirish drama ‘Islands.'
Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy
In 2025, what were university leaders looking to learn about with Gen AI? On today's episode, I'll mention the top 5 requests for my webinars and workshops on Gen AI.
The State of Generative AI in the Enterprise 2025
In this episode of The Metrics Brothers, Ray Rike and Dave Kellogg break down the 2025 State of Generative AI in the Enterprise report from Menlo Ventures and explain what the data really says about where enterprise AI adoption is accelerating and where the market is consolidating.
The headline takeaway: AI software is scaling faster than any software category in history. Enterprise AI spend has exploded from roughly $1.7B in 2023 to nearly $37B in 2025, reaching scale in just three years. This revenue milestone took SaaS more than 15 years to achieve. Foundational models now represent the single largest area of spend, highlighting how infrastructure and model access remain core to enterprise AI strategies.
Ray and Dave also explore a major strategic shift inside the enterprise: buy is decisively beating build. In 2025, 76% of enterprise AI solutions are purchased rather than built internally, up sharply from 53% the year prior. Rapid model evolution, ongoing retraining costs, and model drift are making internal AI development far more expensive to maintain than many teams originally expected.
One of the most surprising findings is on go-to-market efficiency. AI software pilots convert to production at nearly twice the rate of traditional software, with roughly 47% of AI pilots reaching production versus about 25% for conventional enterprise software. This runs counter to recent narratives suggesting enterprise AI pilots are stalling and points to clearer ROI and faster time-to-value.
The episode also dives into what Menlo calls the first true “AI killer app”: AI-assisted coding. Coding tools now account for more than half of departmental AI spend, with over 50% of developers already using AI coding assistants and adoption exceeding 65% among top-quartile teams. Real-world examples show meaningful productivity gains, including double-digit increases in development velocity and significant time savings during legacy system upgrades.
Industry-wise, healthcare emerges as the largest buyer of vertical AI, representing 43% of vertical AI spend. This is notable given healthcare's historically lower IT spend as a percentage of revenue. Much of the value is coming from administrative automation such as medical scribing, where AI directly reduces non-clinical workload and unlocks meaningful productivity gains for care providers.
Finally, Ray and Dave examine the shifting competitive landscape among foundation model providers. Anthropic has surged to roughly 40% share of enterprise AI usage, up dramatically from prior years, while OpenAI's share has declined as Google continues to gain traction. The discussion centers on focus versus breadth and why enterprise positioning and reliability may matter more than consumer mindshare.
Key takeaways from the episode:
AI software is the fastest-scaling software category ever
Enterprises are rapidly moving from build to buy
AI pilots convert to production at nearly 2x traditional software
AI coding is emerging as the first true enterprise AI killer app
Anthropic's enterprise focus is translating into meaningful market share gains
If you care about how AI adoption actually translates into spend, productivity, and competitive advantage inside large organizations, this episode is a must-listen.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Learning to teach better with Dr. Drew Nucci, researcher at WestEd. He shares about his experiences with teaching mathematics, and his research findings from interviewing mathematics & science teachers about how they are thinking about and using generative AI in their practice. Show notes and links: Drew Nucci on LinkedIn Math Ed Podcast episode 2510: Drew Nucci - artificial intelligence and math education The AmplifyGAIN Center Colleague AI Playlab Study: Emerging Patterns of Gen AI Use in K-12 Science and Mathematics Education WestEd: Advancing AI in Education 2026 Annual AMTE Conference sessions with Drew: 068. Building Partnerships to Advance Mathematics Education Research, Policy, and Practice - Catherine Paolucci & Drew Nucci (Feb 6 @ 8:15 am) 214. Transforming Math Instruction with Generative AI: Implications for Math Teachers' Professional Learning - Drew Nucci & Sarah Nielsen (Feb 7 @ 11:30 am) Special Guest: Drew Nucci.
Discover how to leverage the efficiency of generative AI without creating "AI slop" that alienates your audience.
In this episode of Content Amplified, Ben Ard sits down with Andy Brooks to discuss the delicate balance between utilizing new technology and maintaining the human connection. While AI offers incredible speed, it often acts as an "affirmation loop" that validates mediocre ideas rather than challenging them. Andy explains why marketers must treat AI as a partner rather than a replacement to ensure their brand narrative remains genuine.
Topics discussed in this episode:
Why AI should be viewed as a "not terribly bright" but fast marketing coordinator.
How to overcome the learning curve of effective prompting vs. just getting an output.
Why Gen Z audiences are reacting negatively to AI-generated imagery.
The importance of keeping "real" elements (stories, products, and people) untouched by AI.
Whether "No AI Used" will become a permanent badge of honor for brands.
About the Guest:
Andy Brooks is the Director of Marketing and Communications. With a diverse background in radio, television, and software development, Andy has devoted his recent career to mastering the ins and outs of marketing technology. He is the author of two books and teaches courses on creating with Generative AI on Coursera, focusing on helping people increase efficiency without losing authenticity.
Connect with Andy on LinkedIn: https://www.linkedin.com/in/aceebro/
View Andy's course: https://www.coursera.org/instructor/andrew-brooks
Text us what you think about this episode!
A lecture given at L'Abri Fellowship in Southborough, Massachusetts. For more information, visit https://southboroughlabri.org/ by Ben Keyes Artificial intelligence is in the news every day in part because it is such a controversial topic. As usual, the loudest voices are at the extremes: "AI is going to usher humanity into a beautiful new era of human history!" or "AI is going to kill us all!" In this lecture, we will limit our reflections to the role that generative AI is playing in the creative arts. Is the use of AI by artists a good thing, something that will aid and enhance human creativity? Or is it in danger of replacing one of the building blocks of our humanness: our creativity? The Copyright for all material on the podcast is held by L'Abri Fellowship. We ask that you respect this by not publishing the material in full or in part in any format or post it on a website without seeking prior permission from L'Abri Fellowship. ©Southborough L'Abri 2026
How do you Ai? I asked Christopher Mims about his new book, "How To Ai." He discusses his own job's risk level, how Ai removes what he calls "toil" and how Ai may very well make you and me just a bit more human. As the host of Bold Names and columnist of The Wall Street Journal's "Keywords," he details practical Ai use cases. For tech folks, you'll learn practical stories; for the uninitiated, you'll get caught up.
Buy the book: https://www.amazon.com/How-AI-Through-Basics-Transform/dp/B0F6MKZ1DH
Find Justin: justinbradyshow.com
Chapters
01:06 - Ai removes toil, not jobs
07:00 - Job disruption or new opportunities
09:23 - Clorox uses Ai for previously impossible tasks
10:56 - Generative vs non-generative AI
12:49 - Is the construction industry at Ai risk?
14:57 - The urgency of adopting Ai
17:17 - Ai and law. A win for lawyers and consumers.
24:40 - Ai in Hollywood. Will it kill creativity?
28:40 - Tension of job loss and productivity gains
30:40 - Ai makes us more human
31:22 - Journalism in an AI World
What is a spillover crisis and how can AI contribute to it? Dan Laufer, professor and head of the School of Communication Studies at the Auckland University of Technology, explains. Dr Daniel Laufer, PhD, MBA (The University of Texas at Austin, USA), is a Professor and Head of the School of Communication Studies at the […]
AI is hitting entertainment like a sledgehammer ... from algorithmic gatekeepers and AI-written scripts to digital actors and entire movies generated from a prompt.
In this episode of TechFirst, host John Koetsier sits down with Larry Namer, founder of E! Entertainment Television and chairman of the World Film Institute, to unpack what AI really means for Hollywood, creators, and the global media economy.
Larry explains why AI is best understood as a productivity amplifier rather than a creativity killer, collapsing months of work into hours while freeing creators to focus on what only humans can do. He shares how AI is lowering barriers to entry, enabling underserved niches, and accelerating new formats like vertical drama, interactive storytelling, and global-first content.
The conversation also dives into:
• Why AI-generated actors still lack true human empathy
• How studios and IP owners will be forced to license their content to AI companies
• The future of deepfakes, guardrails, and regulation
• Why market fragmentation isn't a threat — it's an opportunity
• How China, Korea, and global platforms are shaping what comes next
• Why writers and storytellers may be entering their best era yet
Larry brings decades of perspective from every major media transition — cable, streaming, global expansion — and makes the case that AI is just the next tool in a long line of transformative technologies.
If you care about the future of movies, television, creators, and culture, this is a conversation you don't want to miss.
In this episode, host Etienne Nichols sits down with Ashkon Rasooli, founder of Ingenious Solutions and a specialist in Software as a Medical Device (SaMD). The conversation previews their upcoming session at MD&M West, focusing on the critical intersection of generative AI (GenAI) and quality assurance. While many AI applications exist in MedTech, GenAI presents unique challenges because it creates new data—text, code, or images—rather than simply classifying existing information.
Ashkon breaks down the specific failure modes unique to generative models, most notably "hallucinations." He explains how these outputs can appear legitimate while being factually incorrect, and explores the cascading levels of risk this poses. The discussion moves from simple credibility issues to severe safety concerns when AI-generated data is used in critical clinical decision-making without proper guardrails.
The episode concludes with a forward-looking perspective on how validation is shifting. Ashkon argues that because GenAI behavior is statistical rather than deterministic, traditional pre-market validation is no longer sufficient. Instead, a robust quality framework must include continuous post-market surveillance and real-time independent monitoring to ensure device safety and effectiveness over time.
Key Timestamps
01:45 - Introduction to MD&M West and the "AI Guy for SaMD," Ashkon Rasooli.
04:12 - Defining Generative AI: How it differs from traditional machine learning and image recognition.
06:30 - Hallucinations: Exploring failure modes where AI creates plausible but false data.
08:50 - The Autonomy Scale: Applying standard 34971 to determine the level of human supervision required.
12:15 - Regulatory Gaps: Why no generative AI medical devices have been cleared by the FDA yet.
15:40 - Safety by Design: Using "independent verification agents" to monitor AI outputs in real-time.
19:00 - The Shift to Post-Market Validation: Why 90% validation at launch requires 10% continuous monitoring.
22:15 - Comparing AI to Laboratory Developed Tests (LDTs) and the role of the expert user.
Quotes
"Hallucinations are just a very familiar form of failure modes... where the product creates sample data that doesn't actually align with reality." - Ashkon Rasooli
"Your validation plan isn't just going to be a number of activities you do that gate release to market; it is actually going to be those plus a number of activities you do after market release." - Ashkon Rasooli
Takeaways
Right-Size Autonomy: Match the AI's level of independence to the risk of the application. High-risk diagnostic tools should have lower autonomy (Level 1-2), while administrative tools can operate more freely.
Implement Redundancy: Use a "two is one" approach by employing an independent AI verification agent to check the primary model's output against safety guidelines before it reaches the user.
Artificial intelligence (AI) is forcing legal systems worldwide to confront fundamental questions about creativity, ownership, and identity. Can companies train algorithms on copyrighted works without permission? What happens when technology makes it easy to clone someone's voice or face? In this episode of Brand & New, host Willard Knox speaks with two attorneys at the forefront of these rapidly evolving issues. Lynn Oberlander is Co-Editor of the Practising Law Institute's (PLI) comprehensive new treatise, Artificial Intelligence & Intellectual Property, which brings together leading practitioners to address the most pressing legal challenges in AI. Catie Seibel Sinitsa is the co-author of the chapter covering copyright and AI. Ms. Oberlander has spent almost 25 years counseling media and entertainment companies on intellectual property (IP) and First Amendment issues. Ms. Sinitsa specializes in copyright and trademark law, working with clients across fashion and media, among other industries. Together, they unpack how this evolving technology is reshaping long-standing IP principles and why these questions are no longer theoretical but urgent, real-world concerns.
This episode of Brand & New is sponsored by PLI. For more than 90 years, the Institute has helped legal professionals stay at the forefront of knowledge and expertise through world-class continuing legal education.
Related Resources
About Lynn Oberlander
About Catie Seibel Sinitsa
About the Practising Law Institute
Access Artificial Intelligence & Intellectual Property
AI-Related Sessions at INTA's 2026 Annual Meeting
Related Brand & New Episodes:
Certifying Human Music in the Age of AI
The AI Gender Gap
Join us as Gautam breaks down the evolution of tool use in generative AI and dives deep into MCP. Gautam walks through the progression from simple prompt engineering to function calling, structured outputs, and now MCP—explaining why MCP matters and how it's changing the way AI systems interact with external tools and data. You'll learn about the differences between MCP and traditional API integrations, how to build your first MCP server, best practices for implementation, and where the ecosystem is heading. Whether you're building AI-powered applications, integrating AI into your infrastructure workflows, or just trying to keep up with the latest developments, this episode provides the practical knowledge you need. Gautam also shares real-world examples and discusses the competitive landscape between various AI workflow approaches. Subscribe to vBrownBag for weekly tech education covering AI, cloud, DevOps, and more! ⸻ Timestamps 0:00 Introduction & Welcome 7:28 Gautam's Background & Journey to AI Product Management 12:45 The Evolution of Tool Use in AI 18:32 What is Model Context Protocol (MCP)? 24:16 MCP vs Traditional API Integrations 30:41 Building Your First MCP Server 36:52 MCP Server Discovery & Architecture 42:18 Real-World Use Cases & Examples 47:35 Best Practices & Implementation Tips 51:12 The Competitive Landscape: Skills, Extensions, & More 52:14 Q&A: AI Agents & Infrastructure Predictions 55:09 Closing & Giveaway How to find Gautam: https://gautambaghel.com/ https://www.linkedin.com/in/gautambaghel/ Links from the show: https://www.hashicorp.com/en/blog/build-secure-ai-driven-workflows-with-new-terraform-and-vault-mcp-servers Presentation from HashiConf: https://youtu.be/eamE18_WrW0?si=9AJ9HUBOy7-HlQOK Kiro Powers: https://www.hashicorp.com/en/blog/hashicorp-is-a-kiro-powers-launch-partner Slides: https://docs.google.com/presentation/d/11dZZUO2w7ObjwYtf1At4WnL-ZPW1QyaWnNjzSQKQEe0/edit?usp=sharing
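The show notes above promise a walk-through of building your first MCP server. As a rough companion sketch only (not taken from the episode), here is a minimal MCP server using the official Python SDK's FastMCP helper; the server name and the two example tools are illustrative assumptions.

```python
# Minimal MCP server sketch (illustrative, not from the episode).
# Requires the official Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# "demo-tools" is an arbitrary placeholder name for this server.
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

@mcp.tool()
def word_count(text: str) -> int:
    """Count whitespace-separated words in the given text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves over stdio by default, so an MCP-capable client can launch
    # this script as a subprocess and call the tools declared above.
    mcp.run()
```

An MCP-aware client configured to run this script would then discover `add` and `word_count` as callable tools, which is the pattern the episode contrasts with traditional one-off API integrations.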
00:00 Introduction
02:20 Opening
06:17 We are all nuts
10:36 The sins others can't see are more dangerous
13:30 Living enslaved to Sin is standing on a cliff
17:39 We Can't Build Utopia
18:45 What we do has far more consequence than we realize
23:00 The Dream of The Enlightenment breeds cattle & monsters
25:27 Pursuit of Eutopia breeds Dystopia
28:45 A rabbit trail about our favorite fiction genres
32:47 Self delusion causes us to make the means The End
34:10 The Gerasene Demoniac & the pigs
36:07 Why can't people see the path of The Kingdom
43:55 Confession shatters self delusion
47:45 The divine revelation to The Apostles was unique
49:30 Humans tend to want kings, one way or another
50:50 Another rabbit trail about movies about The Bible
54:00 Closing
Human Persons Are Not Infinite - S7E18
We are nearing the end of season seven & Fr Symeon didn't want to miss the opportunity to talk about how the two dominant ideologies in our culture both rely on what we could call an unconstrained vision of humanity - the dream of The Enlightenment still alive.
In this second part of three, we delve into a bit more pop culture references than is typical for us, we both admit to a love of dystopian fiction & we also speak of the core mission of The Church to usher all of Creation into Paradise, not to amass warriors to create Utopia through Power.
Scripture citations for this episode:
Matthew 21:12-17, Mark 11:15-19, Luke 19:45-48, John 2:13-17 - Cleansing of the temple
Mark 5:1-20 - Gerasene Demoniac
The Christian Saints Podcast is a joint production of Generative sounds & Paradosis Pavilion with oversight from Fr Symeon Kees
Paradosis Pavilion - https://youtube.com/@paradosispavilion9555
https://www.instagram.com/christiansaintspodcast
https://twitter.com/podcast_saints
https://www.facebook.com/christiansaintspodcast
https://www.threads.net/@christiansaintspodcast
Iconographic images used by kind permission of Nicholas Papas, who controls distribution rights of these images
Prints of all of Nick's work can be found at Saint Demetrius Press - http://www.saintdemetriuspress.com
All music in these episodes is a production of Generative Sounds
https://generativesoundsjjm.bandcamp.com
Distribution rights of this episode & all music contained in it are controlled by Generative Sounds
Copyright 2021 - 2023
Artificial Intelligence has become much more than a buzzword. It's transforming industries as it rapidly evolves, and the big question for CRNAs is what does this mean for anesthesia providers? Sharon and guest co-host Larry Sears, CRNA sit down with CRNA educator and technology thought leader Richard Wilson, DNPA, CRNA, FAANA to explore how AI is quietly reshaping perioperative care, education, and decision-making in the operating room. Here's some of what you'll hear in this episode:
If you've been reading this newsletter for a while, you'll have noticed I tend to focus on the big-picture stuff: organizational change, building design culture, getting stakeholder buy-in. This week I'm doing something different and getting into the weeds on generative imagery, a tool that's become part of my daily workflow. I'm genuinely curious whether you prefer the strategic content, the practical how-to pieces, or a mix of both. Hit reply and let me know.
Generative imagery is quickly becoming an essential tool in the modern designer's toolkit. Whether you're a UI designer crafting interfaces, a UX designer building prototypes, or a marketer creating campaign visuals, the ability to generate exactly the image you need (rather than settling for whatever stock libraries happen to have) is genuinely useful.
The Ethical Dimension
There's an ethical dimension here that makes me uncomfortable. Using generative imagery does, in theory, take work away from illustrators and photographers. I don't love that. But I also recognize that this is a pattern we've seen throughout history. Technology has consistently made certain professions more niche rather than making them disappear entirely. Blacksmiths still exist. Vinyl records still sell. And I suspect custom photography and illustration will follow the same path, becoming more specialized rather than vanishing completely.
Besides, if we're being realistic, most of us weren't commissioning custom photography for every project anyway. We were pulling images from stock libraries, and I can't say I'll miss spending 45 minutes searching for a photo that almost works but has the person looking in the wrong direction.
So with that acknowledged, let's get into the practical side of things.
When to Avoid Generative Imagery
Before diving into how to use these tools well, it's worth noting when you shouldn't use them at all. Generative imagery has no place when you need to represent real people or real events. If you're showing your actual team, documenting a real conference, or depicting genuine customer stories, you need real photography. Anything else would be misleading, and your audience will likely spot it anyway.
Why It Beats Stock Libraries
For everything else, though, generative imagery offers some serious advantages over traditional stock. You can get exactly the pose you want, in exactly the style you need, matching your specific color palette. No more "this photo would be perfect if only the person was looking left instead of right" compromises.
This matters more than you might think. Research suggests that users form initial impressions of a website in roughly 50 milliseconds. That's not enough time to read anything. Those snap judgments are based almost entirely on imagery, layout, color, and typography. The right image doesn't just look nice; it shapes how users feel about your entire site before they've processed a single word.
Imagery also gives you a powerful tool for directing attention. A well-chosen image can guide users toward your key content or call to action in ways that feel natural rather than pushy.
The right image composition can draw attention to critical calls to action.
Copyright and Commercial Use
Before you start generating images for client work, you need to understand the legal landscape. And yes, it's a bit murky.
The short version: most major AI image generators allow commercial use of the images you create, but the terms vary. Midjourney allows commercial use for paid subscribers. Adobe Firefly positions itself as "commercially safe" because it was trained on licensed content and Adobe Stock images. Google's Nano Banana Pro (accessible through Gemini) also permits commercial use.
The murkier issue is around training data. Several ongoing lawsuits are challenging whether AI companies had the right to train their models on copyrighted images in the first place. These cases haven't been resolved yet, and depending on how they play out, the landscape could shift.
For now, my practical advice is this: use reputable tools with clear commercial terms, avoid generating images that deliberately mimic a specific artist's recognizable style, and keep an eye on how the legal situation develops. For most standard commercial work (website imagery, marketing materials, UI mockups), you should be fine.
Choosing the Right Tool: Style vs. Instructions
When selecting which AI model to use, you're essentially balancing two considerations: stylistic output and instructional accuracy.
Stylistic Output
Every model has its own aesthetic fingerprint. No matter how specific your prompts are, Midjourney images have a certain look, and Nano Banana images have a different one. You need to find a model whose default aesthetic works for your project.
Instructional Accuracy
The other consideration is how well the model follows detailed instructions. If you need a specific composition (person on the left, looking right, holding a coffee cup, with a window behind them), some models handle that brilliantly while others will give you something that vaguely resembles your request but took creative liberties you didn't ask for.
Use Multiple Models
The frustrating reality is that you rarely get both. The models with the most pleasing aesthetics tend to be worse at following precise instructions, and vice versa.
This is why I often move between multiple models in a single workflow. I'll generate the initial image in Midjourney to get an aesthetic I like, then bring that image into Nano Banana Pro as a reference and use its stronger instruction-following capabilities to refine specific details. It's an extra step, but it gets you the best of both worlds.
Tool Recommendations
There are plenty of tools out there, but here are three I'd recommend depending on your needs and experience level.
Midjourney
Midjourney produces what I consider the most aesthetically pleasing results, particularly for images of people and anything photographic. It's what I use on my own website. The downside is that Midjourney is terrible at following detailed instructions. Ask for something specific and you'll get something beautiful that bears only a passing resemblance to what you requested. It's also only available through its own website, so you can't access it through multi-model platforms.
Nano Banana Pro
Nano Banana Pro (Google's model, accessible through Gemini) is the opposite of Midjourney. It's remarkably good at following detailed prompts. You can specify gaze direction, facial expressions, items held, and positioning, and it will actually deliver something close to what you asked for. It can also produce transparent PNGs, which is genuinely useful for UI work where you need to overlay images on colored backgrounds. The aesthetic isn't quite as refined as Midjourney, but for many projects that trade-off is worth it.
Krea
Krea is where I'd recommend starting if you're new to all this. It gives you access to multiple models, letting you experiment and find which one works best for your particular needs. You can try different approaches without committing to a single tool's subscription. Unfortunately, Krea doesn't include Midjourney (since Midjourney doesn't make its model available to third parties), but it's still a great way to explore the landscape.
Krea is great for beginners, allowing you to experiment with different models to find which works best for you.
Prompting Strategies
How you write your prompts depends largely on which model you're using.
For instruction-following models like Nano Banana Pro, you can be quite detailed. Describe the composition, the subject's position, their expression, what they're holding, the lighting, the background. The model will make a genuine attempt to deliver all of it. You won't get perfection every time, but you'll get something workable more often than not.
For aesthetic-focused models like Midjourney, simpler prompts often work better. Focus on the overall mood, style, and subject matter rather than precise positioning. Fighting against the model's creative tendencies usually produces worse results than working with them.
Reference Imagery for Consistency
One of the most useful techniques, particularly with models that struggle to follow detailed instructions, is using reference imagery.
Most tools allow you to upload an "image prompt," which is an existing image that contains elements you want. The model will attempt to recreate those elements in whatever style you've specified, incorporating any changes you've requested. It's a way of showing the model what you want rather than trying to describe it in words.
Even more valuable is the style reference feature. If you need to produce multiple images that all share a consistent visual identity (which you almost certainly do for any real project), create one image that nails the style you're after. Then use that image as a style reference for every subsequent generation. This keeps your visuals cohesive rather than having each image feel like it came from a different designer.
I use a style reference image to keep my website illustrations consistent.
Getting Started
If you haven't experimented with generative imagery yet, now is a good time to start. Sign up for Krea, generate a few images for a project you're working on, and compare them to what you would have found in a stock library. You'll probably find that some results are worse, some are surprisingly good, and you'll start developing an intuition for what these tools can and can't do.
That intuition is valuable. Generative imagery isn't going away, and the designers who learn to use it well will have a genuine advantage over those who don't. Not because AI replaces skill, but because it gives skilled designers another tool to work with.
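If you eventually want to script this kind of generation rather than work through a web UI, here is a rough sketch of the programmatic route using Google's google-genai Python SDK. The model identifier, its correspondence to the Nano Banana family, and the prompt are assumptions to verify against current documentation; treat this as a starting point, not a definitive recipe.

```python
# Rough sketch: generating an image programmatically via the google-genai SDK.
# pip install google-genai pillow
# Assumes an API key is set in the environment (GEMINI_API_KEY); the model
# name below is an assumption -- check current docs for what your account offers.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()

# Detailed, composition-style prompt of the kind discussed above (example only).
prompt = (
    "Photorealistic designer at a desk on the left third of the frame, "
    "looking right, holding a coffee cup, soft window light behind them, "
    "muted teal and sand colour palette."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed image-capable model id
    contents=prompt,
)

# Image output comes back as inline binary data alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("hero.png")
```

The same loop pattern works when you pass a reference image in `contents` alongside the text, which is the scripted equivalent of the image-prompt workflow described earlier.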
People around the world are using AI more than Americans, a new poll finds. About 40% of adults in the U.S. told pollsters that they used generative AI in the last year. In Nigeria, the United Arab Emirates, and India, that number was about 85%. What's driving the divide? But first: a preview of markets before President Donald Trump's speech at Davos, and a look at the struggle between the Trump administration and the Fed.
Synopsis: Together, Laura and Donna consider expansive questions: how do we understand ourselves in an age of artificial intelligence? And how do we resist the pull of authoritarian “mono-thought” — the demand for certainty, sameness, and simple answers in a complex world? This show is made possible by you! To become a sustaining member go to LauraFlanders.org/donateDescription: “Thinking requires action and passion,” says feminist philosopher and scholar, Donna Haraway in this unique conversation. In her 1985 essay “A Cyborg Manifesto” and 2003 work, “The Companion Species Manifesto”, Haraway challenged patriarchal, capitalist, binary, species-ist ways of looking at the world. It's no surprise that people are looking to her work again now. Generative thinking, she tells Laura, requires “taking the risk to try a new pattern; to invent something that may very well fall apart in your collective hands but leaves threads to be picked up again.” In this episode, Haraway and Flanders sit down for an expansive conversation about what it means to be human in an age of AI and resisting what she calls authoritarian “mono-thought.” Plus, a commentary from Laura on staying in the present and “staying with the trouble.”“An individual is embedded deeply in worlds with other people, with other organisms, with living and non-living parts of the world. To be a self is to come to a thicker appreciation and accountability for the way we're embedded in the world and act in the world. That's what I mean by being a proper self.” - Donna HarawayGuest: Donna Haraway, Distinguished Professor Emerita, University of California Santa Cruz, History of Consciousness Department; Author, A Cyborg Manifesto, When Species Meet, Staying with the Trouble: Making Kin in the ChthuluceneWatch the episode released on YouTube; PBS World Channel 11:30am ET Sundays and on over 300 public stations across the country (check your listings, or search here via zipcode). Listen: Episode cut airing on community radio (check here to see if your station airs the show) & available as a podcast January 21st, 2026.Full Episode Notes are located HERE.Music Credit: Opolopo's 'No More Lies remix' of “We Rise” by Groove Junkies, Opolopo and Solara courtesy of More House Music; 'Steppin' by Podington Bear, and original sound design by Jeannie Hopper'Additional Credits: Audio Clip- Donna Haraway lecturing at the Next Nature Museum for Friday Next, organized in collaboration with the Premium Erasmium Foundation, and recorded by Emily Cohen IbañezSupport Laura Flanders and Friends by becoming a member at https://www.patreon.com/c/lauraflandersandfriendsRESOURCES:*Recommended book:“The Companion Species Manifesto: Dogs, People, and Significant Otherness” by Donna Haraway: *Get the book(*Bookshop is an online bookstore with a mission to financially support local, independent bookstores. 
The LF Show is an affiliate of bookshop.org and will receive a small commission if you click through and make a purchase.)

Related Laura Flanders Show Episodes:
• Pride Pioneers Holly Hughes & Esther Newton: How Queer Kinship Ties Help Us Survive: Watch / Listen: Episode Cut
• Survival Guide for Humans Learned from Marine Mammals with Alexis Pauline Gumbs: Watch / Listen: Episode Cut and Full Uncut Conversation
• “Powerlands”: Indigenous Youth Fight Big Oil & Gas Worldwide: Watch / Listen: Episode Cut and Full Uncut Conversation

Related Articles and Resources:
• Donna Haraway: Story Telling for Earthly Survival by Fabrizio Terranova - Watch
• Making Oddkin: Story Telling for Earthly Survival lecture at Yale - Watch
• You Are Cyborg by Hari Kunzru, February 1, 1997, WIRED
• Donna Haraway, Erasmus laureate 2025 at the Next Nature Museum, November 21, 2025, by Next Nature
• Rethinking Humanity with Donna Haraway: A Cyborg Manifesto for the AI Age, August 18, 2025, Philosopheasy

Laura Flanders and Friends Crew: Laura Flanders-Executive Producer, Writer; Sabrina Artel-Supervising Producer; Jeremiah Cothren-Senior Producer; Veronica Delgado-Video Editor; Janet Hernandez-Communications Director; Jeannie Hopper-Audio Director, Podcast & Radio Producer, Audio Editor, Sound Design, Narrator; Sarah Miller-Development Director; Nat Needham-Editor, Graphic Design Emeritus; David Neuman-Senior Video Editor; and Rory O'Connor-Senior Consulting Producer.

FOLLOW Laura Flanders and Friends
Instagram: https://www.instagram.com/lauraflandersandfriends/
Bluesky: https://bsky.app/profile/lfandfriends.bsky.social
Facebook: https://www.facebook.com/LauraFlandersAndFriends/
TikTok: https://www.tiktok.com/@lauraflandersandfriends
YouTube: https://www.youtube.com/channel/UCFLRxVeYcB1H7DbuYZQG-lg
LinkedIn: https://www.linkedin.com/company/lauraflandersandfriends
Patreon: https://www.patreon.com/lauraflandersandfriends

ACCESSIBILITY - The broadcast edition of this episode is available with closed captions by clicking here for our YouTube Channel
As generative AI reshapes search, newswires are no longer simply distribution tools; they are authority signals. In this episode, Sarah Larson joins Jennifer Simpson Carr to discuss how trusted, high domain-authority sources influence generative engine results, why consistent presence matters more than clicks, and how law firms can future-proof visibility by feeding machines the right information.
In today's episode, we'll review deep research results that include 3 teacher strategies and 7 student-facing practices for effectively using generative AI in K-12 Education. Visit AVID Open Access to learn more.
Synopsis: A leading voice in feminist philosophy, Donna Haraway joins Laura for an incisive discussion on challenging patriarchal norms and cultivating a more inclusive understanding of humanity, one that prioritizes accountability and empathy in an increasingly complex world. This show is made possible by you! To become a sustaining member, go to LauraFlanders.org/donate

Description: “Thinking requires action and passion,” says feminist philosopher and scholar Donna Haraway in this unique conversation. In her 1985 essay “A Cyborg Manifesto” and her 2003 work “The Companion Species Manifesto,” Haraway challenged patriarchal, capitalist, binary, species-ist ways of looking at the world. It's no surprise that people are looking to her work again now. Generative thinking, she tells Laura, requires “taking the risk to try a new pattern; to invent something that may very well fall apart in your collective hands but leaves threads to be picked up again.” In this episode, Haraway and Flanders sit down for an expansive conversation about what it means to be human in an age of AI and about resisting what she calls authoritarian “mono-thought.” Plus, a commentary from Laura on staying in the present and “staying with the trouble.”

“An individual is embedded deeply in worlds with other people, with other organisms, with living and non-living parts of the world. To be a self is to come to a thicker appreciation and accountability for the way we're embedded in the world and act in the world. That's what I mean by being a proper self.” - Donna Haraway

Guest: Donna Haraway, Distinguished Professor Emerita, History of Consciousness Department, University of California Santa Cruz; author of “A Cyborg Manifesto,” “When Species Meet,” and “Staying with the Trouble: Making Kin in the Chthulucene”

Watch the episode released on YouTube; PBS World Channel, 11:30am ET Sundays, and on over 300 public stations across the country (check your listings, or search here via zip code). Listen: Episode airing on community radio (check here to see if your station airs the show) & available as a podcast January 21st, 2026. Full Episode Notes are located HERE.

Full Conversation Release: While our weekly shows are edited to time for broadcast on Public TV and community radio, we offer our members and podcast subscribers the full uncut conversation.

Music Credit: 'Thrum of Soil' by Blue Dot Sessions, 'Steppin' by Podington Bear, and original sound design by Jeannie Hopper.

Support Laura Flanders and Friends by becoming a member at https://www.patreon.com/c/lauraflandersandfriends

RESOURCES:
*Recommended book: “The Companion Species Manifesto: Dogs, People, and Significant Otherness” by Donna Haraway: *Get the book
(*Bookshop is an online bookstore with a mission to financially support local, independent bookstores.
The LF Show is an affiliate of bookshop.org and will receive a small commission if you click through and make a purchase.)

Related Laura Flanders Show Episodes:
• Pride Pioneers Holly Hughes & Esther Newton: How Queer Kinship Ties Help Us Survive: Watch / Listen: Episode Cut
• Survival Guide for Humans Learned from Marine Mammals with Alexis Pauline Gumbs: Watch / Listen: Episode Cut and Full Uncut Conversation
• “Powerlands”: Indigenous Youth Fight Big Oil & Gas Worldwide: Watch / Listen: Episode Cut and Full Uncut Conversation

Related Articles and Resources:
• Donna Haraway: Story Telling for Earthly Survival by Fabrizio Terranova - Watch
• Making Oddkin: Story Telling for Earthly Survival lecture at Yale - Watch
• You Are Cyborg by Hari Kunzru, February 1, 1997, WIRED
• Donna Haraway, Erasmus laureate 2025 at the Next Nature Museum, November 21, 2025, by Next Nature
• Rethinking Humanity with Donna Haraway: A Cyborg Manifesto for the AI Age, August 18, 2025, Philosopheasy

Laura Flanders and Friends Crew: Laura Flanders-Executive Producer, Writer; Sabrina Artel-Supervising Producer; Jeremiah Cothren-Senior Producer; Veronica Delgado-Video Editor; Janet Hernandez-Communications Director; Jeannie Hopper-Audio Director, Podcast & Radio Producer, Audio Editor, Sound Design, Narrator; Sarah Miller-Development Director; Nat Needham-Editor, Graphic Design Emeritus; David Neuman-Senior Video Editor; and Rory O'Connor-Senior Consulting Producer.

FOLLOW Laura Flanders and Friends
Instagram: https://www.instagram.com/lauraflandersandfriends/
Bluesky: https://bsky.app/profile/lfandfriends.bsky.social
Facebook: https://www.facebook.com/LauraFlandersAndFriends/
TikTok: https://www.tiktok.com/@lauraflandersandfriends
YouTube: https://www.youtube.com/channel/UCFLRxVeYcB1H7DbuYZQG-lg
LinkedIn: https://www.linkedin.com/company/lauraflandersandfriends
Patreon: https://www.patreon.com/lauraflandersandfriends

ACCESSIBILITY - The broadcast edition of this episode is available with closed captions by clicking here for our YouTube Channel
Have you ever felt overwhelmed by AI? Like… there are certain aspects of artificial intelligence that you barely understand to begin with, yet you're expected to use it AND it's changing every day? I understand where you're coming from. It's literally my only job to use, build with, and teach AI every day, and that's all I've done now for 3 years, and even I find it hard to keep up. But don't worry. That's where the ‘Start Here Series' comes into play. If one of your focuses is better understanding AI in 2026, or if you're an expert looking to double down, this Start Here Series is for you. In our first volume, we're going back to the basics. Generative AI: How it works and why it matters in 2026 more than ever -- An Everyday AI Chat with Jordan Wilson.

Other Start Here Series Episodes:
Ep 691: Generative AI: How it works and why it matters in 2026 more than ever (Start Here Series Vol 1)
(In the future, we'll update with other 'Start Here Series' episodes)

Start Here Series Community Sign up: Follow the Start Here Series with free access to our Inner Circle Community
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Generative AI Basics and 2026 Impact
Explosive Growth of Large Language Models
AI Adoption Rates in Enterprises
AI Agents and Operating Systems Overview
History and Evolution of Artificial Intelligence
Transformer Architecture and Model Breakthroughs
How Large Language Models Work
Modern AI Capabilities: Multimodal Tools
Quantifying ROI for Generative AI Investment
Workforce Disruption and Future Job Trends
Scaling AI: From Pilot to Enterprise-Wide
Urgency for AI Upskilling and Competitive Advantage

Timestamps:
00:00 "Start Here: AI Guide Series"
04:10 "Join Our Free Community"
09:17 "AI Operating Systems for Businesses"
11:05 "Partner with Everyday AI"
13:00 "AI Evolution Over Decades"
16:34 "ChatGPT's Transformative Impact"
22:17 "Generative AI and Memory Evolution"
26:03 "AI Delivers Exponential ROI"
29:59 "AI Demand Surges, Hiring Drops"
32:15 "AI Transforming CRMs Rapidly"
35:01 "AI

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
The 2024 presidential race was the first big election to happen in the new generative AI era. There have, of course, been major concerns that the technology could be used to deceive voters or interfere with the exercise of democracy. But so far, that kind of activity has been limited, according to Tim Harper, a senior policy analyst and coauthor of a recent report from the Center for Democracy and Technology.