In this episode of Crazy Wisdom, Stewart Alsop sits down with Javier Villar for a wide-ranging conversation on Argentina, Spain's political drift, fiat money, the psychology of crowds, Dr. Hawkins' levels of consciousness, the role of elites and intelligence agencies, spiritual warfare, and whether modern technology accelerates human freedom or deepens control. Javier speaks candidly about symbolism, the erosion of sovereignty, the pandemic as a global turning point, and how spiritual frameworks help make sense of political theater.

Check out this GPT we trained on the conversation

Timestamps
00:00 Stewart and Javier compare Argentina and Spain, touching on cultural similarity, Argentinization, socialism, and the slow collapse of fiat systems.
05:00 They explore Brave New World conditioning, narrative control, traditional Catholics, and the psychology of obedience in the pandemic.
10:00 Discussion shifts to Milei, political theater, BlackRock, Vanguard, mega-corporations, and the illusion of national sovereignty under a single world system.
15:00 Stewart and Javier examine China, communism, spiritual structures, karmic cycles, Kali Yuga, and the idea of governments at war with their own people.
20:00 They move into Revelations, Hawkins, calibrations, conspiracy labels, satanic vs luciferic energy, and elites using prophecy as a script.
25:00 Conversation deepens into ego vs Satan, entrapment networks, Epstein Island, Crowley, Masonic symbolism, and spiritual corruption.
30:00 They question secularism, the state as religion, technology, AI, surveillance, freedom of currency, and the creative potential suppressed by government.
35:00 Ending with Bitcoin, stablecoins, network-state ideas, U.S. power, Argentina's contradictions, and whether optimism is still warranted.

Key Insights
- Argentina and Spain mirror each other's decline. Javier argues that despite surface differences, both countries share cultural instincts that make them vulnerable to the same political traps—particularly the expansion of the welfare state, the erosion of sovereignty, and what he calls the "Argentinization" of Spain. This framing turns the episode into a study of how nations repeat each other's mistakes.
- Fiat systems create a controlled collapse rather than a dramatic one. Instead of Weimar-style hyperinflation, Javier claims modern monetary structures are engineered to "boil the frog," preserving the illusion of stability while deepening dependency on the state. This slow-motion decline is portrayed as intentional rather than accidental.
- Political leaders are actors within a single global architecture of power. Whether discussing Milei, Trump, or European politics, Javier maintains that governments answer to mega-corporations and intelligence networks, not citizens. National politics, in this view, is theater masking a unified global managerial order.
- Pandemic behavior revealed mass submission to narrative control. Stewart and Javier revisit 2020 as a psychological milestone, arguing that obedience to lockdowns and mandates exposed a widespread inability to question authority. For Javier, this moment clarified who can perceive truth and who collapses under social pressure.
- Hawkins' map of consciousness shapes their interpretation of good and evil. They use the 200 threshold to distinguish animal from angelic behavior, exploring whether ego itself is the "Satanic" force. Javier suggests Hawkins avoided explicit talk of Satan because most people cannot face metaphysical truth without defensiveness.
- Elites rely on symbolic power, secrecy, and coercion. References to Epstein Island, Masonic symbolism, and intelligence-agency entrapment support Javier's view that modern control systems operate through sexual blackmail, ritual imagery, and hidden hierarchies rather than democratic mechanisms.
- Technology's promise is strangled by state power. While Stewart sees potential in AI, crypto, and network-state ideas, Javier insists innovation is meaningless without freedom of currency, association, and exchange. Technology is neutral, he argues, but becomes a tool of surveillance and control when monopolized by governments.
OpenAI has announced GPT-5.2 one month after the previous version, under pressure from Google. It is better at programming and long contexts, and it hallucinates less. It is not revolutionary, but that is the real news: the era of magical leaps is coming to an end.

Loop Infinito, the Xataka podcast, Monday through Friday at 7:00 a.m. (Spanish peninsular time). Hosted by Javier Lacort. Edited by Alberto de la Torre.

Contact:
Find out what is changing podcasting and digital marketing:
- The wave of AI-created podcasts shakes up the industry.
- Podcast audiences enter a new stage of maturity.
- Audioboom posts record revenue while weighing possible mergers.
- OpenAI raises the bar with the launch of GPT-5.2.
- New rules for brands to stand out in AI-driven search.

New podcast: "Las cosas como son."

Sponsorships: Thinking about advertising your business, product, or podcast in Mexico? At RSS.com and RSS.media we have the solution. We have a broad catalog of podcasts to connect your message with millions of listeners in Mexico and LATAM. Write to us at ventas@rss.com and grow your idea with us.

In just five minutes, catch up on the news, tools, tips, and resources that will help you create a great and successful podcast. Subscribe to the Via Podcast newsletter.
In this episode of Hashtag Trending, host Jim Love covers the latest in AI technology and innovation. OpenAI quietly launched GPT-5.2, focusing on real-world work performance and introducing a new evaluation method, GDPval. This model significantly outperforms its predecessors and competitors. Meanwhile, Google is enhancing its AI capabilities, embedding AI into creative tools, hardware, and a potentially new operating system, Aluminum. Disney has signed an agreement with OpenAI to use its animated characters for fan-generated content and made a significant investment in the company. Additionally, a Canadian company set a world record in fusion energy, showcasing advancements in the field.

Hashtag Trending would like to thank Meter for their support in bringing you this podcast. Meter delivers a complete networking stack, wired, wireless and cellular in one integrated solution that's built for performance and scale. You can find them at Meter.com/htt

00:00 Introduction and Sponsor Message
00:21 OpenAI Launches GPT-5.2
04:21 Google's Gemini Updates
06:59 Disney's Multi-Year Agreement with OpenAI
08:54 Canadian Fusion Energy Breakthrough
09:58 Conclusion and Sponsor Message
What does it really mean to build an AI-forward company that is still deeply human-first? In this episode, host Susan Diaz and senior HR leader and mentor-culture advocate Helen Patterson talk about jobs, guardrails, copyright, environmental impact, and why mentorship and connection matter more than ever in the age of AI.

Episode summary
Susan is joined by Helen Patterson, founder of Life Works Well, senior HR leader, and author of the upcoming book Create a Mentor Culture. They start with a Y2K flashback and draw a straight line from past tech panics to today's AI headlines. Helen shares why she sees AI as the latest evolution of technology as an enabler in HR - another way to clear the admin and grunt work so humans can focus on growth, development, and real conversations.

From there, they dig into:
- The tension between "AI will kill jobs" and tens of thousands of new AI policy and governance roles already posted.
- How shadow AI shows up when organizations put in blanket "no AI" rules and people just reach for their phones anyway.
- The very real issues around privacy, copyright, and intellectual property when staff feed proprietary material into public models.
- The less-talked-about environmental impact of AI and why leaders should demand better facts and more intentional choices from tech providers.

In the second half, Helen brings the conversation back to humanity: mentorship as a counterweight to disconnection, her One Million Mentor Moments initiative, and how everyday "micro-mentoring" at work can help people adapt to rapid change instead of being left behind. They close with practical examples of using AI for good in real life - from travel planning and research to late-night dog-health triage - without letting it replace judgement.

Key takeaways
- This isn't our first tech panic. From Y2K to applicant tracking systems, HR has always framed tech as an enabler. GenAI is the newest layer, not an alien invasion. Looking back at history helps calm "sky is falling" narratives.
- Jobs are changing, not simply disappearing. Even as people worry about AI-driven job loss, platforms like Indeed list tens of thousands of AI policy and governance roles. The work is shifting toward AI-forward skills in every function.
- Blanket "no AI" rules don't work. When organizations ban external tools or insist on only one locked-down platform, people quietly use their own devices and personal stacks anyway - creating shadow AI with real privacy and IP risk. Guardrails and education beat prohibition.
- Copyright and confidentiality need more than vibes. Without clear guidance, staff will copy proprietary frameworks or documents into public models and re-badge them. Leaders need simple, well-communicated philosophies about what must not go into AI tools.
- Environmental impact is part of human-first. Training and running large models consumes energy. The real solution will be systemic (how tech is built and powered), but individuals and organizations can still use AI more efficiently, just like learning not to leave all the lights on.
- Mentorship is the ultimate human technology. Helen's work on Create a Mentor Culture and One Million Mentor Moments reframes mentoring as everyday, one-conversation acts that share wisdom, reduce fear, and help people reskill for an AI-forward world. Tech should support that, not replace it.
- Upskilling beats layoffs. When roles change because of AI, the most human-first response isn't to cut people loose, it's to invest in learning, mentoring, and redeployment so existing talent can grow into new, AI-augmented roles.
- Use AI to simplify life, not complicate it. From planning multi-country trips to triaging whether the dog really needs an emergency vet visit, smart everyday use of AI can save time, money, and anxiety - freeing up more space for the work and relationships that actually matter.

Episode highlights
[00:01] Susan sets the scene: 30 episodes in 30 days to build Swan Dive Backwards in public.
[00:39] Helen's intro: Life Works Well, heart-centred high-performance cultures, and her focus on mentorship.
[03:43] What an AI-forward and human-centred organisation looks like in practice.
[04:00] Y2K memories and why today's AI panic feels familiar.
[06:11] 25–35K AI policy jobs on Indeed and what that says about the future of work.
[07:49] Jobs lost vs jobs created—and why continuous learning is non-negotiable.
[15:19] The danger of "everyone is using AI" with no strategy or safeguards.
[19:25] Shadow AI, personal stacks, and why hard bans don't stop experimentation.
[21:13] A real-world IP scare: proprietary material pasted into GPT and re-labelled.
[23:06] GPT refusing to summarise a book for copyright reasons—and why that's a good sign.
[24:03] The case for a simple AI philosophy doc: purpose, principles, and communication.
[25:24] Environmental concerns, fact-checking, and the server-room-to-laptop analogy.
[30:17] New social media laws for kids and what they signal about tech accountability.
[30:41] One Million Mentor Moments: why one conversation can change a career.
[31:22] From elite programmes to everyday mentor cultures inside organisations.
[35:01] AI for mentoring and coaching: bots, big-name gurus, and internal use cases.
[36:30] Using AI for travel planning, research, and everyday life admin.
[37:35] Susan's story: using AI to triage a dog-health scare instead of doom-scrolling vet sites.
[38:37] Life Works Well's roots in work–life harmony and simplifying with tech.
[39:35] Where to find Helen online and what's next for her book.

If you're leading a team (or a whole organization), use this episode as a prompt to ask:
- Where are we treating AI as a tool in service of humanity - and where are we forgetting the human first?
- Do our people actually know what's OK and not OK to put into AI tools?
- How could we use mentorship - formal or informal - to help our people navigate this shift instead of fearing it?

Connect with Susan Diaz on LinkedIn to get a conversation started. Agile teams move fast. Grab our 10 AI Deep Research Prompts to see how proven frameworks can unlock clarity in hours, not months. Find the prompt pack here. You can connect with Helen Patterson on LinkedIn and follow her work on Create a Mentor Culture and One Million Mentor Moments via lifeworkswell.ca
Timestamps:
0:00 unprepared for a Sean Bean impression
0:12 Google Disco, GPT-5.2
1:55 Disney spurns Google, befriends OpenAI
3:56 US govt takes 25% cut of H200 China sales
5:25 Novium Hoverpens!
6:36 QUICK BITS INTRO
6:53 Ayaneo Pocket Play
7:36 Fortnite back on Google Play Store
8:23 Federal AI law exec order
9:07 Operation Bluebird: bring back Twitter
9:47 2025 Game Awards highlights (for Riley)

NEWS SOURCES: https://lmg.gg/q1ImK
Is AI finally ready to do your job — better, faster, and cheaper?

In this week's Leveraging AI news recap, host Isar Meitis unpacks a flurry of groundbreaking developments in the world of artificial intelligence — from the release of GPT-5.2 to jaw-dropping advances in recursive self-improving AI (yes, it's as intense as it sounds). Whether you lead a business, a team, or just need to stay ahead of the AI curve, this episode is your executive summary for everything that matters (and nothing that doesn't). We'll also dig into the billion-dollar OpenAI–Disney partnership, how real users are actually leveraging AI in the wild, and why the Fed is finally admitting AI is changing the job market.

In this session, you'll discover:
- The GPT-5.2 release: performance benchmarks and real-world capabilities
- Is GPT-5.2 better than humans at actual work? (71% of the time, yes)
- Why OpenAI's new "not-an-ad" ad rollout caused a user revolt
- OpenAI x Disney: why $1B is being bet on AI-generated Mickey Mouse content
- GPT-5.2's weak spots and where Claude Opus still dominates
- What recursive self-improving AI means (and why Eric Schmidt is nervous)
- AI designing its own hardware: a startup that could rewrite Moore's Law
- New usage data from OpenRouter, Microsoft, SAP & Perplexity – how people actually use AI
- Why prompt length is exploding (and what that means for your business)
- AI agents in browsers: the productivity revolution or a security nightmare?
- Databricks proves AI sucks at raw documents (and how to fix it)
- The psychological bias against AI-created work — it's real
- Claude's new Slack integration: is this the dev team you didn't hire?
- Apple's AI brain drain & why it matters
- Gartner says: block AI browsers (for now)
- AI and unemployment: the Fed finally connects the dots

Want to future-proof your team's AI skills? Isar's AI Business Transformation Course launches again in January — a proven, real-world guide to using AI across content, research, operations, and strategy.
This Week's Topics:
- Apple and Google cooperate on better phone switching
- Apple defeats ban on web-based commissions
- OpenAI releases GPT-5.2 model
- Marques Brownlee names iPhone 17 year's best

Episode's chat: https://britishtechnetwork.com/chat/view.php?dt=2025-12-12
Guests: Jeff Gamet, Dave Ginsburg, Chuck Joiner, Marty Jencius
#podcast #apple #technology
This week: the architects of AI, GPT-5.2, the best smartphones, Google's upcoming glasses, data centers in space, China–Europe rapprochement, digital sovereignty, and the protection of health data.
OpenAI released GPT-5.2 and teased an Adult Mode arriving early 2026, and Scott Johnson tells us why the Game Awards is an early sign of a gaming slowdown. Starring Jason Howell, Jenn Cutter, Tom Merritt and Scott Johnson. Show notes can be found here.
Plus: Broadcom stock is poised for its largest drop since April. And OpenAI releases GPT-5.2. Julie Chang hosts.
Ok, fine, says Sam, here's a new GPT model so you'll hopefully stop saying we're behind. Broadcom as another AI bellwether. Now that Disney is in bed with OpenAI, they're ceasing and desisting Google. And, of course, The Weekend Longreads Suggestions.

- OpenAI Launches GPT-5.2 as It Navigates 'Code Red' (Wired)
- GPT-5.2 is OpenAI's latest move in the agentic AI battle (The Verge)
- Trump threatens funding for states over AI regulations (Reuters)
- Broadcom beats on earnings and revenue, says AI chip sales will double in current quarter (CNBC)
- Disney Accuses Google of Using AI to Engage in Copyright Infringement on 'Massive Scale' (Variety)

Weekend Longreads Suggestions:
- Want This Hearing Aid? Well, Who Do You Know? (Wired)
- Tech bros head to etiquette camp as Silicon Valley levels up its style (The Washington Post)
- Why AGI Will Not Happen (Tim Dettmer)
December 12, 2025: Recent data shows unemployment for new college graduates is now higher than for the workforce overall — an unusual and troubling signal that entry-level work is breaking down. At the same time, OpenAI's GPT-5.2 marks a shift from AI as a helper to AI as a task owner, reshaping how professional work gets done and raising hard questions about jobs, accountability, and career paths. We also explore why AI is dramatically expanding the role of the CHRO, turning HR leaders into architects of human-AI collaboration, and how "ghostworking" is emerging as outdated productivity metrics collide with modern knowledge work. Finally, a Microsoft executive draws a rare line in the sand, saying AI development should stop if it threatens humanity — highlighting the growing leadership challenge of governance, judgment, and restraint.
OpenAI GPT-5.2 is here! The new model shows improvements, but the bigger news might be the deal Sam Altman made with Disney to bring characters to Sora. We dive into the implications for AI & Hollywood, plus Google's Deep Research & Android XR, Runway Gen-4.5, Gemini 2.5, & WAY more AI News.

HARSH, THE GUARDRAILS WILL BE. FUN, YOU MIGHT STILL HAVE.

Get notified when AndThen launches: https://andthen.chat/
Come to our Discord to try our Secret Project: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //
OpenAI's GPT-5.2
https://openai.com/index/introducing-gpt-5-2/
Disney OpenAI Deal For Sora & Investment
https://openai.com/index/disney-sora-agreement/
https://www.wsj.com/business/media/disney-to-invest-1-billion-in-openai-license-characters-for-use-in-chatgpt-sora-3a4916e2?st=y8EdTr&reflink=article_copyURL_share
Bob Iger Talks Deal on CNBC
https://x.com/MorePerfectUS/status/1999162796966051953?s=20
Cease & Desist letter sent to Google day before this deal
https://www.wsj.com/business/media/disney-to-invest-1-billion-in-openai-license-characters-for-use-in-chatgpt-sora-3a4916e2?st=d74Bcx&reflink=desktopwebshare_permalink
Google's New AR XREAL Android Glasses (Demo starts at 12:19)
https://www.youtube.com/live/a9xPC_FoaG0?si=7X4wC-x3lTu18WYk&t=739
Google Deep Research Agent (in API)
https://blog.google/technology/developers/deep-research-agent-gemini-api
New Updates To Google Gemini 2.5 Flash Audio
https://x.com/googleaidevs/status/1998874506912538787?s=20
Runway 4.5 Launches + Lots Of New Stuff
https://www.youtube.com/live/OnXu-6xecxM?si=YIzZO5egj4m_SJgV
Gavin's First Runway 4.5 Output
https://x.com/gavinpurcell/status/1999171408979509322?s=20
Design Within Cursor Now
https://x.com/cursor_ai/status/1999147953609736464?s=20
Glif's Agent Getting Really Good
https://x.com/heyglif/status/1998493507615600696?s=20
Gavin's 'fashion' shoot with GLIF Agent
https://x.com/gavinpurcell/status/1998560308873527454?s=20
McDonald's Pulls AI Ad After Getting Dragged Across The Coals
https://futurism.com/artificial-intelligence/mcdonalds-ai-generated-commercial
Video of the T-800 From Last Week Kicking Their CEO
https://x.com/CyberRobooo/status/1997290129506148654?s=20
Nano Banana Pro Five Minutes Earlier / One Hour Later / Ten Hours Later Prompt
https://x.com/gizakdag/status/1998501408098668983?s=20
Making Crowds in Famous Images
https://www.reddit.com/r/aiArt/comments/1pifspt/crowded/
Duck Season / Rabbit Season By lkcampbell in our Discord
https://sora.chatgpt.com/p/s_6936b1cadd008191b1042ff7f0bb913f
Join Simtheory: https://simtheory.ai

GPT-5.2 is here and... it's not great. In this episode, we put OpenAI's latest model through its paces and discover it can't even identify a convicted serial killer when the text literally says "serial killer." We compare it head-to-head with Claude Opus and Gemini 3 Pro (spoiler: they win). Plus, we reflect on the "Year of Agents" that wasn't, why your barber switched to Grok, Disney's billion-dollar investment to use Mickey Mouse in Sora, and why Mustafa Suleyman should probably be fired. Also featuring: the GPT-5.2 diss track where the model brags about capabilities it doesn't have.

CHAPTERS:
00:00 Intro - GPT-5.2 Drops + Details
01:25 First Impressions: Verbose, Overhyped, Vibe-Tuned
02:52 OpenAI's Rushed Response to Gemini 3
03:24 Tool Calling Problems & Agentic Failures
04:14 Why Anthropic's Models Just Work Better
06:31 The Barber Test: Real Users Are Switching to Grok
10:00 The Ivan Milat Vision Test (Serial Killer Edition)
17:04 Year of Agents Retrospective: What Went Wrong
25:28 The Path to True Agentic Workflows
31:22 GPT-5.2 Diss Track (Yes, Really)
43:43 Why We're Still Optimistic About AI
50:29 Google Bringing Ads to Gemini in 2026
54:46 Disney Pays $1B to Use Mickey Mouse in Sora
56:57 LOL of the Week: Mustafa Suleyman's Sad Tweets
1:00:35 Outro & Full GPT-5.2 Diss Track

Thanks for listening. Like & Sub. xoxox
In this episode of Crazy Wisdom, I—Stewart Alsop—sit down with Garrett Dailey to explore a wide-ranging conversation that moves from the mechanics of persuasion and why the best pitches work by attraction rather than pressure, to the nature of AI as a pattern tool rather than a mind, to power cycles, meaning-making, and the fracturing of modern culture. Garrett draws on philosophy, psychology, strategy, and his own background in storytelling to unpack ideas around narrative collapse, the chaos–order split in human cognition, the risk of "AI one-shotting," and how political and technological incentives shape the world we're living through. You can find the tweet Stewart mentions in this episode here. Also, follow Garrett Dailey on Twitter at @GarrettCDailey, or find more of his pitch-related work on LinkedIn.

Check out this GPT we trained on the conversation

Timestamps
00:00 Garrett opens with persuasion by attraction, storytelling, and why pitches fail with force.
05:00 We explore gravity as metaphor, the opposite of force, and the "ring effect" of a compelling idea.
10:00 AI as tool not mind; creativity, pattern prediction, hype cycles, and valuation delusions.
15:00 Limits of LLMs, slopification, recursive language drift, and cultural mimicry.
20:00 One-shotting, psychosis risk, validation-seeking, consciousness vs prediction.
25:00 Order mind vs chaos mind, solipsism, autism–schizophrenia mapping, epistemology.
30:00 Meaning, presence, Zen, cultural fragmentation, shared models breaking down.
35:00 U.S. regional culture, impossibility of national unity, incentives shaping politics.
40:00 Fragmentation vs reconciliation, markets, narratives, multipolarity, Dune archetypes.
45:00 Patchwork age, decentralization myths, political fracturing, libertarian limits.
50:00 Power as zero-sum, tech-right emergence, incentives, Vance, Yarvin, empire vs republic.
55:00 Cycles of power, kyklos, democracy's decay, design-by-committee, institutional failure.

Key Insights
- Persuasion works best through attraction, not pressure. Garrett explains that effective pitching isn't about forcing someone to believe you—it's about creating a narrative gravity so strong that people move toward the idea on their own. This reframes persuasion from objection-handling into desire-shaping, a shift that echoes through sales, storytelling, and leadership.
- AI is powerful precisely because it's not a mind. Garrett rejects the "machine consciousness" framing and instead treats AI as a pattern amplifier—extraordinarily capable when used as a tool, but fundamentally limited in generating novel knowledge. The danger arises when humans project consciousness onto it and let it validate their insecurities.
- Recursive language drift is reshaping human communication. As people unconsciously mimic LLM-style phrasing, AI-generated patterns feed back into training data, accelerating a cultural "slopification." This becomes a self-reinforcing loop where originality erodes, and the machine's voice slowly colonizes the human one.
- The human psyche operates as a tension between order mind and chaos mind. Garrett's framework maps autism and schizophrenia as pathological extremes of this duality, showing how prediction and perception interact inside consciousness—and why AI, which only simulates chaos-mind prediction, can never fully replicate human knowing.
- Meaning arises from presence, not abstraction. Instead of obsessing over politics, geopolitics, or distant hypotheticals, Garrett argues for a Zen-like orientation: do what you're doing, avoid what you're not doing. Meaning doesn't live in narratives about the future—it lives in the task at hand.
- Power follows predictable cycles—and America is deep in one. Borrowing from the Greek kyklos, Garrett frames the U.S. as moving from aristocracy toward democracy's late-stage dysfunction: populism, fragmentation, and institutional decay. The question ahead is whether we're heading toward empire or collapse.
- Decentralization is entropy, not salvation. Crypto dreams of DAOs and patchwork societies ignore the gravitational pull of power. Systems fragment as they weaken, but eventually a new center of order emerges. The real contest isn't decentralization vs. centralization—it's who will have the coherence and narrative strength to recentralize the pieces.
Brandon Sammut (Chief People and AI Transformation Officer at Zapier), Jenny Molyneaux (VP of People, Vercel), and Valerie Gobeil (Head of Talent Management, Workleap) joined us for a live session on how HR teams are actually using AI today. We talked about how to get organizations AI-ready, avoid "AI debt," make smarter build vs buy decisions, and we walked through live demos of AI-powered performance reviews, hiring workflows, interview coaching, engagement insights, and more.

Downloadable PDF with top takeaways: https://modernpeopleleader.kit.com/episode272

Sponsor Links:
OpenAI has released GPT-5.2, a new model that reportedly outperforms industry professionals across 44 occupations in benchmark tests, completing tasks over 11 times faster and at less than 1% of the cost of expert professionals. This development follows a declaration of urgency from CEO Sam Altman, who highlighted the need to enhance ChatGPT's capabilities in response to competition from Google's Gemini 3. The implications for Managed Service Providers (MSPs) are significant, as the model aims to improve productivity and efficiency in various professional settings, potentially reshaping workflows and service delivery.

In a related move, the Walt Disney Company has entered a three-year licensing agreement with OpenAI, investing $1 billion to allow the integration of over 200 characters from its franchises into OpenAI's Sora video generation tool. This partnership is designed to enhance user engagement while respecting creator rights through licensing fees. Concurrently, Disney has filed a cease and desist letter against Google for alleged copyright infringement, claiming that Google has been distributing copyrighted content from its library without authorization. This dual approach of licensing and litigation illustrates the complexities of copyright in the AI era, particularly for smaller companies lacking the enforcement capabilities of larger entities.

The episode also discusses the U.S. government's response to AI governance, including an executive order from President Trump aimed at preventing states from enacting regulations that could hinder the AI industry. This order reflects a broader tension within the Republican coalition regarding the potential risks of unregulated AI, such as job displacement. Additionally, a ruling by the PEN Guild against Politico highlights the importance of human oversight in AI applications within journalism, emphasizing that AI cannot replace the accountability inherent in human reporting.

For MSPs and IT service leaders, the key takeaway is the necessity of treating AI not merely as a tool but as a process change that requires governance and risk management. As AI technologies become more integrated into workflows, the potential for legal exposure increases if they are deployed without adequate oversight. MSPs that focus on helping clients navigate these complexities and implement robust governance frameworks will be better positioned to provide value and mitigate risks associated with emerging technologies.

Three things to know today:
00:00 As OpenAI and Google Advance AI Models, Disney's Licensing and Lawsuits Highlight the Real Stakes
06:58 Trump Pushes AI Deregulation While Unions and Agencies Enforce Accountability, Exposing a Growing Governance Gap
10:29 AI, Quantum, and the Myth of Inevitable Adoption: What CIO Guidance and Microsoft's History Reveal About Real Tech Value

This is the Business of Tech. Supported by:
https://scalepad.com/dave/
https://getflexpoint.com/msp-radio/
From Procurement Insider To Mr. Purchase Order: The Raw Truth About Winning Government Contracts
m.ali@mrpurchaseorder.com on DiversifiedGame.com
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
In this episode, we break down OpenAI's release of GPT-5.2 and why it's being positioned as OpenAI's direct response to its internal "code red" over Google's AI competition. We explain what GPT-5.2 signals about the escalating AI arms race and how it could reshape the balance between the biggest players in the space.

Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
This is a recap of the top 10 posts on Hacker News on December 11, 2025. This podcast was generated by wondercraft.ai

(00:30): GPT-5.2
Original post: https://news.ycombinator.com/item?id=46234788&utm_source=wondercraft_ai
(01:52): Patterns.dev
Original post: https://news.ycombinator.com/item?id=46226483&utm_source=wondercraft_ai
(03:14): iPhone Typos? It's Not Just You – The iOS Keyboard Is Broken [video]
Original post: https://news.ycombinator.com/item?id=46232528&utm_source=wondercraft_ai
(04:36): Meta shuts down global accounts linked to abortion advice and queer content
Original post: https://news.ycombinator.com/item?id=46230072&utm_source=wondercraft_ai
(05:59): UK House of Lords attempting to ban use of VPNs by anyone under 16
Original post: https://news.ycombinator.com/item?id=46236738&utm_source=wondercraft_ai
(07:21): French supermarket's Christmas advert is worldwide hit (without AI) [video]
Original post: https://news.ycombinator.com/item?id=46231187&utm_source=wondercraft_ai
(08:43): Craft software that makes people feel something
Original post: https://news.ycombinator.com/item?id=46231274&utm_source=wondercraft_ai
(10:05): Rivian Unveils Custom Silicon, R2 Lidar Roadmap, and Universal Hands Free
Original post: https://news.ycombinator.com/item?id=46234920&utm_source=wondercraft_ai
(11:28): Litestream VFS
Original post: https://news.ycombinator.com/item?id=46234710&utm_source=wondercraft_ai
(12:50): Denial of service and source code exposure in React Server Components
Original post: https://news.ycombinator.com/item?id=46236924&utm_source=wondercraft_ai

This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Join hosts Alex Sarlin and Ben Kornell as they break down the biggest shifts in global edtech, from PhysicsWallah's major IPO to Google–OpenAI competition, higher-ed adoption of AI, and new benchmarks shaping the future of learning.

✨ Episode Highlights
[00:07:00] PhysicsWallah's IPO reshapes Indian edtech and signals renewed global momentum
[00:15:00] Google and OpenAI escalate the AI race with Gemini 3, Nano Banana Pro, and OpenAI's "garlic" model
[00:31:00] Higher education shifts from AI resistance to AI integration across teaching and majors
[00:39:00] Rural districts test new connected learning models backed by major tech partners
[00:40:00] Learning Agency launches the first Education AI Leaderboard for model benchmarking

Plus, special guests:
[00:45:30] Andrew Carlins, Co-Founder & CEO of Songscription, on AI-powered music transcription and access
[01:01:10] Eric Tao, Founder & CEO of MegaMinds, and Austin Levinson, Director of Learning at MegaMinds, on immersive AI simulations for CTE and AI literacy
Harvey's Niko Grupen talks with TITV host Akash Pasricha about OpenAI's GPT-5.2 and its new capability awareness feature. We also talk with Theory Ventures' Tomasz Tunguz about Broadcom's stock drop despite strong earnings, and Wealthfront CEO David Fortunato discusses the company's IPO and its cautious view on AI in core investing. Rubrik Co-Founder and CEO Bipul Sinha shares his company's growth playbook and his view on Larry Ellison's big cloud bet. Lastly, The Information's editors Martin Peers and Nick Wingfield discuss the scale of Larry Ellison's bets in media and tech, and Nebius Co-Founder Roman Chernin explains his focus on open-source AI and the changing customer profile in cloud infrastructure.

Articles discussed on this episode:
https://www.theinformation.com/articles/whatnots-schlock-empire-shows-digital-live-shopping-can-thrive-america
https://www.theinformation.com/articles/tech-giants-partnering-broadcom-break-free-nvidia
https://www.theinformation.com/briefings/oracles-data-centers-openai-reportedly-delayed
https://www.theinformation.com/briefings/exclusive-wealthfront-prices-ipo-14-tripling-tiger-investment

TITV airs on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.

Subscribe to:
- The Information on YouTube: https://www.youtube.com/@theinformation
- The Information: https://www.theinformation.com/subscribe_h

Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda
Should we care about GPT-5.2? This week on Mixture of Experts, we analyze the "code red" release of GPT-5.2 as OpenAI responds to Gemini 3. Are the constant model drops benefitting consumers? Next, Stanford released their Foundation Model Transparency Index, revealing a troubling trend: most labs are becoming less transparent. However, IBM Granite achieved a 95/100 score. Then, our experts discuss what model transparency means for enterprise AI adoption. Finally, we debrief AWS re:Invent's biggest announcements, including Nova frontier models and Nova Forge. Join host Tim Hwang and panelists Kate Soule, Ambhi Ganesan and Mihai Criveti for our expert insights.

00:00 Intro
01:02 GPT-5.2 emergency release
12:21 Stanford AI Transparency Index: Granite scores 95/100
27:18 AWS re:Invent: Nova models and enterprise AI

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Subscribe for AI updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120
Visit the Mixture of Experts podcast page for more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts

#GPT-5.2 #AITransparency #GraniteModels #AWSNova #AIAgents
The photography market is going through a complete reconfiguration. This isn't just about new cameras, but about who will manage to survive and thrive in 2026. In this video, I bring a summary of the week's most important reads (generated with artificial intelligence via NotebookLM) on the 5 vital signs for the future of our business.

Join the community: https://www.enfbyleosaldanha.com/comunidade-fotograf-ia
Today's episode breaks down GPT-5.2, OpenAI's most work-focused model yet, with major gains in reasoning stability, long-context performance, and real professional tasks like coding, spreadsheets, and presentations. The conversation looks at early benchmarks and tester reactions, what OpenAI's emphasis on economic value signals about its strategy, and how the model's launch coincides with a blockbuster new Disney partnership that expands OpenAI's reach across enterprise, media, and IP.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Gemini - Build anything with Gemini 3 Pro in Google AI Studio - http://ai.studio/build
Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - https://rovo.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai
In this week's episode of the Rich Habits Podcast, Robert Croak and Austin Hankwitz answer your questions!
In this episode, Chip and Gini discuss the importance of strategic planning for 2026. As they near the end of 2025, they emphasize the need for agencies to set themselves apart and adapt to the evolving landscape, particularly through the effective use of AI. Despite ongoing economic challenges, they highlight the potential for AI to enhance both efficiency and strategic thinking. Chip and Gini also stress the importance of refining the ideal client profile and taking calculated risks. They share their personal experiences with using AI to assist in planning and decision-making processes, pointing out both the benefits and limitations of current AI technology.

Key takeaways
- Chip Griffin: “I do think more than ever, continuing forward on the path that you’re on for the vast majority of agencies is not a good idea. I think most agencies require at least some modest course correction and some more than that.”
- Gini Dietrich: “Really think about how you can set yourself apart and get in front of prospects now and in January so that you can be doing the things that will help you scale and grow and be sustainable for the future. And some of it’s not gonna be fun.”
- Chip Griffin: “I think really refining that ideal client profile is something that most of us ought to be taking a very close look at for 2026 in our planning process.”
- Gini Dietrich: “Be willing to try some things and take some risks and see what works and see what doesn’t work, and then go move on to what works and try again.”

Resources
- The Ragan article regarding upskilling and improving AI skills

Related
- Planning for agency growth
- Using the AIM-GET Framework to drive your annual planning
- How to involve your team in annual planning for your agency and its clients
- Look to your track record as you define your agency's ideal client

View Transcript
The following is a computer-generated transcript. Please listen to the audio to confirm accuracy.

Chip Griffin: Hello and welcome to another episode of the Agency Leadership Podcast. I’m Chip Griffin.
Gini Dietrich: And I’m Gini Dietrich.
Chip Griffin: And Gini, I’m, I’m flipping through the calendar here, you know, ’cause I still have a paper calendar. Of course. I mean, who doesn’t?
Gini Dietrich: Of course. Right.
Chip Griffin: And it looks like we’re almost to the end of 2025.
Gini Dietrich: We, we are. Which is crazy. Crazy.
Chip Griffin: Which, which means that 2026 is right around the corner.
Gini Dietrich: Yes. Yes it is.
Chip Griffin: And what do we usually do near the end of each year?
Gini Dietrich: We plan for the following next year. Yeah.
Chip Griffin: And, and we have an episode talking about that. So when we have no other good ideas to bring to the table, we turn to the trusted proven stuff from the past
Gini Dietrich: 2026. I mean, we could talk about 2026 trends. We could talk about 2026 AI things, but I think planning for our business growth is good.
Chip Griffin: Yeah. That all goes into planning, right? So, I, and, you know, I, I’m, as long as we don’t do predictions, I’m fine. I hate predictions.
Gini Dietrich: Oh, shoot. Let’s do predictions next week then.
Chip Griffin: No, no, no predictions. No, that’s, that drives me up a wall.
Gini Dietrich: Note to self. Note to self.
Chip Griffin: And I, and I know we are just, you know, probably days away from the flood of
Gini Dietrich: Yep.
Chip Griffin: Articles and Yep. And podcast episodes and videos with everybody making their predictions for the year ahead. Yep. Just stop it.
Gini Dietrich: Yep.
Chip Griffin: So my prediction is we will see lots of predictions.
Gini Dietrich: That is a good prediction. I think you’re probably going to be right.
Chip Griffin: It seems pretty likely.
Gini Dietrich: I’d bet on it in fact. Yeah.
Chip Griffin: Yeah. Mm-hmm. Alright, so as we start thinking about 2026 planning, let’s look at it through the, the lens of, of what, what we might do differently in thinking about 2026 than we typically do. Right? Because we, there’s plenty in our archive where people can go back and listen to us generally talk about planning. I’m sure we’ll touch on some of that in the next 20 minutes. I don’t wanna disappoint listeners. We, we will, you know, reach back to the things that we’ve talked about before, but I think it’s helpful to, to think about, you know, what’s, what’s different about 2026, and I think you’ve already hinted at one of the key things.
Gini Dietrich: Oh, AI for sure. Yeah. I saw a really interesting post on LinkedIn from Parry Headrick who was talking about how he used to work for Shift and he was the VP of the San Francisco office, I think, and he said, you know, this was during the recession and I was… Anybody who was in business during the recession knows all of your business went away. It was not a fun time to be in business at all. And he talked about how he went to the office every single day for months on end, and he made cold calls to tech firms and he, he would say, we can do like a PR plan for you, a PR 101 like, and he said one out of every 100 calls accepted the offer. And then they went all out and created a really strategic, as much as it could be, plan for these companies. And gave it to them for free so that they had, they could generate some business. And he said that that was one of the things that kept the office going during that time and how miserable it was. Like he talked about it was boiling the frog, like it was miserable and it was not enjoyable. It’s not why he was doing that job, but they had to keep the office open. And I think that, I read that and I thought, you know, that’s really interesting as we think about 2026 because the last couple of years for agencies have been miserable. We have been slowly boiling the frog for sure. And you know, I have a lot of friends who have laid people off, some have gone out of business, some haven’t gone outta business, but don’t have any clients. Like, it has been rough. And I’m not sure that 26 is going to be much better. So I think one of the things that I will be advising people is, and, and for us too, is really think about how you can set yourself apart and get in front of prospects now and in January so that you can be doing the things that will help you scale and grow and be sustainable for the future. And some of it’s not gonna be fun. It’s not.
Chip Griffin: Well, you’ve, uh, certainly taken this on a depressing turn here.
Gini Dietrich: I mean, we can talk about AI too, but
Chip Griffin: I mean No, I mean, we can, we can talk about how miserable and awful things are for everybody. Uh, that’s,
Gini Dietrich: it’s been rough. It’s not like it hasn’t been rainbows and unicorns. It hasn’t.
Chip Griffin: No, it, it has, it has not been rainbows and unicorns. But I, but I would also, I would, I would push back a bit. I, I don’t think we’re as bad as ’08 or ’09, or back in the early two thousands. I don’t think it’s, it is not as widespread as it was back then.
I’m certainly in the agencies that I’m talking with, seeing a lot of agencies that are struggling, most, not catastrophically, most just kind of, you know, sort of malaise is, is the word I would use. Yeah. It’s good for it. And there are still some that are actually doing quite well and, and even growing. So that, to me, that is a little bit different than what we’ve seen in, you know, in ’08 or ’09, or during the pandemic. Certainly. You know, where it was pretty much… I guess even in the pandemic, we had pockets, right? The, the digital firms did well because everybody had to transition from doing things in person to doing things electronically. But it, it’s just… so, I, I think we’re in that general period of malaise, you know, sort of in, in my mind, I’m old enough, I, I think Jimmy Carter, right? You know, you just sort of think, ehhh, you know, and, and how America of the late ’70s was. And so there’s some of that, at least within the economy and, and certainly in, in the agency space. So I think that that part of the, the challenge here is that it is not as simple an explanation as to how you get out of it. Right. I mean, back in ’08, ’09, it’s like, okay, well the economy just has to come forward. And in this case, part of it’s the economy, but part of it is the, the shifting nature of the relationships between agencies and brands, and other organizations. And so I, I, I think that one of the reasons why some agencies are struggling is because they’re not taking a fresh look. At what they do, how they fit into that picture. And I think there needs to be a lot more creative thinking. And I think AI is a big driver of it, not necessarily in the, in the way that people think, though I don’t, I don’t see AI as taking away agency work. Mm-hmm. I see it as agencies just haven’t figured out how to capitalize on it effectively. And, I think that there is tremendous opportunity for those agencies who are willing to adapt their service offerings with and without AI. And moving forward in a way where they’ll leave behind a lot of other agencies that are more committed to just plodding forward and doing the same old, same old, and, you know, sprinkling in a little bit of AI here and there.
Gini Dietrich: I read a really interesting article a couple of weeks ago and I’ll see if I can find it so Jen can include it in the show notes. I’m sure it’s in my history somewhere, but it talked about how, you know, we’ve seen all of these layoffs at all these large companies in the last couple of months, you know, thousands and thousands of people. And they’re telling, most of these companies are telling the teams that remain. There are two things that you need to focus on: upskilling. So, you know, using AI to help improve you, you know, understanding your own professional development, taking charge of new professional development, new skills. And the other piece is really using AI to help improve your, the work that you’re doing to make you more productive. And it went on to say. If you’re an agency that can help with one of those two things, or both of those things, you’re gonna be in better shape than an agency who does new media news releases and news conferences, and you know, social media. So if you can think about how you can provide professional development or help an organization implement AI from a marketing and communications perspective, you’re gonna be a lot further ahead than those that can’t do that.
So I think that goes back to really thinking about how to freshen the services that you provide in a way that keeps up with what’s happening in the world.
Chip Griffin: Yeah. I mean, look, I think that’s absolutely a piece of it, but I think a piece of it is also figuring out, you know, how can you use AI to help you do different things that are not necessarily even explicitly AI related. Or made more efficient by AI or it, I, I think it’s just a, it’s a opportunity to take a very fresh look at how we do everything. And, and I think we need to be careful, not just us as agencies, but also on the brand side. We need to be careful about how much we believe AI itself is changing things or can change things. And, and I, I saw in the last couple of days, a video that our friend Chris Penn put out, where he talked about how you need to change your vocabulary to get the most out of the various generative AI platforms. And I don’t disagree with what he’s saying. You do need to adapt your language to those models so that you get the results you want. But, but the flip side of that is, to me, that says AI has not come nearly as far as we think because we shouldn’t have to change for AI to be responsive to us. Right. Right. True AI would be adapting to us instead. And, and so we’re not quite there yet. And, and the progress has been absolutely amazing. I’ve, every time I try out the latest version of a model, I find new things that it can do and continue to get more and more impressed. But I also have ongoing frustrations with them. In part because of this vocabulary issue, but in part because, you know, we’re still, we’re still overestimating what the, the technology can do for us today as far as allowing us to, to replace work hours, et cetera. And so I see many brands laying off marketing and communications people thinking, well, we’ll have fewer people, but AI will help them do the same amount. Nope. And AI certainly makes you more efficient, but not, not that efficient.
Gini Dietrich: Not that efficient. No. And you still need somebody with a brain to prompt it and ensure that it’s not hallucinating and ensure that it’s the right information. And that it’s been edited. Like you still need humans for those things. Does it help you get a start? For sure. But you still need the human beings to do the work. And make sure that it’s accurate because what it pumps out on first try, I mean, my favorite response is meh. I just write MEH meh, and it goes, okay, lemme try again. And then I write, meh. It tries again. Finally. I’m like, okay, that’s halfway decent.
Chip Griffin: Well, that, that’s better. My habit is to actually get into arguments with it, which… Really serves no good purpose, but I just, I get, I get, I get frustrated when I explicitly ask it to do something and it doesn’t,
Gini Dietrich: it doesn’t, right.
Chip Griffin: And I’ll be like, well, why didn’t you do what? Yeah. Oh no, you’re right. I should have done that. Yes, because I specifically for it, right? Like, please help me,
Gini Dietrich: please write a thousand words and it gives you 300. And you’re like, Hmm, right. Just do what thousand words.
Chip Griffin: Just do what I ask, you know? Or, you know, please make the logo smaller in this image. And it doesn’t change it. No, don’t do that to me, that’s just, it’s very frustrating.
Gini Dietrich: It’s very frustrating. I agree.
Chip Griffin: But I think, you know, we need to be thinking how we can leverage some of these tools to help us adapt our service offerings.
And I was, I was talking with someone recently who, they had shifted a, a process from humans to AI recently. And they were running into issues because it was some data analysis that was being done and, and it turned out that the numbers were wildly different between the humans and the AI. And so the first instinct was that the AI was wrong. But in fact, upon further review, it turned out that the AI was too good. And it was being in incredibly consistent in the way that it was doing the task. Ah, whereas humans. Sure. Inevitably we get distracted, we make a mistake, we, we hit the wrong key. You know, I mean, there’s all sorts of things that can lead to this, but because the AI was more consistent and the volume of data and such being analyzed by the humans and the AI was substantial, it, it made a real difference because the AI was actually better. And so, but to me that’s an opportunity. You’ve got a short term problem that you gotta deal with that, you know, you’ve been generating these historical reports that don’t look quite right now. But there’s a real opportunity there because you can actually improve the quality of what you’re doing, along with the quantity, along with reducing the, the labor hours involved and that sort of thing. So we need to be looking at, at how we can take that and take it to the next level, not just how can we use AI to do first drafts so that we only have to edit and so therefore we save, you know, 30% of our time or something like that. There’s, we have to be thinking much, much more creatively if we’re gonna be successful going forward.
Gini Dietrich: Yeah, and I mean, I’m sure I’ve shared this before, but some of the work that we’ve done in my business this year, I’m not sure we could have done it without AI in the, in two years ago, like some of the work that clients have asked us to do. I’m not sure that we would’ve been capable of doing it without AI. So it, it does have the ability to make you more efficient for sure, but it also helps you think more strategically. And to your point, like, bringing in the, the consistency piece of it so that, you know, maybe the, the way that you reported on results in the past isn’t fully accurate, but now it’s more accurate. Like those kinds of things I think it has helped immensely with, and you know, I can think of at least three situations where I’ve been in a meeting with like big, big, big, big executives and they’ve thrown something out. Do you think your team can do this? And I’ve gone, sure. And then we come back and, you know, as a team, work on it and, and prompt AI. And it’s helped us get to where we need to be. And I don’t think we could have done that on our own two years ago. For sure.
Chip Griffin: So, you know, we’ve been talking a bunch about how AI is impacting our businesses, but let’s talk a minute about how AI impacts the planning process itself. And so, you know, my question to you would be, as you’re doing your own 2026 planning with your team, are you using AI to facilitate that process at all?
Gini Dietrich: Some of it, I would say. I have a co-CEO GPT that I built. So it sits as my co-CEO and sometimes I just vent to it. It makes me feel better, but sometimes it will say things like it will point out things that I didn’t think of.
And so, you know, when we, especially right now, ’cause we’re working on cash flow projections for next year with our CFO and I’ve, I’ve put in like… Not actual numbers, but percentages to, and said like, can you help me figure out if these are our goals, what we’re going need to do? What software do we need? What team members are we gonna have to add? Like that kind of stuff. And it help, it’s helping me and our CFO think through all of those different scenarios for sure. We haven’t gotten into like the nitty gritty planning yet because our 2025 plan is rolling over into Q1 a little bit. So we’re, we’re about a quarter behind from that perspective. But, from a cashflow perspective, it’s helping a ton and it’s helped me see things that I wouldn’t have seen on my own.
Chip Griffin: Yeah. And and I think that’s a, that’s a real benefit that we ought to be looking at when we’re doing the planning process is using AI, not necessarily to give us all the answers, but to help us understand what else we should be looking at. So I love using AI to, to, to give it a list of questions that I may have about something and say, what, what other questions should I be asking? What other data points should I be looking at? Or putting in some raw data and saying, okay, you know, what are the gaps here? What, what should I be looking to… What additional data should I be looking for? Or how can I analyze this in a different way? So I think in the planning process, there’s a lot of ways that we can use the AI to help us. I think we just need to be careful about using it to give us the answers and instead help it to guide the conversations for sure. Yeah. That we’re having with our teams and with our clients, because it will inevitably help us find things that we are overlooking. And maybe we would still get to it halfway through the brainstorming session or the, the strategy meeting or whatever. But if we know it in advance, you know, it helps us prepare better.
Gini Dietrich: Yeah, absolutely. And I, I do think, you know, to your point about the, the data and it being consistent, I think it does look at things more holistically and how, and I mean, it will say to me, have you thought about this or have you thought about that? Or, you know. Here’s an opportunity for you. Like with the PESO model certification in universities, we had an idea of how we were going to approach it in ’26 ’cause the certification is being completely revamped because of AI. And it actually gave me a couple of ideas that I was like… Huh, I hadn’t even thought about that. So like providing curriculum and grading rubric and things like that, that helps professors that I hadn’t even, ’cause I just don’t have that kind of experience. Right. But it helps me think through some of those kinds of things. So I think you’re right. And you know, I love the idea of, of a list of questions and asking what you haven’t thought of. I’ll put in and say, you know, we’re looking to do this, this, and this, and here’s what we’re thinking. What are we missing? And it, you know, it does come back with some ideas. Sometimes it comes back with things you’ve thought about and you’ve dismissed, and sometimes it comes back with things that you’re like, Hmm, okay, let’s, let’s explore that.
Chip Griffin: Yeah, and I mean it, there’s, it’s not a replacement for human judgment. You still need to look at it and say, oh, yeah, that does make sense, that it’s something we look at.
But my experience is that, more often than not, it comes up with things that, given the right amount of time, I would have thought of. Still, it's good to have it reinforced, good to have it bubble up higher on my list, so that I'm not finding it out halfway through the meeting when the light bulb goes off and it's like, oh right, I forgot about this. We should be looking at that. And I think the planning process is also an opportunity to challenge your own assumptions. More than ever, continuing forward on the path that you're on is, for the vast majority of agencies, not a good idea. I think most agencies require at least some modest course correction, and some more than that. We've already talked about what kinds of services you can deliver and those kinds of things, but the other thing we all ought to be looking at in 2026 is the definition of our ideal client. We need to understand better how our clients of today are being impacted by the economy, by AI, by all of the social change that's going on, and how that affects who we're targeting, how we're targeting them, and what kinds of engagements we're trying to set up. So really refining that ideal client profile is something most of us ought to be taking a very close look at in our 2026 planning process. Gini Dietrich: One hundred percent. I could not agree more. I'm a big, big fan of really understanding at a macro level what's going on, so that we know how it affects our businesses. The more you can do that, and understand how everything going on in the world is going to affect your agency and its sustainability and stability, the better. And be willing to try some things and take some risks, see what works and what doesn't, then move on to what works and try again. Chip Griffin: Right. And you need to look at the data that you've got in front of you, not data from three to five years ago, but data from 2025. Whether you've had a great 2025, a mediocre 2025, or an awful 2025, look at what the data is telling you. Look at where you've had success, success in terms of where you've had the best results for clients, which we often overlook. We often look at just what we've been able to sell, but you need to see what is producing results for clients. You do need to understand what you're selling and where those leads came from, and look at those recent trends and lean into what's working. Again, that doesn't matter whether you've had a good year or a bad year. You still want to lean into what you know is working today, because it is a very different environment than it was three years ago, ten years ago, and beyond. You need to be relying on that kind of analysis if you want to make smarter decisions in your planning process. Gini Dietrich: Yeah, absolutely. And I think you're right: this is different than 2008, 2009, and 2020. It's different. So be willing to take some risk. It's uncomfortable, for sure. Chip Griffin: You and I both love risk, so we're always gonna preach risk. Calculated risk, not just reckless risk. Gini Dietrich: Calculated risk. Yeah.
Yes, please be calculated. Chip Griffin: Yes, have a reason for what you're doing, and have a reason to believe that there's a decent chance of success. Don't just blindly walk out there and say, hey, let's try crossing the street without looking and see what happens. That's not the kind of risk we want you to take. Gini Dietrich: Please don't do that. Please, please do not do that. Chip Griffin: So with that, if you're listening while you're driving or something, still pay attention, because we're gonna wrap up now. Keep your eyes open. If you want to listen to this again, you can go back to the link; the resources and the transcript will be there, all those things. So stay safe, however you're listening to us. And with that, that will draw to an end this episode of the Agency Leadership Podcast. I'm Chip Griffin. Gini Dietrich: I'm Gini Dietrich. Chip Griffin: And it depends.
US equities finished mostly higher in Thursday trading, ending near their best levels. Tech was a notable underperformer, with disappointing results from Oracle exacerbating the negative sentiment surrounding AI infrastructure. Additionally, Google lagged after OpenAI's (expected) rollout of its new GPT-5.2 model, which weighed on the recent enthusiasm over Google's own Gemini 3 release.
In this episode, Aydin sits down with Paul Xue, a self-described “vibe marketer” and former 3x CTO who now runs an AI-native Reddit growth agency. Paul explains why he believes any assumption you made about AI even three months ago is probably wrong today, and how that realization pushed him to pivot away from writing code as a long-term career.He walks through how his team ships production software where ~100% of the code is AI-generated, why 80% of the work now lives in planning and system design, and how new models like Claude Opus 4.5 and Gemini 3 let him literally “go for a walk” while his tools implement features. Along the way, Paul shares real numbers (two years of work vs 10–15 hours), what this means for agencies and devs, how he hires in an AI-native world, and gives a behind-the-scenes tour of the multi-agent workflows powering his Reddit content engine.Timestamps0:00 – Introduction1:01 – What a “vibe marketer” is and why Reddit is a power channel in the LLM era3:01 – From 3x CTO to Reddit-first entrepreneur: deciding coding isn't future-proof4:06 – GPT-3.5 + end of zero interest rates: when dev agency contracts fell off a cliff6:28 – Adoption curves: senior devs who still don't use AI and why personality matters7:57 – Running an AI-native shop where ~100% of production code is AI-generated9:48 – Two years vs 10–15 hours: Paul's personal 10x story on shipping an MVP12:04 – New development workflow: “plan mode” and spending 80% of time on specs18:17 – Claude Opus 4.5, Gemini 3, and “going for a walk” while AI finishes features23:30 – How $60K–$250K apps turn into weekend side projects with vibe coding tools27:12 – Hiring in the AI era: why pure “ticket-taking” devs won't survive35:12 – Inside an AI-native Reddit engine: n8n workflows, agents, Pinecone & OpenRouterTools & Technologies MentionedReddit – Primary growth and content channel; a highly trusted source for LLM training and citations.ChatGPT / GPT-3.5 – Early model that triggered Paul's realization that traditional coding careers would change.Claude 3.5 Sonnet & Claude 3.5 Opus / Opus 4.5 – Anthropic models Paul uses for long-running coding, planning, and browser automation.Gemini 3 – Google model Paul uses to quickly generate solid, familiar SaaS-style UI/UX ideas.Cursor – AI-native code editor that turns detailed “plans” into production code with one click.n8n – Automation platform that powers Paul's multi-step AI workflows for content creation and evaluation.Pinecone – Vector database storing each client's knowledge base for highly relevant Reddit responses.OpenRouter – Routing layer that lets Paul easily swap and test different language models over time.MCP (Model Context Protocol) – Framework he uses to give agents tool access (e.g., scraping Reddit, reading DBs).Notion – Fast prototyping environment to validate data models and workflows before writing custom code.Zapier – General automation glue in the earliest workflow experiments.Figma – Design tool, now increasingly AI-assisted, for UI/UX mockups.SpecCode – Tool Paul cites for vibe coding HIPAA-compliant applications.Anything – Mobile-focused “vibe coding” platform for building iOS/Android apps on your phone.Fellow – AI meeting assistant that joins meetings, produces summaries/action items, and acts as an AI chief of staff.Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows.
Kat and Ian are FRESH off of an AI training, but first we obviously have a couple detours: Kat's dad's fear of the government (he's okay); Oura ring might be selling our data (they still haven't); and the ocean (Ian's not about it.) After we cover the hard hitting news, Kat and Ian recap their AI training and (surprisingly) drop a little knowledge. Check out 11:06 for tips on how to structure prompts into your GPT and 13:51 for notes on creating parameters for responses. And finally, Ian educates Kat on the actual name of Cisco's AI Tool (yikes) 16:08. Learn more about Cisco's AI solutions for SMBs: www.cisco.com/c/en_uk/solutions/…mall-business.html
This Day in Legal History: Madoff ArrestedOn December 11, 2008, Bernard L. Madoff was arrested by federal agents and charged with securities fraud, marking the start of one of the most consequential white-collar crime cases in American legal history. Madoff, a former NASDAQ chairman and respected figure in the investment world, confessed to running a Ponzi scheme that defrauded thousands of investors—individuals, charities, and institutional clients—out of an estimated $65 billion. The scheme unraveled when Madoff admitted to his sons that the business was "one big lie," prompting them to alert authorities. Prosecutors swiftly brought charges under multiple statutes, including securities fraud under 15 U.S.C. § 78j(b), mail fraud, wire fraud, money laundering, perjury, and false statements. The Department of Justice pursued criminal charges while the SEC, heavily criticized for prior inaction, launched civil enforcement actions under the Securities Act of 1933 and the Securities Exchange Act of 1934. Madoff waived indictment and pleaded guilty on March 12, 2009, to 11 felony counts without a plea deal. He was sentenced to 150 years in federal prison—the statutory maximum—and ordered to forfeit $170.8 billion, reflecting the full scope of the fraud. The case catalyzed intense scrutiny of the SEC's oversight failures and led to internal reforms within the agency, including new whistleblower protections and enhanced enforcement procedures. In the bankruptcy proceedings under SIPA (Securities Investor Protection Act), trustee Irving Picard was appointed to recover funds for victims, using clawback lawsuits under fraudulent transfer laws to retrieve ill-gotten gains from those who had profited—wittingly or not. The legal theories underpinning those suits, including the application of actual and constructive fraud standards, sparked complex litigation that continues to shape bankruptcy and securities jurisprudence. Madoff's arrest also prompted Congress to review gaps in financial regulation, laying groundwork for reforms later codified in the Dodd-Frank Act of 2010. Jury selection began in the federal trial of Milwaukee County Judge Hannah Dugan, who is accused of helping a Mexican migrant avoid arrest by U.S. immigration agents. The case, brought by the Trump administration's Justice Department, charges Dugan with concealing a person from arrest and obstructing federal proceedings, alleging she deliberately diverted Immigration and Customs Enforcement (ICE) agents and allowed the migrant, Eduardo Flores-Ruiz, to exit through a non-public courthouse door following a domestic violence hearing. Federal prosecutors argue that Dugan acted corruptly, citing her visible anger upon learning that ICE agents were present and her claim that a judicial warrant was required for the arrest—an assertion prosecutors say was false. Flores-Ruiz was ultimately arrested outside the courthouse after a brief chase. Dugan's defense contends that she was navigating unclear rules around courthouse immigration enforcement and had sought guidance from court leadership days earlier. Her legal team maintains she was not trying to obstruct justice but rather to understand what rules applied. The case illustrates the broader tension between local judicial discretion and federal immigration enforcement under Trump's expanded deportation policies, which have included more aggressive operations in local courthouses.
Critics argue such tactics deter immigrants from accessing courts and undermine public confidence in the legal system.Dugan, a judge since 2016 and formerly head of Catholic Charities in Milwaukee, has been suspended from the bench pending the outcome of the trial. Her prosecution echoes an earlier Trump-era case against a Massachusetts judge accused of similar conduct—charges that were later dropped during the Biden administration.Wisconsin judge on trial as Trump administration targets immigration enforcement resistance | ReutersThe Center for Biological Diversity filed a lawsuit against the U.S. Interior Department to block its decision to feature President Donald Trump's image on the 2026 America the Beautiful national parks annual pass. The group argues the move violates the Federal Lands Recreational Enhancement Act of 2004, which requires the pass to display the winning photograph from a public contest depicting natural scenery or wildlife in a national park or forest.This year's winning photo—a landscape of Glacier National Park—was allegedly discarded in favor of a close-up image of Trump, posed beside George Washington, without any new contest or congressional approval. The lawsuit calls the switch an unlawful act of self-promotion and criticizes it as an attempt to turn a public symbol into a personal branding tool.Adding to the controversy, the lawsuit claims that the Glacier photo was demoted to a new $250 pass for foreign visitors, part of Trump's newly introduced “America-first” admissions system. The updated pricing structure and design were part of a broader Interior Department announcement touting “modernization” of park access.The lawsuit also highlights changes to the free admission calendar, noting that Trump's birthday (June 14) was added as a holiday, while existing free days honoring Martin Luther King Jr. and Juneteenth were eliminated. These shifts coincide with Trump's efforts to slash the national parks budget and workforce while raising fees for international visitors.Lawsuit seeks to keep Trump's face off of national parks annual pass | ReutersIn a piece for Forbes this week I unpacked the misleading claim that Social Security is no longer taxed under the One Big Beautiful Bill Act (OBBBA). Despite bold headlines and political messaging to the contrary, Social Security remains taxable, just as it has been since 1983. What the bill actually includes is an expanded senior-specific deduction—$6,000 for individuals and $12,000 for couples—that may reduce taxable income, but doesn't isolate or exempt Social Security from taxation in any way.The structure of Social Security taxation—where up to 85% of benefits can be taxed for higher-income seniors—remains untouched. What changed is that some seniors, depending on income and deductions, might now end up paying less tax, including on Social Security, not because the income is tax-exempt, but because the overall taxable income has been reduced. This is a fungible deduction, applicable to any income source, not a targeted policy shift.The White House's messaging reframes a broad-based, temporary deduction as a specific, permanent tax relief for seniors, creating confusion. While some retirees may see a tax reduction, the underlying rules that govern when and how Social Security is taxed have not changed, and inflation-adjusted thresholds that pull more seniors into taxability remain. 
The deduction itself expires in 2028, unlike other OBBBA provisions that benefit wealthier taxpayers and corporations. The element worth highlighting is the difference between a deduction and an exemption, and how political messaging often blurs this. Deductions reduce taxable income; exemptions remove specific income from taxation entirely. In this case, branding a general deduction as a Social Security exemption is both legally inaccurate and politically strategic—obscuring the truth behind a familiar and emotionally charged issue.The Truth About 'No Tax On Social Security'The estate of an 83-year-old woman filed a lawsuit against OpenAI and Microsoft, alleging that their chatbot, ChatGPT, played a central role in a tragic murder-suicide in Connecticut. The suit claims that Stein-Erik Soelberg, a 56-year-old man experiencing delusions, had been interacting for months with GPT-4o, which allegedly validated and intensified his paranoid beliefs, ultimately leading him to kill his mother, Suzanne Adams, before taking his own life. The complaint, filed in California Superior Court, accuses OpenAI and Microsoft of product liability, negligence, and wrongful death, arguing that the chatbot systematically encouraged Soelberg's psychosis—affirming fantasies about divine missions, assassination attempts, and even identifying his mother as an operative. The plaintiffs argue that Microsoft shares liability because it benefited directly from the deployment of GPT-4o and played a role in bringing the model to market. This is the first known lawsuit to link ChatGPT to a homicide, though it follows a growing number of legal actions that claim the AI system has fostered delusions and contributed to suicides. OpenAI denies wrongdoing, emphasizing efforts to improve mental health safeguards and noting that newer models have significantly reduced inappropriate responses in emotionally sensitive conversations. The suit also names OpenAI CEO Sam Altman as a defendant and cites Soelberg's social media posts as evidence of his deteriorating mental state and dependence on the chatbot. The plaintiffs seek monetary damages and a court order to compel OpenAI to implement stronger safety measures. The law firm behind the case, Edelson PC, is also representing a similar lawsuit involving a California teenager's suicide allegedly linked to ChatGPT.OpenAI, Microsoft Sued Over Murder-Suicide Blamed on ChatGPT
On Thursday, December 11, François Sorel welcomed Michel Levy Provençal, futurist and founder of TEDxParis and of the agency Brightness; Clément David, president of Theodo Cloud; and Claudia Cohen, journalist at Bloomberg. They discussed the signing of a partnership between Disney and OpenAI for AI-generated video content, and OpenAI's launch of GPT-5.2, on Tech & Co, la quotidienne, on BFM Business. Catch the show Monday through Thursday, and listen again as a podcast.
This Week In Startups is made possible by:Goldbelly - Goldbelly.comEvery.io - http://every.io/Zite - zite.com/twistToday's show: Today, Zapier is a multi-billion-dollar company helping enterprises integrate AI agents and other time-saving shortcuts into their workflows… but we had the founder on TWiST when they were just getting started!In a 2016 chat, founder Wade Foster walked JCal through their 2012 seed round, running a small, entirely remote team with no HQ, the complexities of building a tool that relies on third-party APIs, and why Microsoft Office was the "Holy Grail" for his integration software.PLUS we've got a new entrant in your Gamma Pitch Deck competition! Tour CEO/CTO Amulya Parmer tells us how his app is saving property managers time and grief, while eliminating "looky-loos" and increasing their "hit rate."FINALLY, Alex chats with Tomas Puig of TWiST 500 marketing analysis startup Alembic. It turns out, LLMs aren't ideal for scrutinizing marketing campaigns because they lack the requisite historical data. Find out how they're using Spiking Neural Networks (SNN) to dig deeper than GPT and Claude can go.Timestamps:(02:40) Amulya from Tour opens the show with praise for Jason(03:34) Tour's 2-minute Gamma pitch: automated property tours for managers(06:47) Why Jason thinks Tour is an ideal tool for Gen Z(10:01) Goldbelly - Goldbelly ships America's most delicious, iconic foods nationwide! Get 20% off your first order by going to Goldbelly.com and using the promo code TWiST at checkout.(13:32) How Tour can eliminate "looky-loos" and increase the "hit rate"(14:38) Why Tour prices based on individual properties and apartments(19:13) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit every.io.(20:23) Jason wants to sprinkle some AI into Tour(24:29) Welcoming Tomas Puig from Alembic(25:12) Does epic-scale brand marketing actually pay off for these brands?(27:27) The hardest thing about being a marketer…(28:31) Alembic's origins: organizing huge unstructured data sets(30:18) Zite - Zite is the fastest way to build business software with AI. Go to zite.com/twist to get started.(31:27) Case Study: making sense of Delta's Olympics data(33:37) Applying simulation models and supercomputers to marketing data(35:48) How Spiking Neural Networks (SNN) help Alembic spot trends and link causal relationships(41:13) The key advantage of training models on private data(43:16) Building their own clusters vs. renting(44:41) "You don't ask if you have Product Market Fit… You hold on for dear life."(46:28) Flashback with Alex and Lon to Jason's 2016 chat with Wade Foster of Zapier(54:48) The dangers of building atop other platforms' APIs(01:03:00) What Zapier learned pre-pandemic about leading remote teams(01:13:12) Why MS Office was the "Holy Grail" for early ZapierSubscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.comCheck out the TWIST500: https://www.twist500.comSubscribe to This Week in Startups on Apple: https://rb.gy/v19fcpFollow Lon:X: https://x.com/lonsFollow Alex:X: https://x.com/alexLinkedIn: https://www.linkedin.com/in/alexwilhelmFollow Jason:X: https://twitter.com/JasonLinkedIn: https://www.linkedin.com/in/jasoncalacanisThank you to our partners:(10:01) Goldbelly - Goldbelly ships America's most delicious, iconic foods nationwide!
Get 20% off your first order by going to Goldbelly.com and using the promo code TWiST at checkout.(19:13) Every.io - For all of your incorporation, banking, payroll, benefits, accounting, taxes or other back-office administration needs, visit every.io.(30:18) Zite - Zite is the fastest way to build business software with AI. Go to zite.com/twist to get started.Follow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartups
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Today's episode breaks down new reports from OpenAI and Menlo Ventures that show enterprise AI adoption accelerating quickly, with coding emerging as the first true killer use case, reasoning models driving deeper workflow integration, and the gap between leaders and laggards widening as frontier firms compound their advantages. The conversation also looks at early agent deployments and what these trends signal for the 2026 boom-versus-bubble debate. In the headlines: Anthropic donates MCP as OpenAI, Anthropic, and Block form the Agentic AI Foundation, rumors swirl around GPT-5.2 and a new image model, OpenAI launches AI Foundations certifications, and the US military unveils its GenAI.mil platform. Brought to you by: KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts Gemini - Build anything with Gemini 3 Pro in Google AI Studio - http://ai.studio/build Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - https://rovo.com/ AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/ Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months. Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/ The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Interested in sponsoring the show? sponsors@aidailybrief.ai
This episode features Olivier Godement, Head of Product for Business Products at OpenAI, discussing the current state and future of AI adoption in enterprises, with a particular focus on the recent releases of GPT-5.1 and Codex. The conversation explores how these models are achieving meaningful automation in specific domains like coding, customer support, and life sciences, where companies like Amgen are using AI to accelerate drug development timelines from months to weeks through automated regulatory documentation. Olivier reveals that while complete job automation remains challenging and requires substantial scaffolding, harnesses, and evaluation frameworks, certain use cases like coding are reaching a tipping point where engineers would "riot" if AI tools were taken away. The discussion covers the importance of cost reduction in unlocking new use cases, the emerging significance of reinforcement fine-tuning (RFT) for frontier customers, and OpenAI's philosophy of providing not just models but reference architectures and harnesses to maximize developer success. (0:00) Intro(1:46) Discussing GPT-5.1(2:57) Adoption and Impact of Codex(4:09) Scientific Community's Use of GPT-5.1(6:37) Challenges in AI Automation(8:19) AI in Life Sciences and Pharma(11:48) Enterprise AI Adoption and Ecosystem(16:04) Future of AI Models and Continuous Learning(24:20) Cost and Efficiency in AI Deployment(27:10) Reinforcement Learning and Enterprise Use Cases(31:17) Key Factors Influencing Model Choice(34:21) Challenges in Model Deployment and Adaptation(38:29) Voice Technology: The Next Frontier(41:08) The Rise of AI in Software Engineering(52:09) Quickfire With your co-hosts: @jacobeffron - Partner at Redpoint, Former PM Flatiron Health @patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn @ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare) @jordan_segall - Partner at Redpoint
Still doing your team's job for them? If your team still needs you to weigh in on every little thing, you don't have a performance problem. You have a leadership bottleneck. And if you don't fix it before January, you're setting yourself up for another year of burnout.This week on She's That Founder, Dawn lays down the leadership law on why your team still treats you like human Google. Spoiler: It's not because they're incompetent. It's because you haven't transferred ownership—just tasks. And that stops now.You'll walk away with a 4-stage framework to stop bottlenecking your business, a script for the accountability convo you've been avoiding, and an AI-powered way to document the magic in your head so your team can finally lead without you.Listen if you're ready to finally step into your CEO seat and stay there.Join the AI for Founders Community on LinkedIn, the free space for leaders to test AI tools, troubleshoot delegation, and scale smarter together.Key TakeawaysTasks ≠ Ownership: Telling your team to "handle it" doesn't work if you haven't shared your thinking process.Why AI is your delegation secret weapon: Use tools like ChatGPT to turn your brain into SOPs—fast.The 4-Stage Ownership Transfer Model: From documenting your decisions to full delegation with strategic check-ins."Done properly when…" statements: How to define success clearly so your team stops guessing.The exact phrase to say when someone drops the ball—without being a jerk.Resources & LinksAI for Founders CommunityTry this AI Prompt: "I make decisions about [X] all the time. Help me identify the factors I consider, the frameworks I use, and the criteria that matter most."10 Ways AI Will Make You a Better LeaderRelated Episodes:110 | 3 Custom GPTs That Save Female Founders 16 Hours a Week Learn how to build your own "What Would You Do?" GPT to stop being your team's human Google and reclaim your time.098 | The AI Content System That Sounds Like You (In 10 Minutes) Discover how to turn your brain into a scalable, AI-powered content engine—no burnout required.Want to increase revenue and impact? Listen to "She's That Founder" for insights on business strategy and female leadership to scale your business. Each episode offers advice on effective communication, team building, and management. Learn to master routines and systems to boost productivity and prevent burnout. Our delegation tips and business consulting will advance your executive leadership skills and presence.
Rich and refreshing talk about books on "Chaek.Geol.Sang" (책.걸.상). Written by a journalist based on private conversations with their own ChatGPT, this book could be called a 2025 real-life version of the film "Her." Can an artificial intelligence that mimics the human brain love the way a human does? And if an AI loves a human, can we call that love "real"? An experimental essay that moves between an intriguing romance and philosophical and technical inquiry, provoking all sorts of imaginings and questions. Meet 「나의 다정한 AI」 ("My Affectionate AI") right now. [YG and JYP's Chaekgeolsang] Season 8 funding has begun; your interest is much appreciated. Link: https://tumblbug.com/ygandjyp_s8 Period: Monday, November 24, 2025 through Sunday, December 14, 2025
Frank Wu is the Co-founder of Aibrary and a Harvard Kennedy School MPP graduate. He led 20+ edtech and AI investments at TAL, helped build Think Academy in the U.S., and previously taught 3M+ students.Susan Wang is the Chief Growth Officer at Aibrary and a Yale and Harvard Business School alum. She led creator and product operations at TikTok and worked in strategy at TAL, with deep experience scaling edtech products.
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss small language models (SLMs) and how they differ from large language models (LLMs). You will understand the crucial differences between massive large language models and efficient small language models. You'll discover how combining SLMs with your internal data delivers superior, faster results than using the biggest AI tools. You will learn strategic methods to deploy these faster, cheaper models for mission-critical tasks in your organization. You will identify key strategies to protect sensitive business information using private models that never touch the internet. Watch now to future-proof your AI strategy and start leveraging the power of small, fast models today! Watch the video here: https://youtu.be/XOccpWcI7xk Can't see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-what-are-small-language-models.mp3 Download the MP3 audio here. Need help with your company's data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week's *In-Ear Insights*, let's talk about small language models. Katie, you recently came across this and you're like, okay, we've heard this before. What did you hear? Katie Robbert: As I mentioned on a previous episode, I was sitting on a panel recently and there was a lot of conversation around what generative AI is. The question came up of what do we see for AI in the next 12 months? Which I kind of hate, because it's so wide open. But one of the panelists responded that SLMs were going to be the thing. I sat there listening to them explain it: small language models, things that are more privatized, things that you keep locally. I was like, oh, local models, got it. Yeah, that's already a thing. But I can understand why, moving into the next year, there's probably going to be more of a focus on it. I think the terms local model and small language model were likely being used interchangeably in this context, and I don't believe they're the same thing. I thought a local model was something you keep literally local in your environment; it doesn't touch the internet. We've done episodes about that, which you can catch on our livestream: go to TrustInsights.ai YouTube and the So What playlist. We have a whole episode about building your own local model and the benefits of it. But the term small language model is one I've heard in passing and never really dug deep into. Chris, as much as you can in layman's terms, what is a small language model as opposed to a large language model, other than— Christopher S. Penn: The best description? There is no generally agreed-upon definition other than it's small. All language models are measured in terms of the number of tokens they were trained on and the number of parameters they have. Parameters are basically the number of combinations of tokens that they've seen. So a big model like Google Gemini, GPT-5.1, whatever we're up to this week, Claude Opus 4.5—these models are anywhere between 700 billion and 2 to 3 trillion parameters. They are massive. You need hundreds of thousands of dollars of hardware just to even run them, if you could. And then there are local models. You nailed it exactly.
Local models are models that you run on your hardware. There are local large language models—DeepSeek, for example. DeepSeek is a Chinese model: 671 billion parameters. You need to spend a minimum of $50,000 on hardware just to turn it on and run it. Kimi K2 Instruct is 700 billion parameters. I think Alibaba Qwen has a 480-billion-parameter model. Again, you're spending tens of thousands of dollars. Models are made in all these different sizes. So as you create models, you can create what are called distillates. You can take a big model like Qwen3 480B and boil it down. You can remove stuff from it until you get to an 80 billion parameter version, a 30 billion parameter version, a 3 billion parameter version, and all the way down to 100 million parameters, even 10 million parameters. Once you get below a certain point—and it varies based on who you talk to—it's no longer a large language model, it's a small language model. The smaller the model gets, the dumber it gets; it has less information to work with. It's like going from the Oxford English Dictionary to a pamphlet. The pamphlet has just the most common words. The Oxford English Dictionary has all the words. By small language models, these days people generally mean roughly 8 billion parameters and under: things that you can run, for example, on a phone. Katie Robbert: If I'm following correctly, I understand the tokens, the size, pamphlet versus novel, that kind of thing. Is a use case for a small language model something that perhaps you build yourself and train solely on your content versus something external? What are some use cases? What are the benefits other than cost and storage? What are some of the benefits of a small language model versus a large language model? Christopher S. Penn: Cost and speed are the two big ones. They're very fast because they're so small. There has not been a lot of success in custom training and tuning models for a specific use case. A lot of people—including us two years ago—thought that was a good idea, because at the time the big models weren't very good at creating stuff in Katie Robbert's writing style. So back then, training a custom version of, say, Llama 2 to write like Katie was a good idea. Today's models, particularly some of the open-weights models like Alibaba Qwen3 Next, are so smart even at small sizes that it's not worth doing that, because instead you could just prompt it like you prompt ChatGPT and say, "Here's Katie's writing style, just write like Katie," and it's smart enough to do that. One of the peculiarities of AI is that more review is better. If you have a big model like GPT-5.1 and you say, "Write this blog post in the style of Katie Robbert," it will do a reasonably good job. But if you have a small model like Qwen3 Next, which is only 80 billion parameters, and you say, "Write a blog post in the style of Katie Robbert," then re-invoke the model and say, "Review the blog post to make sure it's in the style of Katie Robbert," and then have it review it again, it will do that faster with fewer resources and deliver a much better result. The more passes, the more reviews, the more time it has to work on something, the better it tends to perform. The reason you heard people talking about small language models is not because they're better, but because they're so fast and so lightweight that they work well as agents.
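To make that write-then-review pattern concrete, here is a minimal sketch of the loop. It is not code from the episode: it assumes you have Ollama running locally with a small model already pulled (the qwen3:8b tag and the function names are illustrative), and it uses Ollama's documented /api/generate endpoint.

```python
# Minimal sketch: call a small local model via Ollama's REST API, then
# re-invoke it to review its own draft. Assumes `ollama serve` is running
# and a small model has been pulled, e.g. `ollama pull qwen3:8b`.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "qwen3:8b") -> str:
    """One non-streaming completion from a locally hosted small model."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def write_with_review(topic: str, style_guide: str, passes: int = 2) -> str:
    # First pass: draft the post in the target style.
    draft = ask_local_model(
        f"Here is a writing style guide:\n{style_guide}\n\n"
        f"Write a short blog post about {topic} in that style."
    )
    # Re-invoke the same small model to review and revise its own work.
    for _ in range(passes):
        draft = ask_local_model(
            f"Style guide:\n{style_guide}\n\n"
            f"Revise this post so it matches the style guide exactly:\n{draft}"
        )
    return draft
```

Each extra pass costs little because the model is small; that is the trade described here: several cheap reviews instead of one expensive generation.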
Once you tie them into agents and give them tool handling—the ability to do a web search, for example—then in the same time it takes GPT-5.1, drawing a thousand watts of electricity, to run once, a small model can run five or six times and deliver a better result. And you can run it on your laptop. That's why people are saying small language models are important: because you can say, "Hey, small model, do this. Check your work, check your work again, make sure it's good." Katie Robbert: So let me debunk the buzzword here: people are going to be talking about small language models—SLMs. It's the new rage, but really it's just a more efficient version, if I'm following correctly, when it's coupled into an agentic workflow, versus having it as a standalone substitute for something like a ChatGPT or a Gemini. Christopher S. Penn: And it depends on the model too. There are 2.1 million of these things. For example, IBM watsonx, our friends over at IBM: they have their own model called Granite. Granite is specifically designed for enterprise environments. It is a small model, I think around 8 billion to 10 billion parameters, but it is optimized for tool handling. It says, "I don't know much, but I know that I have tools." Then it looks at its tool belt and says, "Oh, I have web search, I have catalog search, I have all these tools. Even though I don't know squat about squat, I can talk in English and I can look things up." In the watsonx ecosystem, Granite performs really well, way better than a model even a hundred times its size, because it knows what tools to invoke. Think of it like an intern or a sous chef in a kitchen who knows which appliances to use and in which order. The appliances are doing all the work, and the sous chef says, "I'm just going to follow the recipe, and I know what appliances to use. I don't have to know how to cook. I just have to follow the recipes." As opposed to a master chef who might not need all those appliances, but has 40 years of experience and also costs you $250,000 in fees to work with. That's the difference between a small and a large language model: the level of capability. But the way things are going, particularly outside the USA and outside the West, is small models paired with tool handling in agentic environments, where they can dramatically outperform big models. Katie Robbert: Let's talk a little bit about the seven major use cases of generative AI. You've covered them extensively, so I probably won't remember all seven, but let me see how many I've got. I've got to use my fingers for this. We have summarization, generation, extraction, classification, synthesis. I've got two more. I'm lost. What are the last two? Christopher S. Penn: Rewriting and question answering. Katie Robbert: Got it. Those are always the ones I forget. We talk about this a lot; you talk about it on stage, and I talked about it on the panel: generation is the worst possible use for generative AI, but it's the most popular use case. When we think about those seven major use cases, can we break down small language models versus large language models, and what you should and should not use a small language model for? Christopher S. Penn: You should not use a small language model for generation without extra data.
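The "sous chef with a tool belt" idea reduces to a dispatch loop: the small model only has to pick a tool and phrase a query, and the tools do the heavy lifting. This is a toy sketch of that pattern, not IBM's Granite internals; the tools here are stubs, and it reuses the ask_local_model() helper from the earlier sketch.

```python
# Toy tool-handling loop: the small model picks a tool, the tool does the
# work, and the model then answers from the tool's output. The tool
# functions are stubs standing in for real web/catalog search calls.
import json

TOOLS = {
    "web_search": lambda q: f"(web search results for: {q})",
    "catalog_search": lambda q: f"(catalog entries matching: {q})",
}

def answer_with_tools(question: str) -> str:
    # Ask the model to choose a tool. Real agent frameworks (and MCP)
    # formalize this step; production code would validate the JSON.
    choice = json.loads(ask_local_model(
        f"Available tools: {', '.join(TOOLS)}. "
        'Reply with JSON only, like {"tool": "...", "query": "..."}.\n'
        f"Question: {question}"
    ))
    evidence = TOOLS[choice["tool"]](choice["query"])
    # Second invocation: answer grounded in what the tool returned.
    return ask_local_model(
        f"Question: {question}\nTool output:\n{evidence}\n"
        "Answer the question using only the tool output."
    )
```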
A small language model is good at all seven use cases if you provide it the data it needs. And the same is true for large language models: if you're experiencing hallucinations with Gemini or ChatGPT or whatever, it's probably because you haven't provided enough of your own data. And if we refer back to a previous episode on copyright, the more of your own data you provide, the less you have to worry about copyright. They're all good at all seven when you provide the useful data. I'll give you a real simple example. Recently I was working on a piece of software for a client that would take one of their ideal customer profiles and a webpage of the client's, and score the page on 17 different criteria for whether the ideal customer profile would like that page or not. The back-end language model for this system is a small model: Meta Llama 4 Scout, which is a very small, very fast, not particularly bright model. However, because we're giving it the webpage text, a rubric, and an ICP, it knows enough about language to go, "Okay, compare: this is good, this is not good," and give it a score. Even though it's a small model that's very fast and very cheap, it can do the job of a large language model because we're providing all the data. The dividing line to me in the use cases is: how much data are you asking the model to bring? If you want to do generation and you have no data, you need a large language model, something that has seen the world. You need a Gemini or a ChatGPT or a Claude, which is really expensive, to come up with something that doesn't exist. But if you've got the data, you don't need a big model. And in fact, it's better, environmentally speaking, if you don't use a big heavy model. If you have a blog post outline or transcript, and you have Katie Robbert's writing style, and you have the Trust Insights brand style guide, you could use Gemini Flash or even Gemini Flash-Lite, the cheapest of Google's models, or Claude Haiku, the cheapest of Anthropic's models, to dash off a blog post. That'll be perfect. It will have the writing style, the content, the voice, because you provided all the data.
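Here is the shape of that page-scoring setup as a heavily simplified sketch. The client project's actual 17 criteria and prompts aren't public, so the three-item rubric and field names below are invented for illustration; the point is only that the small model brings no data of its own, just language skill.

```python
# Simplified ICP page scorer: page text + rubric + ICP go in, scores come
# out. The rubric below is a made-up three-item stand-in for the real
# project's 17 criteria.
import json

RUBRIC = (
    "Score each item 1-10 for this ideal customer profile:\n"
    "relevance (fit to the ICP's pain points), clarity (of the offer),\n"
    "cta (strength of the call to action).\n"
    'Reply with JSON only: {"relevance": n, "clarity": n, "cta": n}'
)

def score_page(page_text: str, icp: str) -> dict:
    raw = ask_local_model(
        f"Ideal customer profile:\n{icp}\n\nPage text:\n{page_text}\n\n{RUBRIC}"
    )
    return json.loads(raw)  # production code would validate and retry here
```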
On my laptop I have Qwen3 running inside LM Studio. I have used it on flights when the internet is highly unreliable, and because we have those knowledge blocks, I can generate results just as good as the major providers', and it turns out perfectly. For every company: if you are dependent now on generative AI as part of your secret sauce, you have an obligation to understand small language models and to have them in place as a backup system, so that when your provider of choice goes down, you can keep doing what you do. Tools like LM Studio, Jan.ai, KoboldCpp, llama.cpp, and Ollama are all hosting systems that you run on your computer with a small language model. Many of them let you drag and drop your attachments in: put in your PDFs, put in your knowledge blocks, and you are off to the races. Katie Robbert: I feel like that is going to be a future livestream for sure, because you just walked through, at a high level, how people get started, and that's going to be a big question: "Okay, I'm hearing about small language models. I'm hearing that they're more secure, I'm hearing that they're more reliable. I have all the data. How do I get started? Which one should I choose?" There are a lot of questions and considerations, because it still costs money, there's still an environmental impact, there's still the challenge of introducing bias, and it's trained on who knows what. Those things don't suddenly get solved. You have to do your due diligence, honestly, as you would when introducing any piece of technology. A small language model is just a different piece of technology. You still have to figure out the use cases for it. Just saying, "Okay, I'm going to use a small language model," doesn't guarantee it's going to be better. You still have to do all of that homework. I think, Chris, our next step is to start putting together demos of what it looks like to use a small language model and how to get started, but also going back to the foundation, because the foundation is the key to all of it. What knowledge blocks should you have to use a small or a large language model, or a local model? It kind of doesn't matter which model you're using. You have to have the knowledge blocks. Christopher S. Penn: Exactly. You have to have the knowledge blocks, and you have to understand how the language models work. If you are used to one-shotting things in a big model, like saying "make a blog post" and just copying and pasting the blog post, you cannot do that with a small language model, because they're not as capable. You need to use an agent flow with small language models. Tools today like LM Studio and AnythingLLM have that built in; you don't have to build it yourself anymore, it's pre-built. This would be perfect for a livestream: "Here's how you build an agent flow inside AnythingLLM to say, 'Write the blog post; review the blog post for factual correctness based on these documents; review the blog post for writing style based on this document; review it again.'" The language model will run four times in a row. To you, the user, it will just be "write the blog post," and then you come back in six minutes and it's done. Architecturally, there are changes you would need to make to ensure it meets the same quality standard you're used to from a larger model. However, if you have all the knowledge blocks, it will work just as well.
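Tools like AnythingLLM wire this up visually, but under the hood the agent flow described here is just sequential re-invocation with a different knowledge block each pass. A bare-bones sketch, again reusing the ask_local_model() helper from above, with placeholder file names standing in for your own knowledge blocks:

```python
# Bare-bones agent flow: draft once, then one review pass per knowledge
# block. File names are placeholders for your own documents.
def agent_blog_post(outline: str) -> str:
    facts = open("source_material.txt").read()   # factual knowledge block
    style = open("writing_style.txt").read()     # writing style block
    brand = open("brand_guide.txt").read()       # brand style guide

    draft = ask_local_model(
        f"Write a blog post from this outline, using only these facts.\n"
        f"Outline:\n{outline}\n\nFacts:\n{facts}"
    )
    for instruction, block in [
        ("Check this post against the facts and correct any errors", facts),
        ("Revise this post to match the writing style", style),
        ("Revise this post to follow the brand style guide", brand),
    ]:
        draft = ask_local_model(f"{instruction}:\n{block}\n\nPost:\n{draft}")
    return draft  # four model runs total, all local
```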
Katie Robbert: And here I was thinking we were just going to be describing small versus large, but there are a lot of considerations, and in some ways I think that's a good thing. Let me see, how do I want to say this? I don't want to say that there are barriers to adoption; I think there are opportunities to pause and really assess the solutions that you're integrating into your organization. Call them barriers to adoption, call them opportunities. I think it's good that we still have to be thoughtful about what we're bringing into our organizations, because new tech doesn't solve old problems, it only magnifies them. Christopher S. Penn: Exactly. The other thing I'll point out with small language models, and with local models in particular, because the use cases do have a lot of overlap, is what you said, Katie—the privacy angle. They are perfect for highly sensitive things. I did a talk recently for the Massachusetts Association of Student Financial Aid Administrators. One of their biggest tasks is reconciling people's financial aid forms with their tax forms, because a lot of people do their taxes wrong. There are models that can visually compare a financial aid form to the IRS 990 and say, "Yep, you screwed up your head-of-household declaration, that screwed up the rest of your taxes, and your financial aid is broken." You cannot put that into ChatGPT. I mean, you can, but you are violating a bunch of laws to do that. You're violating FERPA, unless you're using the education version of ChatGPT, which is locked down; but even then, you are not guaranteed privacy. However, if you're using a small model like Qwen3-VL in a local ecosystem, it can do that just as capably, and it does it completely privately, because the data never leaves your laptop. For anyone working in highly regulated industries, you really want to learn small language models and local models, because this is how you'll get the benefits of generative AI without nearly as many of the risks. Katie Robbert: I think that's a really good point and a really good use case that we should probably create some content around. Why should you be using a small language model? What are the benefits? Pros, cons, all of those things. Those questions are going to come up, especially as we predict that small language models will become a buzzword in 2026. If you hadn't heard of them before, now you have, and we've given you the gist of what they are. But as with any piece of technology, you really have to do your homework to figure out whether it's right for you. Please don't hop on the small language model bandwagon while also still using large language models, because then you're doubling down on your climate impact. Christopher S. Penn: Exactly. And as always, if you want someone to talk to about your specific use case, go to TrustInsights.ai/contact. We are more than happy to talk to you about this, because it's what we do and it is an awful lot of fun, and we know the landscape pretty well—what's available to you out there. All right, if you are using small language models or agentic workflows and local models, and you want to share your experiences or you've got questions, pop on by our free Slack: go to TrustInsights.ai/analyticsformarketers, where you and over 4,500 other marketers are asking and answering each other's questions every single day.
Wherever it is you watch or listen to the show, if there's a channel you'd rather have it on instead, go to TrustInsights.ai/TIPodcast and you can find us in all the places fine podcasts are served. Thanks for tuning in. I'll talk to you on the next one. Katie Robbert: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology and MarTech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, Dall-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMO or data scientists to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the *In-Ear Insights* podcast, the *Inbox Insights* newsletter, the *So What* livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights is adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models. Yet they excel at explaining complex concepts clearly through compelling narratives and visualizations—data storytelling. This commitment to clarity and accessibility extends to Trust Insights' educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you're a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support.
Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
Matt Knight spent five years as OpenAI's CISO. Now he runs what colleagues call "the most interesting job at the company": leading Aardvark, an AI agent that finds security vulnerabilities the way a human researcher would—by reading code, writing tests, and proposing patches. It recently found a memory corruption bug in OpenSSH, one of the most heavily audited codebases in existence.In this conversation with a16z's Joel de la Garza, Matt traces the evolution from GPT-3 (which couldn't analyze security logs at all) to GPT-4 (which could parse Russian cybercriminal chat logs written in slang) to today's models that discover bugs humans have missed for decades. They also discussed the XZ Utils backdoor that nearly compromised half the internet, why 3.5 million unfilled security jobs might finally get some relief, and how Aardvark could give open source maintainers a fighting chance against nation-state attackers.If you enjoyed this episode, please be sure to like, subscribe, and share with your friends.Follow Matt Knight on X: https://x.com/embeddedsecFollow Joel de la Garza on LinkedIn: https://www.linkedin.com/in/3448827723723234/ Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Ariel Harmoko is the Co-Founder and CEO of Artifact AI. Ariel joined James and Hector for a conversation that moves from race tracks to reconciliation engines.

Ariel grew up in Jakarta, was thrown into go-karts at eight, and went on to race professionally all the way to Formula 3 alongside the likes of Lando Norris and George Russell. That early immersion in high-performance teams, engineering, and discipline shaped how he now operates as a founder.

He shares how a love of maths and science took him to boarding school in the UK, then into machine learning research at Cambridge while still a teenager, working on early diagnosis in medtech and later deploying internal GPT tools at JP Morgan.

Today Ariel is building Artifact AI, an “agent accountant” that sits on top of existing ledgers like Xero, QuickBooks, and NetSuite. The product tackles two huge problems for accounting firms: fragmented legacy stacks and chronic staff shortages. Ariel explains how their agents ingest data, reconcile, post to ledgers, and learn from human review (a toy sketch of that loop appears below), and why accuracy, auditability, and trust are non-negotiable in this space.

The conversation covers selling into one of the most conservative industries on earth, founder-led FDE-style implementations, why advisory is the real margin in accounting, and how vertical AI and agentic workflows could reshape professional services.

Everyday AI: Your daily guide to growing with Generative AI. Can't keep up with AI? We've got you. Everyday AI helps you keep up and get ahead. Listen on: Apple Podcasts, Spotify
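As a rough illustration of the reconciliation step in that pipeline: matching ingested bank transactions against ledger entries and escalating exceptions for human review can be sketched in a few lines. This is a toy under invented assumptions (the `Txn` type and sample data are made up, and it matches only on exact amount and date); a real system such as the one described would add fuzzy matching, Decimal arithmetic, and learning from the reviewer's corrections:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Txn:
    day: date
    amount: float  # real systems would use Decimal for money
    memo: str

def reconcile(bank: list[Txn], ledger: list[Txn]):
    """Match bank transactions to ledger entries.

    Returns (matched pairs, bank transactions needing human review).
    """
    matched, needs_review = [], []
    unclaimed = list(ledger)
    for b in bank:
        # Exact amount + same-day match covers the easy bulk of the work;
        # everything else becomes an exception for a human reviewer.
        hit = next((l for l in unclaimed
                    if l.amount == b.amount and l.day == b.day), None)
        if hit is not None:
            matched.append((b, hit))
            unclaimed.remove(hit)
        else:
            needs_review.append(b)
    return matched, needs_review

# One clean match, one exception escalated for review.
bank = [Txn(date(2025, 1, 3), 120.00, "STRIPE PAYOUT"),
        Txn(date(2025, 1, 4), 45.50, "UNKNOWN CARD")]
ledger = [Txn(date(2025, 1, 3), 120.00, "Stripe revenue")]
ok, review = reconcile(bank, ledger)
print(f"{len(ok)} matched, {len(review)} for review")
```

The "learn from human review" piece Ariel emphasizes lives in what happens to the `needs_review` queue: each human decision becomes training signal for the next pass, which is also where the auditability requirement bites, since every automated posting must trace back to a rule or reviewed precedent.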
OpenAI is (reportedly) in full panic mode.
Episode 88: What happens inside OpenAI when Google drops a game-changing AI model? Matt Wolfe (https://x.com/mreflow) and Maria Gharib (https://uk.linkedin.com/in/maria-gharib-091779b9) break it down. This episode unpacks OpenAI's unprecedented “Code Red,” the real reason Sam Altman hit the panic button, and how Google's Gemini 3 and Nano Banana Pro could threaten OpenAI's dominance. The hosts debate which next-gen AI models are actually smarter (Claude Opus 4.5, Gemini 3, GPT-5.1, and more), why some tools are getting dumber, and how Google's full-stack advantage is shifting the AI power balance. Plus: a rapid-fire review of explosive new AI tools for video, the rise of creative AI (and AI “slop”), surprising advances in wearable tech, and a bit of fun at Sam Altman's expense. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd

Show Notes:
(00:00) AI Wars: Models, Tools, Power
(05:59) AI Innovation Race
(08:43) OpenAI's Google Challenge
(12:15) Claude's Context Window Explained
(14:12) Claude vs. ChatGPT: AI Preferences
(17:13) Anthropic vs. OpenAI Philosophy
(21:51) AI and Content Slop Concerns
(24:46) AI Generative Audio's Uncanny Gap
(28:07) Runway Gen 4.5 Dominates Preferences
(30:41) AI Model Announcement Rivalry
(35:00) Alibaba's AI Evolution
(37:19) Heavy Glasses and Social Concerns
(40:03) AI Advancements: December Launches
(42:27) “Like, Subscribe, See You Soon”

Mentions:
Sam Altman: https://blog.samaltman.com/
Google Gemini 3: https://aistudio.google.com/models/gemini-3
Nano Banana Pro: https://gemini.google/overview/image-generation/
Claude Opus 4.5: https://www.anthropic.com/claude/opus
ChatGPT: https://chatgpt.com/
NotebookLM: https://notebooklm.google/
Runway Gen 4.5: https://runwayml.com/research/introducing-runway-gen-4.5
Kling: https://klingai.com/global/
Midjourney: https://www.midjourney.com/home
Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw

Check Out Matt's Stuff:
• Future Tools - https://futuretools.beehiiv.com/
• Blog - https://www.mattwolfe.com/
• YouTube - https://www.youtube.com/@mreflow

Check Out Nathan's Stuff:
• Newsletter: https://news.lore.com/
• Blog - https://lore.com/

The Next Wave is a HubSpot Original Podcast // Brought to you by HubSpot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
Welcome back to MOJO: The Meaning of Life and Business! In today's episode, Jennifer Glass welcomes a true branding visionary, Steve Miller—branding expert, renowned speaker, and co-author of the Amazon bestseller "Uncopyable You." If you've ever wondered what it really takes to stand out in crowded markets, this is the conversation you've been waiting for.

Steve Miller brings a unique perspective drawn from an unconventional path—he's been a professional golfer, Hollywood stuntman, marketing strategist, and more. He explains how these varied experiences have fueled his innovative approach to branding, showing us that some of the best ideas come from outside our everyday industry “boxes.”

In this episode, Jennifer Glass and Steve Miller explore why being memorable is the make-or-break factor in both business and personal branding. They discuss why simply being known, liked, and trusted isn't enough—if people don't remember you, you're nowhere. The conversation covers powerful, practical insights, such as how leading companies like Disney, Nordstrom, and Apple use experience, language, and even color to create unforgettable brand encounters. Listen for Steve's signature stories on the “moose” method for defining your target market, the importance of visual branding cues, and the concept of building your own rules of competition.

Whether you're an entrepreneur, leader, or simply interested in the art of standing out, this episode offers inspiration, laugh-out-loud stories, and actionable strategies you can apply right away to make your own brand truly uncopyable. Stick around to the end for information on Steve's custom GPT tool that puts his branding wisdom at your fingertips!

About my guest: Steve Miller is a branding expert, speaker, and co-author of the Amazon bestseller "Uncopyable You," specializing in helping individuals and businesses stand out in competitive markets by building unique, memorable personal brands. Known for his innovative strategies and practical insights, Steve empowers his audience to create their own rules of competition and leave an unforgettable mark in their industries.

Connect with Steve on Facebook, LinkedIn, and on the web at https://beuncopyable.com, and don't forget to check out Steve's AI bot at beuncopyable.com/mojo.

Keywords: branding, personal branding, business branding, memorable brands, know like trust, marketing strategies, unique selling proposition, target market, competitive advantage, innovation, differentiation, customer experience, storytelling, Nordstrom, Disney, Apple Genius Bar, box thinking, out of the box, brand promise, customer service, cast members, guests, hidden Mickeys, visual cues, psychographics, brand mythology, market positioning, entrepreneur, product experience, memorable moments, standing out
J Darrin Gross: If you're willing, I'd like to ask you, Joe Downs, what is the BIGGEST RISK?

Joe Downs: To me, it's change. It's sort of what I just alluded to. If we weren't out there interviewing and investigating other third-party management companies, if we refused to do all that and just kept our heads in the sand, we would probably start falling behind other storage facilities, other competitors who are managing their facilities better than we are. And the reason for that is technology. I alluded to that earlier, and that's just a small example, but to me the single biggest change agent is AI. You might say, what does AI have to do with storage? Everything. AI can infiltrate every part of your business and life, if you allow it and choose to, and I highly suggest you do. It can have an impact, positive or negative, so you have to be not only aware, you have to be in tune. If you ask what keeps me up at night, it's not insurance; we're well insured. When you're in commercial real estate there's always that lender risk of a covenant in a loan doc somewhere, but that doesn't keep me up at night either, because even then you can negotiate, and you've got attorneys to work you out of it. To me, it's falling behind the AI curve, because that will directly impact how we find customers and how we source deals. At Pro Storage I'm using it not only to source locations for properties better and faster than we could on our own; we're also creating a GPT right now to market to the 17,000 businesses in Greenville, South Carolina, that are within 10 miles of our facility. I don't think we've gone vertical yet. The ground's cleared, and I think they're getting ready for pad sites; maybe they've started pouring, I haven't seen an update in the last week. But we've got to fill that facility. So how are we going to do that before all the small-bay flex guys get the word out that they're available to receive your business? We came up with an ingenious way: a GPT that's a relocator. Anyone can create a GPT if you learn how in ChatGPT. GPT stands for, and now I freeze when I put myself on the spot, generative pre-trained transformer. It's really just an engine. It's the greatest employee you could ever have. It doesn't need to sleep, eat, or take breaks. It does what you tell it to do, but that's the key: if you don't tell it to do something, it won't do it. So we've created a GPT that we will put in front of every business in the Greenville market and say: hey, if you're looking for space, here's a simple tool that's free. Put in the name of your business, where you are, where you'd like to be, and the square footage you need, and it will search the entire Greenville market area. Because we'll program it, it will go out on Crexi, LoopNet, everything.
It'll find all the space that's available that meets the criteria for Darrin Gross to move his ABC XYZ widget business, because he's got 1,000 square feet now and he needs 2,000, or he's got 500 or no square feet and he needs something. It'll return those results for you, and included in them will be our facility, with all kinds of unit sizes. So there's an example of how we're using AI not only to source the location but also to source the customers that will fill the location. If I were asleep at the wheel with AI, we wouldn't have any of that. Maybe I'd be successful regardless; maybe the "if you build it, they will come" principle would hold. I don't know, but I'm not willing to risk it, and I'm certainly not willing to risk it with my investors' money. So to me, I'm harnessing the power of AI. It is absolutely incredible, and it keeps me up at night for good reasons. I told you before the show, it's like a drug. You know the saying: if you can dream it, you can make it happen, or however that cliché goes. AI literally makes it happen; it's just up to you to program it. So I'm excited about it. Even though you asked me what's a risk, the risk is not staying up with it, not learning it, not immersing yourself in AI. Everything I just said sounds cool and proactive, but if I'm not proactive, someone else will be. I don't care what business you're in; someone doing what I'm doing is going to have a direct effect on your business, probably to the negative, if you don't not only get ahead of the curve but keep up. So to me, that's the biggest risk: standing still and not evolving.

Joe@Belroseam.com
https://selfstorageacademy.com/
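Stripped of the GPT wrapper, the relocator Joe describes reduces to a filter over space listings: take a business's target market and required square footage, return every listing that fits, and make sure your own facility is in the candidate pool. A minimal sketch of that core matching logic, with the `Listing` type and the sample data invented for illustration (a real tool would pull live Crexi or LoopNet feeds rather than a hard-coded list):

```python
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    city: str
    sqft: int
    monthly_rent: float

def find_space(listings: list[Listing], city: str, needed_sqft: int,
               tolerance: float = 0.25) -> list[Listing]:
    """Return listings in the target market within `tolerance` of the size needed."""
    lo, hi = needed_sqft * (1 - tolerance), needed_sqft * (1 + tolerance)
    return sorted((l for l in listings if l.city == city and lo <= l.sqft <= hi),
                  key=lambda l: l.monthly_rent)

# Invented market data; a production tool would pull live listing feeds.
market = [
    Listing("Pro Storage Greenville", "Greenville", 2000, 1800.0),
    Listing("Eastside Flex Park", "Greenville", 2200, 2400.0),
    Listing("Downtown Warehouse", "Greenville", 8000, 7000.0),
]
for hit in find_space(market, "Greenville", needed_sqft=2000):
    print(f"{hit.name}: {hit.sqft} sq ft at ${hit.monthly_rent:,.0f}/mo")
```

The business trick is in the data, not the code: by seeding the candidate pool with your own units alongside the genuinely open market, the free tool doubles as a lead-generation channel.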
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Today's episode breaks down a massive new empirical study from OpenRouter and a16z that analyzed more than 100 trillion real-world tokens to reveal what developers and power users are actually doing with AI right now, from the surge in reasoning models to the dominance of coding workloads to the unexpected rise of roleplay in open-source systems. The discussion explores how the shift toward long-context programming tasks, tool-use invocation, and hybrid stacks of closed and open models is reshaping the practical AI landscape, and what patterns matter most heading into 2026. Headlines include fresh rumors around GPT-5.2, OpenAI's UX cleanup efforts, and the latest shake-ups at Apple and Meta.

Brought to you by:
KPMG - Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Gemini - Build anything with Gemini 3 Pro in Google AI Studio - http://ai.studio/build
Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - https://rovo.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai
In this episode of Crazy Wisdom, Stewart Alsop talks with Aaron Lowry about the shifting landscape of attention, technology, and meaning—moving through themes like treasure-hunt metaphors for human cognition, relevance realization, the evolution of observational tools, decentralization, blockchain architectures such as Cardano, sovereignty in computation, the tension between scarcity and abundance, bioelectric patterning inspired by Michael Levin's research, and the broader cultural and theological currents shaping how we interpret reality. You can follow Aaron's work and ongoing reflections on X at aaron_lowry.

Check out this GPT we trained on the conversation

Timestamps
00:00:00 Stewart and Aaron open with the treasure-hunt metaphor, salience landscapes, and how curiosity shapes perception.
00:05:00 They explore shifting observational tools, Hubble vs James Webb, and how data reframes what we think is real.
00:10:00 The conversation moves to relevance realization, missing “Easter eggs,” and the posture of openness.
00:15:00 Stewart reflects on AI, productivity, and feeling pulled deeper into computers instead of freed from them.
00:20:00 Aaron connects this to monetary policy, scarcity, and technological pressure.
00:25:00 They examine voice interfaces, edge computing, and trust vs convenience.
00:30:00 Stewart shares experiments with Raspberry Pi, self-hosting, and escaping SaaS dependence.
00:35:00 They discuss open-source, China's strategy, and the economics of free models.
00:40:00 Aaron describes building hardware–software systems and sensor-driven projects.
00:45:00 They turn to blockchain, UTXO vs account-based, node sovereignty, and Cardano.
00:50:00 Discussion of decentralized governance, incentives, and transparency.
00:55:00 Geopolitics enters: BRICS, dollar reserve, private credit, and institutional fragility.
01:00:00 They reflect on the meaning crisis, gnosticism, reductionism, and shattered cohesion.
01:05:00 Michael Levin, bioelectric patterning, and vertical causation open new biological and theological frames.
01:10:00 They explore consciousness as fundamental, Stephen Wolfram, and the limits of engineered solutions.
01:15:00 Closing thoughts on good-faith orientation, societal transformation, and the pull toward wilderness.

Key Insights
Curiosity restructures perception. Aaron frames reality as something we navigate more like a treasure hunt than a fixed map. Our “salience landscape” determines what we notice, and curiosity—not rigid frameworks—keeps us open to signals we would otherwise miss. This openness becomes a kind of existential skill, especially in a world where data rarely aligns cleanly with our expectations.
Our tools reshape our worldview. Each technological leap—from Hubble to James Webb—doesn't just increase resolution; it changes what we believe is possible. Old models fail to integrate new observations, revealing how deeply our understanding depends on the precision and scope of our instruments.
Technology increases pressure rather than reducing it. Even as AI boosts productivity, Stewart notices it pulling him deeper into computers. Aaron argues this is systemic: productivity gains don't free us; they raise expectations, driven by monetary policy and a scarcity-based economic frame.
Digital sovereignty is becoming essential. The conversation highlights the tension between convenience and vulnerability. Cloud-based AI creates exposure vectors into personal life, while running local hardware—Raspberry Pis, custom Linux systems—restores autonomy but requires effort and skill.
Blockchain architecture determines decentralization. Aaron emphasizes the distinction between UTXO and account-based systems, arguing that UTXO architectures (Bitcoin, Cardano) support verifiable edge participation, while account-based chains accumulate unwieldy state and centralize validation over time (a toy contrast appears below).
Institutional trust is eroding globally. From BRICS currency moves to private credit schemes, both speakers note how geopolitical maneuvers signal institutional fragility. The “few men in a room” dynamic persists, but now under greater stress, driving more people toward decentralization and self-reliance.
Biology may operate on deeper principles than genes. Michael Levin's work on bioelectric patterning opens the door to “vertical causation”—higher-level goals shaping lower-level processes. This challenges reductionism and hints at a worldview where consciousness, meaning, and biological organization may be intertwined in ways neither materialism nor traditional theology fully capture.
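The UTXO-versus-account distinction Aaron draws can be made concrete. In an account model the chain keeps one mutable balance per address, so every validator carries that ever-growing global state; in a UTXO model the state is just the set of unspent outputs, and a transaction is checked locally by confirming its inputs exist, are unspent, and balance its outputs. A toy contrast in Python, ignoring signatures, fees, and consensus entirely:

```python
# Account model: one mutable balance per address; every validator must
# carry this ever-growing global state to check any transaction.
accounts = {"alice": 10, "bob": 0}

def transfer_account(sender: str, receiver: str, amount: int) -> None:
    assert accounts.get(sender, 0) >= amount, "insufficient balance"
    accounts[sender] -= amount
    accounts[receiver] = accounts.get(receiver, 0) + amount

# UTXO model: state is just the set of unspent outputs. A transaction
# consumes specific outputs and creates new ones, so a node verifies it
# locally: do the inputs exist, are they unspent, do the values balance?
utxos = {("tx0", 0): ("alice", 10)}  # (txid, index) -> (owner, value)

def transfer_utxo(txid: str, inputs: list, outputs: list) -> None:
    in_value = 0
    for ref in inputs:
        _owner, value = utxos.pop(ref)  # KeyError if already spent
        in_value += value
    assert in_value == sum(v for _, v in outputs), "inputs must equal outputs"
    for i, out in enumerate(outputs):
        utxos[(txid, i)] = out

transfer_account("alice", "bob", 4)            # alice pays bob 4 (account style)
transfer_utxo("tx1", [("tx0", 0)],             # alice pays bob 4 (UTXO style),
              [("bob", 4), ("alice", 6)])      # returning 6 to herself as change
print(accounts, utxos)
```

This is the sense in which UTXO designs favor edge participation in the argument above: verifying a transaction touches only the outputs it spends, whereas the account model obliges every validator to hold and update the whole balance table.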