This Week In Startups is made possible by:
- Northwest Registered Agent - northwestregisteredagent.com/twist
- Quo - quo.com/TWiST
- Gusto - Gusto.com/twist
- Plaud - http://Plaud.ai/twist
- Athena - https://www.athena.com/jcal

Today's show:
How long until AI models can improve AI models? Once possible, recursive self-improvement by AI technology could accelerate — forever. Thus far, humans (and their coding agents) are still driving AI progress. But a recent project by AI developer extraordinaire Andrej Karpathy, called 'autoresearcher', is turning heads as it shows that it is possible, in certain contexts, to let AI agents run successive coding experiments to improve specific elements of LLM performance. Call it an early demonstration of the future.

OpenClaw is exploding in China, while here in the United States, AI is polling somewhere underneath the basement. AI in the United States is about as popular as ICE, which could create a political issue for the technology in the coming elections.

Next? Three demos. First, NetXD's Suresh Ramamurthi showed off how he has built OpenClaw functionality to move money, Rohan Arun showed off PhoneClaw automation on Android devices from an AR headset, and Eugene Stuckless gave us a taste of what Eir is building. Our takeaway? OpenClaw is still boring its way into our digital lives, one new skill or tool at a time!

GUESTS:
Suresh Ramamurthi: https://x.com/sureshr7
Rohan Arun: https://x.com/Viewforge/
Eugene Stuckless: https://x.com/eugene_eir_inc

Timestamps:
0:00 — 'Autoresearcher' and the future of AI improvements
6:52 — Why people around the world are flocking to OpenClaw
7:57 — Plaud - If your work depends on conversations — interviews, meetings, calls — you need a Plaud NotePin. You can check it out at Plaud.ai/twist and use code TWIST for 10% off!
9:46 — Gusto - Check out the online payroll and benefits experts with software built specifically for small business and startups. Try Gusto today and get three months FREE at Gusto.com/twist.
12:57 — The changing American social contract
20:15 — Quo - Quo (formerly OpenPhone) gives you a clean, modern way to handle every customer call, text, and thread all in one place. Try it free at quo.com/TWIST
23:50 — Why China is all-in on AI (and Europe isn't)
26:26 — How to keep your job in the AI era
28:05 — Northwest Registered Agent - Get more when you start your business with Northwest. In 10 clicks and 10 minutes, you can form your company and walk away with a real business identity — Learn more at www.northwestregisteredagent.com/twist
29:38 — Athena - Get $2,000 off your first EA at https://www.athena.com/jcal
34:42 — Demo: Suresh Ramamurthi of NetXD
42:47 — Demo: Rohan Arun of PhoneClaw
47:35 — Why bringing OpenClaw to your smartphone is what's next
49:49 — Demo: Eugene Stuckless of Eir
56:45 — How can we make smarter, more efficient agents?

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex | LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason | LinkedIn: https://www.linkedin.com/in/jasoncalacanis

Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland

Check out Jason's suite of newsletters: https://substack.com/@calacanis

Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com
Discord delays their age-gating rollout, legislators are pushing for operating systems including Linux to verify ages, LLM licence laundering might mean the end of copyleft, and how and why you might want to detect Meta's spy camera glasses.

News:
- Getting Global Age Assurance Right: What We Got Wrong and What's Changing
- US state laws push age checks into the operating system
- California introduces age verification law for all operating systems, including Linux and SteamOS — user age verified during OS account setup
- I have actually read the text of California law CA AB1043 and, honestly, I don't hate it
- Do you really think that circumventing these things will always be a simple firmware mod or hardware hack?
- Relicensing with AI-assisted rewrite – Tuan-Anh Tran
- Chardet dispute shows how AI will kill software licensing, argues Bruce Perens
- No right to relicense this project
- Hide from Meta's spyglasses with this new Android app
- Dear Meta Smart Glasses Wearers: You're Being Watched, Too

Automox Turnkey Results: Endpoint management tailored to your specific environment. Know the plan. Trust the result. Learn more at www.automox.com

Support us on Patreon and get an ad-free RSS feed with early episodes sometimes. See our contact page for ways to get in touch. RSS: Subscribe to the RSS feeds here.
After experiencing Planet Nix and SCaLE, we come back convinced the next phase of Linux is already taking shape.

Sponsored By:
- Jupiter Party Annual Membership: Put your support on automatic with our annual plan, and get one month of membership for free!
- Managed Nebula: Meet Managed Nebula from Defined Networking. A decentralized VPN built on the open-source Nebula platform that we love.

Support LINUX Unplugged

Links:
You're invited into a legacy family audio business that refused to accept "good enough" on feedback control and instead chased the impossible: a truly zero‑latency, AI‑driven way to push your PA louder without squeals. You follow Devin Sheets from growing up on sound gigs to roaming European stages, then back home to build the De‑Feedback plugin for working musicians: a live‑sound feedback suppressor and on‑the‑fly impulse‑response generator that listens like a seasoned engineer, separating human voice, room reverb, background noise, and feedback in real time so you can grab at least 6 dB more gain before things start to howl. Along the way you see how NAMM sparked the idea, how inverse impulse responses and probability math beat old EQ and gate tricks, and how "homebrew AI" meant sneaking into every empty church at 3 a.m. just to teach the model what real rooms actually sound like. You also learn how to think like a modern working musician: using social media to find the right AI programmers across the world, leaning on LLMs to translate, collaborate, and even rate contractor work so you can move faster without losing control. You come away knowing you can drop a dedicated De‑Feedback box or plugin into almost any rig, from churches to touring consoles to tiny clubs, take it with you even when someone else is behind the board, and quietly stack the deck in your favor. In the end, it's a roadmap for how you run your own gigs and career: stay curious, embrace new tools, protect your sound, and Always Be Performing.
00:00:00 Gig Gab 524 – Monday, March 9th, 2026
- March 9th: National Meatball Day
- Guest co-host: Devin Sheets from Alpha Labs

00:02:12 Let's Grow this Legacy Family Business
- Grew up doing sound; also a musician; lived in Europe
- Then came back and said, "let's grow this family business!"

00:03:44 We haven't "just solved" this feedback problem
- Went to NAMM for the first time, and was inspired
- There are automated EQ-based or gate-based systems
- PSE plugin from Waves; 5045 for feedback

00:04:57 Why isn't there a "balanced audio"-type solution for feedback?
- Balanced audio fixes hums and it just works.

00:08:24 NAMM is a great inspiration…and it inspired Devin and his team to seek a feedback plugin solution
- People get entrenched
- Inverse Impulse Response methodology

00:12:35 Training the AI to listen for three things: human voice, reverb, and feedback
- Created a de-reverb algorithm and went beyond that
- A probability calculation does the math

00:16:05 Truly zero latency for the plugin
- Workflow latency remains

00:19:32 I don't have any coding or AI background, but I have a gut feeling AI will fix this feedback problem
- Others: It's harder than you think
- Devin: I knew that it needed to happen

00:20:58 Finding an AI programmer who was interested in doing it
- Experimented with some programmers, failed, learned some things!

00:21:09 Social media to the rescue!
- Late 2023: Devin found a group of AI programmers who would be interested
- Sending large amounts of money to China…it's a risk!

00:26:30 At 3am, a text message: "I think I've done it."
- Devin immediately started testing it himself
- "It seemed to work."

00:27:17 Installing De-Feedback in churches

Sponsors
00:30:57 SPONSOR: Claude.ai – Ready to tackle bigger problems? Sign up for Claude today, which includes access to Claude Cowork, too, when you visit Claude.ai/giggab
00:32:43 SPONSOR: Squarespace. Check out https://www.squarespace.com/GIGGAB to save 10% off your first purchase of a website or domain using code GIGGAB.

00:34:20 What is an impulse response?
- Impulse response: an audio picture of how the room sounds
- Popping balloons in a room/environment and recording the sound is a common approach for creating impulse responses

00:38:33 De-Feedback is an on-the-fly IR generator
- …and an analyzer that's trained on the human voice, room reverb, background noise…and feedback

00:41:55 Finding the right programmers was the key
- …in addition to actually having the idea and the bullheaded persistence to make it happen.

00:44:46 Mind-melding was necessary
- And LLMs helped with translation!

00:48:39 Using AI to make it possible to collaborate with other humans

00:50:03 Using an LLM to rate the work of your contractors and employees

00:51:54 How do we get De-Feedback into the hands of working musicians?
- US$499 for the De-Feedback plugin, as a VST3 or AU plugin
- A higher-end Windows laptop can likely run it on its own
- Apple's Core Audio tech makes it difficult, but they're working on it
- De-Feedback also sells a perfectly-tuned headless computer to do this
- Alpha Labs tried tons of interfaces and found that the Focusrite Scarlett keeps glitches out of the mix
- Waves SuperRack LiveBox

01:01:37 Where do we expand?
- Allen & Heath mixers? Midas/Behringer mixers?
- Paul Falcone, mixing Mariah Carey, wanted to use it!
- Robert Scovill talking Rock Hall on Gig Gab

01:05:18 Homebrew AI!
- Training on EVERY room he could find
- "Can you let me into your empty church at 3am?" – to record IRs to then train the data set for De-Feedback

01:07:25 Creating your own AI model

01:08:13 What does the future look like?
- Acquisition? Demands for security? – Planning for it all

01:09:26 You can get this and bring it with you to gigs where someone else is doing sound
- De-Feedback Option 1
- Allen & Heath Qu-5's Feedback Eliminator
- De-Feedback gets at least 6 dB more gain before feedback

01:17:46 Gig Gab 524 Outro
- Follow Devin Sheets and Alpha Labs: Facebook and Instagram; YouTube for Alpha Labs
- Contact Gig Gab! @GigGabPodcast on Instagram; feedback@giggabpodcast.com
- Sign Up for the Gig Gab Mailing List

The post De-Feedback Plugin for Working Musicians: More Gain, Less Feedback – Gig Gab 524 with Devin Sheets appeared first on Gig Gab.
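The impulse-response idea discussed in the episode (an IR as an "audio picture" of a room) can be illustrated in a few lines: applying a measured IR to a dry signal is just convolution. This is a minimal sketch with a synthetic "balloon-pop" IR (a decaying noise burst), not anything from De-Feedback itself; only NumPy is assumed.

```python
import numpy as np

sr = 8000                                   # sample rate (Hz)
t = np.arange(sr) / sr

dry = np.sin(2 * np.pi * 440 * t)           # one second of a dry 440 Hz tone

# Synthetic stand-in for a recorded balloon pop: noise shaped by an
# exponential decay, like a short burst of room reverb.
rng = np.random.default_rng(0)
decay = np.exp(-8 * np.arange(sr // 4) / sr)
ir = rng.standard_normal(sr // 4) * decay
ir /= np.abs(ir).sum()                      # normalize so the output stays bounded

# "Placing" the dry signal in the room is a single convolution.
wet = np.convolve(dry, ir)

print(len(wet) == len(dry) + len(ir) - 1)   # full convolution length
```

A real IR would be recorded in the actual room (the balloon-pop approach from the episode); an inverse IR, as used by De-Feedback, aims to undo this transformation rather than apply it.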
Eoin Clancy (VP of Growth at AirOps), Connor Beaulieu (Senior SEO Manager at LegalZoom), and Adina Timar (Head of AEO at Weflow) join this live session to talk about how to create high-quality content with AI. Connor walks through a workflow his team built at LegalZoom to automatically source expert quotes. Adina shows how she rebuilds competitor pages from scratch using sales calls, LLM data, and live competitor analysis. And Eoin shares the research behind why content quality is now the single biggest lever in AI search. If you want to see what content engineering actually looks like in practice, this one is for you.

Join 50,000 people who get Dave's Newsletter here: https://www.exitfive.com/newsletter
Learn more about Exit Five's private marketing community: https://www.exitfive.com/

***
Brought to you by:
- AirOps - The content engineering platform that helps marketers create and maintain high-quality, on-brand content that wins AI search. Go to airops.com/exitfive to start creating content that reflects your expertise, stays true to your brand, and is engineered for performance across human and AI discovery.
- Customer.io - An AI-powered customer engagement platform that helps marketers turn first-party data into engaging customer experiences across email, SMS, and push. Learn more at customer.io/exitfive.
- Convertr - The enterprise lead data management platform that sits between your lead sources and your CRM, automatically validating, enriching, and standardizing every lead before it touches your systems. Check them out at convertr.io/exitfive.
- Compound Growth Marketing - A full-funnel demand generation agency that helps high-growth cybersecurity, DevOps, and enterprise software companies drive more pipeline through AI SEO, paid media, and go-to-market engineering. Visit compoundgrowthmarketing.com and tell them Dave sent you.

***
Thanks to my friends at hatch.fm for producing this episode and handling all of the Exit Five podcast production. They give you unlimited podcast editing and strategy for your B2B podcast. Get unlimited podcast editing and on-demand strategy for one low monthly cost. Just upload your episode, and they take care of the rest. Visit hatch.fm to learn more.
256 | Andre Alpar is the German SEO legend and Rocket Internet veteran.

Partners of this episode:
HOLVI - Finances for small businesses: from chaos to clarity with HOLVI. The free Holvi Flex account is perfect for solopreneurs, freelancers, and companies that want to grow. www.holvi.com
Clockodo - The time-tracking tool of our choice. https://www.clockodo.com/optimisten Coupon code: optimisten25 for 25% off.

Take the 1-minute quiz and find a business idea that fits you: digitaleoptimisten.de/quiz.

Learnings:

**Distinguishing generative engines**
The conversation distinguishes between the bare LLM, a chatbot, and AI Overviews; these three types work differently and draw on different information sources. End users therefore have different experiences when searching and getting answers. Anyone planning strategies for AI-driven visibility should know these differences in order to choose the right tools and approach.

**The GEO playbook**
The conversation presents a framework of strategy, technology, content, and offpage for becoming visible in ChatGPT and company. First clarify strategy, then get the technical side right, prepare content, and build offpage signals.

**Offpage is the decisive lever**
Offpage mentions in reputable publications remain relevant. The matcha tea experiment demonstrates that offsite actions can influence AI visibility. For brands this means: keep strategically cultivating your reputation and mentions beyond your own site.

**AI traffic converts better**
AI traffic converts 4x better than organic Google traffic. At the same time, AI systems work the funnel differently (closer to purchase decisions). Companies should therefore make targeted use of AI traffic while keeping long-term effects and dependencies in mind; the quality of the interaction matters more than raw traffic.

Keywords:
Generative Engine Optimization, AI-driven search, LLM-based search, offpage optimization, AI content strategy, how Generative Engine Optimization works, differences between LLM-based and conventional search, impact of AI Overviews on CTR, Jevons paradox explained, matcha tea AI-SEO experiment, Google's Knowledge Graph, brand mentions as an offsite signal in AI systems, trust in AI answers, offpage signals in AI environments
Is "a sense of taste" the new most important technical skill? In today's episode, Tomek, Wojtek, and Sebastian analyze breakthrough research papers, controversial claims from industry leaders, and data from Spotify suggesting that software engineering has just passed the point of no return.

In this episode:
The Cloud-Native era has had its day; make way for the AI-Native era. According to a study by the consulting firm Deloitte, the software market, worth $4 trillion today, is about to experience a shockwave comparable to the arrival of SaaS ten years ago. For CIOs and decision-makers, this is not just a question of new tools. It is a deep rethinking of business and operating models.

A new revenue model
First, the quake concerns the revenue model. Traditional software giants are under immense pressure to abandon the classic per-license payment in favor of outcome-based pricing. The new entrants, companies born with AI in their DNA, are arriving on the market with ultra-lean cost structures. They are not trying to sell subscriptions at volume, but to solve ultra-specific business problems, often in niches neglected by the big vendors. And this competition will mechanically hand power back to buyers, notably SMBs, who will be able to access "large enterprise"-grade capabilities for a fraction of the usual price.

The user interface is disappearing
Next, the user interface as we know it is disappearing. Deloitte predicts that AI will become the primary interface layer on top of all your applications. We will no longer navigate between ten different programs. We will interact with an orchestrator capable of driving autonomous agents. The battle is therefore no longer over who owns the best spreadsheet or the best CRM, but over who will control that control layer. For companies, this implies a technological shift toward orchestration platforms capable of monitoring and managing these fleets of AI agents so they don't work in silos.

Managing margins and skills
Finally, beware of the flip side. And that flip side is the management of margins and skills. While AI-Native promises agility, it is expensive in infrastructure. In 2026, the explosion of LLM-related compute costs will therefore weigh heavily on budgets. At the same time, the success of this transition will not be technological but human. Roles will have to be redefined, from engineer to product manager, with an emphasis on data management and the evaluation of new vendors. So the productivity gain is real, but Deloitte warns: it will come from disciplined, measurable deployment.

Le ZD Tech is on every podcast platform. Subscribe!
Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
Smart Agency Masterclass with Jason Swenk: Podcast for Digital Marketing Agencies
Would you like access to our advanced agency training for FREE? https://www.agencymastery360.com/training AI is either the end of agencies… or the biggest opportunity we've had since the internet. Most agree it's the second one. Agencies that are winning right now are combining SEO, GEO, AEO, and LLM optimization so they show up everywhere decisions are being made. They're using AI to increase leverage, not replace thinking. And they're restructuring their teams around strategy, insight, and proprietary data instead of repetitive task work. Today's featured guest will discuss why SEO isn't dead (it just grew up), the biggest mistake agencies are making with AI, how to 10x output without adding headcount, and why your unique data is the unfair advantage that separates you from every other agency prompting ChatGPT and hoping for magic. Terry Zelen is the founder of Zelen Communications, a 35-year-old agency that pivoted aggressively into AI over the last three years. He's helping clients win visibility across both search engines and large language models (LLMs) and even building AI tools internally to reduce hallucinations and improve accuracy. Terry has a degree in marine biology, so marketing wasn't the master plan. After college, he tried breaking into the creative world with zero portfolio and got laughed out of the room; until one person gave him a shot. He worked for free, proved himself, connected with a freelance rep, and slowly worked his way up through the agency ranks. He eventually transitioned from freelancer to agency owner by acquiring his own accounts and building relationships locally in Tampa. Fast forward three decades and now he's helping clients navigate AI, LLM visibility, and what modern SEO really looks like. 
In this episode, we'll discuss:
- Why SEO is more complicated now, but agencies willing to adapt can still win
- How LLM visibility will win you business
- AI: The greatest leverage small businesses have ever had
- Building an AI consensus engine

Subscribe: Apple | Spotify | iHeart Radio

Sponsors and Resources
This episode is brought to you by Wix Studio: If you're leveling up your team and your client experience, your site builder should keep up too. That's why successful agencies use Wix Studio — built to adapt the way your agency does: AI-powered site mapping, responsive design, flexible workflows, and scalable CMS tools so you spend less on plugins and more on growth. Ready to design faster and smarter? Go to wix.com/studio to get started.

SEO Is Not Dead. It's Just Way More Complicated
There's a lot of noise right now around "SEO is dead" or "zero-click internet." But that's an oversimplification. SEO isn't going away. It's evolving. Today, it's not just SEO. It's:
- GEO (Generative Engine Optimization)
- AEO (Answer Engine Optimization)
- Local SEO
- E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness)
- Search intent

In other words, visibility is the game. Not just ranking in Google, but showing up in LLMs like ChatGPT, Gemini, and Perplexity. Terry points out that while snippets and AI-generated summaries are increasing, people still want to verify sources. They're not buying a couch because an LLM told them it's the best. They'll still visit sites, compare options, and validate credibility. Backlinks, structured content, schema, quality. It all still matters. What's different is that now you're playing the game with Google and the LLMs.

How LLM Visibility Actually Wins Business
This isn't theoretical. Terry shared a story of a client who builds modular classroom buildings. A school district searched for "best mobile building producer in Florida" and the client showed up in a snippet. That visibility led directly to a new contract. So you're no longer optimizing just for rankings.
You're optimizing to be the referenced authority when AI generates an answer. That means you better have structured content, clear positioning, backlinks, authority signals, and presence on surfaces LLMs scrape (including platforms like Reddit, though that's evolving). The agencies that understand this shift can bolt on new services like AI SEO or GEO and, in some cases, significantly increase revenue. But there's a catch. This space is evolving fast. What works today might not work next quarter. That's why Terry avoids gray-hat tactics and focuses on fundamentals.

AI Is the Greatest Leverage Small Agencies Have Ever Had
Terry believes this might be the most exciting time ever for small agencies because AI has eliminated barriers that used to require massive budgets. When a small restaurant client wanted a red snapper on a black background for their website, stock photography didn't cut it, and a real shoot would've required a diver, a photographer, a cooperative fish, and a significant budget. Instead, they used Midjourney to create the image. Then they animated it so the fins and gills subtly moved. The client was blown away. For a small restaurant, this level of visual production used to be impossible. Now it's affordable and scalable. That's the opportunity. Agencies can deliver higher-quality creative, faster, and at lower cost if they know how to use the tools.

A Very Real Fear for Future Marketers
Terry regularly speaks to marketing students who are worried AI will take their jobs. What he tells them is that AI won't take your job, but someone who knows how to use AI will. The key is not blind reliance. It's intelligent leverage. AI is excellent at:
- Research
- Proposal drafting
- Competitive analysis
- First drafts of content
- Summarizing data

What used to take weeks can now take hours. That frees your team from repetitive, dreaded tasks and allows them to focus on strategy, creativity, and client impact. But there's a danger in over-reliance.
Too many agencies are slapping "AI" on everything without adding original thinking or proprietary data. Your edge isn't that you use AI. Your edge is your data. Every agency has unique client data, performance metrics, positioning, and experience. When you combine that with AI, that's where real leverage happens.

Building a Consensus Engine to Reduce AI Hallucinations
One of the more advanced things Terry is experimenting with is what he calls a "consensus engine." The problem with LLMs is that they're probabilistic, not deterministic. Ask the same question twice and you'll get two slightly different answers. They also hallucinate. To combat this, Terry built a workflow using n8n (a Zapier-like automation tool) that runs content through multiple LLMs. One writes it. Another critiques it. The final output must pass both systems before it's considered valid. If they disagree, it's sent back through with adjusted parameters. He's also exploring how different LLMs perform best in different roles:
- Perplexity for real-time research
- ChatGPT for writing
- Claude for programming

Instead of treating AI as one tool, he's assembling a stack of specialized tools. That mindset shift, thinking like a systems architect instead of a prompt typist, is what separates surface-level AI use from strategic advantage.

Do You Want to Transform Your Agency from a Liability to an Asset?
Looking to dig deeper into your agency's potential? Check out our Agency Blueprint. Designed for agency owners like you, our Agency Blueprint helps you uncover growth opportunities, tackle obstacles, and craft a customized blueprint for your agency's success.
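Terry's consensus-engine workflow (one LLM writes, another critiques, and disagreement sends the draft back through with adjusted parameters) can be sketched roughly like this. The real version runs in n8n against hosted LLM APIs; the function names and the acceptance rule below are invented stand-ins, purely to show the loop's shape.

```python
from typing import Optional

# Hypothetical stand-ins for real LLM API calls. The episode names
# Perplexity (research), ChatGPT (writing), and Claude (programming);
# any client with a text-in/text-out interface fits this shape.
def writer_model(prompt: str, temperature: float) -> str:
    return f"Draft for: {prompt} (t={temperature:.1f})"

def critic_model(draft: str) -> bool:
    # A real critic LLM would return a structured review; this toy
    # version just checks the draft echoes the writer's format.
    return "Draft for:" in draft

def consensus_engine(prompt: str, max_rounds: int = 3) -> Optional[str]:
    """One model writes, another critiques; on disagreement the draft
    is regenerated with adjusted parameters (here: temperature)."""
    temperature = 0.7
    for _ in range(max_rounds):
        draft = writer_model(prompt, temperature)
        if critic_model(draft):
            return draft  # both systems agree: output is considered valid
        temperature = max(0.1, temperature - 0.2)  # adjust and retry
    return None  # no consensus within the round budget

result = consensus_engine("Explain GEO in two sentences")
print(result is not None)
```

The design point is the gate, not the models: nothing ships unless an independent second system signs off, which is what knocks down (though never eliminates) hallucinations.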
AI-augmented finance: fantasy or real competitive advantage? With Anne-Claire Chanvin.
How to integrate AI into a finance department: field lessons from a management controller who did it at a unicorn.

95% of CFOs were still in the individual-experimentation phase with AI in 2025. That's the figure that came out of a room of 200 finance directors at Financium in November 2025. Not a consulting-firm study, not a projection. Straight from the field.

Anne-Claire Chanvin came up through PwC and L'Oréal, then took charge of integrating AI into the finance department of Choco, a German unicorn. Today she trains finance teams on AI and supports finance departments of all sizes through FinOp360, from part-time CFO work to large accounts like Kiabi, Decathlon, and Engie. A concrete result at Choco: a full weekly workday on invoicing reduced to one hour, a credit-note rate cut from 8% to under 1%, and a team dynamic built around lunch-and-learns that ended up sustaining themselves.

In this episode, we talk frankly about AI adoption in finance in 2026. Why finance teams show 7% daily adoption versus 60% for marketing. Why data confidentiality is more of an excuse than a real obstacle, given that your data is already on Azure or Google. And why using an LLM like Copilot, Gemini, or ChatGPT as a search engine is a guaranteed way to be disappointed and give up.

Anne-Claire details four concrete application areas for CFOs, management controllers, and business partners: communication and writing, AI assistants for internal policies, automation via macros and Power Query, and financial analysis via Copilot connected to SharePoint. Actionable quick wins for day-to-day AI training in finance, not theory.

The second part takes a different angle. Anne-Claire looks back on her first entrepreneurial project, a clothing-rental start-up launched in March 2020 and stopped two years later by Covid. What she takes from it: stop waiting to be perfect, accept test-and-learn, and understand that entrepreneurial experience has value later, even when the venture doesn't work out. A message that resonates particularly with finance professionals used to securing everything before moving forward.

My name is Jonathan Plateau. I've worked at EY, Valeo, and Safran, and I try to spark conversations and reflections about our finance professions. My mission: to offer you an educational, entertaining, and sometimes surprising experience. This podcast is made for finance directors (CFOs) and management controllers, whether junior or experienced, who want to benefit from peer exchanges to enrich their day-to-day finance practice and grow toward the business-partner role. Join our passionate community exploring every facet of management control and business partnering. Don't forget: finance is also a question of mindset! Feel free to share your questions about our discussions or about the podcast. You can contact me directly on LinkedIn.
https://www.linkedin.com/in/jonathan-plateau-1980b610/

You'll enjoy this show if you also like: Coonter (Les Geeks des chiffres) • CFO Radio • Une Cession Presque Parfaite • Voie des comptables • Parlons Cash • Le nerf de la guerre • Feedback by la fée • Radio KPMG

Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.
Live from Morgan Stanley's TMT conference, our panel breaks down where AI is already delivering real returns—and where rapid advances are raising new risks.

Read more insights from Morgan Stanley.

----- Transcript -----

Michelle Weaver: Welcome to Thoughts on the Market. I'm Michelle Weaver, U.S. Thematic and Equity Strategist here at Morgan Stanley. Today we've got a special episode on AI adoption. And this is the first in a two-part conversation live from our Technology, Media and Telecom conference. It's Thursday, March 5th at 11am in San Francisco. We're really excited to be here with all of you taping live. And we've got on stage with me Stephen Byrd, our Global Head of Thematic and Sustainability Research; Josh Baer, Software Analyst; and Lindsay Tyler, TMT Credit Research Analyst.

So, Stephen, I want to start with you, pretty broad, pretty high level. We recently published our fifth AI Mapping Survey that identifies how different companies are exposed to the broad AI theme. Can you just share with us some insights from that piece and how stocks are performing with this AI exposure?

Stephen Byrd: Yeah, it's interesting. I mean, we've been doing this survey now, thanks to you, Michelle, and your excellent work, for quite a while. And every six months it is pretty telling to see the progression. I would say a few things that got my attention from our most recent mapping was the number of companies that are quantifying the adoption benefits continues to go up quite a bit. And to me that feels like that's going to be table stakes very soon, as in every industry you see two or three companies that are really laying out quite specifically what they expect to be able to do with AI and lay out the math. I think that really is going to pull all the other companies to follow suit. So, we're seeing that in a big way. We do see adopters with real tangible benefits performing well.
But a new thing that we're seeing now, of course, in the market is concern that in some cases adoption can lead to dramatic deflation, disruption, et cetera. That's coming up as well. So, we're seeing greater concerns around disruption as well.But broadly, I'd say a proliferation of adoption, that that universe of companies continues to grow, increases in quantification of the benefits. So, that is good. What's really surprised me, though, is how quickly the narrative among investors has moved from those benefits, which we've talked about, to flipping that toggle all the way negative, which I know some of our analysts have to deal with every day. The mapping work suggests significant benefits. But the market is fast-forwarding to very powerful AI that is very disruptive and deflationary. And that's been a surprise to me.Michelle Weaver: Mm-hmm. Josh, I want to bring software into this. Your team has been arguing that AI is actually good for software, and that you really need that application layer to then enable other companies to adopt AI. Can you tell us a little bit about how much GenAI could add to the broader enterprise software market? And how are you thinking about monetization these days?Josh Baer: Of course. I think the best starting place is a reminder that AI is software, and so we see software as a TAM expander. And in many ways, even though this is extremely exciting innovation, it's following past innovation trends, where first you see value and market cap accrue to semiconductors, and then hardware and devices, and then eventually software and services. And we do think that that absolutely will occur, just given [$]3 trillion in infrastructure investment into data centers and GPUs.There's got to be an application layer that brings all of these productivity and efficiency gains to enterprises and advanced capabilities to consumers as well. And so we see AI more as an evolution for software than a revolution. 
An evolution of capabilities and expansion of capabilities. LLMs and diffusion engines absolutely unlocked all of these new features of what software can do. But incumbents will play a key role in this unlock.And our CIO surveys really support that. Quarterly we ask chief information officers about their spending intentions, and the application vendors we cover in the public markets are increasingly selected as the vendors that companies will go to, to help deploy and apply AI and LLM technologies.So, to answer your question, we estimate GenAI could unlock [$]400 billion in incremental TAM for enterprise software by 2028. And this is based on looking at the type of work able to be automated, the labor costs associated with that work, the scope of automation, and then thinking about how much of that value is typically captured by software vendors.Michelle Weaver: And you have a bit of a different lens on AI adoption. So, what are some of the ways you're hearing software customers use these AI tools, and did anything interesting pop up at the conference?Josh Baer: To echo what Stephen laid out, I mean, all of our software companies are using AI internally, both to drive efficiencies, but also to move faster. So, thinking about product innovation, you know, the incumbents are able to use all of the same coding tools and, you know, …Michelle Weaver: Mm-hmm.Josh Baer: … products geared to developers to move faster and more efficiently on R&D. So, they're doing more. From a sales and marketing perspective, a G&A perspective, every area of OpEx, our software companies are in a great position to deploy AI tools internally.I think more important[ly], speaking to this TAM and expanded opportunity, our companies have SKUs that they're monetizing. It might be a separate suite that incorporates advanced AI functionality. 
It might be a standalone offering, or it might be embedded into the core platform, because the essence of software is AI, and it's, you know, leading to better retention rates and acceleration from here.Michelle Weaver: Mm-hmm. And Stephen, going back to you on the state of play for AI, we had the AI labs here and we heard a lot about the developments and what's to come. So, what's your view on the trajectory for LLM advancements, and what are some of the key signposts or catalysts you're watching here?Stephen Byrd: Yeah, this is for me maybe the most important takeaway of the conference – this continued non-linear improvement of LLMs, which we've been writing about for quite some time. And just to give you an example, we think many of the labs have achieved a step change up in terms of the compute that they have, in some cases 10x the amount of compute to train their LLMs. And [if] the scaling laws hold – and we see every sign that they will – a 10x increase in compute used to train the models results in about a doubling of the model capabilities.Now just let that sink in for a moment. Let's just think about that. What a doubling from here in a relatively short period of time means is difficult to predict. It's obviously very significant, and I think several of the LLM execs at our event sounded to me extremely bullish on what that will be. A lot of that I think will be evident in greater agentic capabilities.But also, I'd say greater creativity. About three weeks ago, three of the best physics minds in the world worked with an LLM to achieve a true breakthrough in physics – solving a problem that had never been solved before. A couple of days ago, a math team did the same thing. And so, what we're seeing is sort of these breakthrough capabilities in creativity. This morning, I thought Sam spoke to, you know, the incredible increases in what these models can do – which also brings risk. 
You know, I think it was interesting he spoke to, you know, the risk of misalignment, the risk of what these models are doing.But for me, that's the single biggest thing that I'm thinking about, and that's going to be evident in the next several months.Michelle Weaver: Mm-hmm.Stephen Byrd: So, you know, on the positive side, it leads to greater benefits from AI adoption. And to Josh's point that, you know – more and more the economy can be addressed by AI, I do get concerned about the risk that that kind of step change will create greater concerns about disruption and deflation.That causes me to think a lot about that dynamic. Interestingly, we think the Chinese labs will not be able to keep pace just for one reason, which is compute. We think the Chinese labs have everything else they need. They have the talent, the infrastructure. They certainly have the energy, power. But they don't have the chips.If what we laid out with the American models turns out to be true, I could see a chain reaction where the Chinese government pushes the Trump administration for full transfer of the best technology to China. And China could use their rare earth trade position to ensure that. So, that's sort of the chain reaction I've been thinking about.Michelle Weaver: Mm-hmm. So, let's think about then bottlenecks in the U.S. Power is still one of the main bottlenecks. We had several of the solutions providers here at the conference. So, what are you thinking in terms of the size of the power bottleneck in the U.S. and how are we going to fix that?Stephen Byrd: Yeah, absolutely. I am bullish on the companies that can de-bottleneck power, not just in the U.S., a few other places. Let's go through the math in terms of the problem we face and then the solution.So, we have this very cool – it is cool if you're a nerd – power model that starts in the chip level up, from our semiconductor teams. And from that, we build a global power demand model for data centers. 
We then apply that to the U.S. Through 2028 we need about 74 gigawatts of data centers, both AI and non-AI, to be built in the United States. I don't think we'll be able to achieve that, for lots of reasons. But starting from that 74, we have sort of 10 gigs that have been recently built or are under construction. We have 15 gigs of incremental grid access, but after those two, we have to go to unconventional solutions, meaning typically off-grid solutions – over 40 gigawatts of unconventional solutions.So that will be repurposing Bitcoin sites, which could be sort of 10 to 15 gigawatts. That'll be big. Renewable energy and fuel cells will be part of the solution. Gas turbines will be a big part of the solution. Co-locating at a few nuclear plants – I'm less bullish than I used to be on that. But when we net all that out, we think the U.S. is likely to be 10 to 20 percent short of the data center capacity that will need to be in place.It's not just a power grid access issue, though; that's a big one. Labor is now showing up as a huge issue. Many of the companies I speak to trying to develop data centers struggle with availability of labor, electricians being one very tangible example. In the U.S. we need hundreds of thousands of additional electricians.So, for any of your children, like mine, thinking about careers, you know, you'd be surprised [at] the amount of money that people are making in the infrastructure business. It does feel like it's a labor shift that's going to have to happen, but it's going to take years. So, in that context, we had a number of the Bitcoin companies at our event here. And the economics of turning a Bitcoin site into hosting a data center are extremely attractive. I mean, extremely attractive.To give you a sense of that: before this opportunity presented itself to these Bitcoin players, those stocks tended to trade at an enterprise value per watt of about $1 to $2 a watt. 
Then we started to see these deals in which the Bitcoin players build a data center and lease them to hyperscalers. Those deals – depends a lot on the deal but – have created between $10 and $18 a watt of value. Let me repeat that. 10 to 18 – relative to where these stocks were at 1 to 2.Now many of these stocks have rerated, but not all of them. And there's still quite a bit of upside. And what we've noticed is the economics that the hyperscalers are paying are trending up and up and up. Because of this power shortage that we're dealing with. So, a lot of exciting opportunities are still in the power space.Michelle Weaver: Great. Well, I think that's a good place to wrap this first part of our conversation around AI adoption and the state of play. We'll be back again tomorrow with Part Two, looking at financing and risks.To our panelists, thank you for talking with me. And to our audience, thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
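The two headline numbers in the conversation above can be sanity-checked with quick arithmetic. A minimal sketch: only the "10x compute gives roughly 2x capability" rule of thumb and the 74/10/15 gigawatt figures come from the panel; everything else below is derived from them.

```python
import math

# Scaling rule of thumb from the discussion: 10x training compute ~= 2x
# capability. That implies capability ~ compute**alpha with 10**alpha == 2,
# so alpha = log10(2) ~= 0.30.
alpha = math.log10(2)

def capability_multiple(compute_multiple: float) -> float:
    """Capability gain implied by a given compute gain under that rule."""
    return compute_multiple ** alpha

# U.S. data-center power math from the discussion: ~74 GW needed through
# 2028, minus ~10 GW recently built or under construction and ~15 GW of
# incremental grid access, leaves the "over 40 gigawatts" of unconventional
# (largely off-grid) supply cited.
needed_gw = 74
built_gw = 10
grid_access_gw = 15
unconventional_gw = needed_gw - built_gw - grid_access_gw  # 49 GW
```

Under this rule a 10x compute step implies about a 2x capability step (and 100x implies about 4x), and the unconventional-supply residual works out to 49 GW, consistent with the "over 40 gigawatts" quoted.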
This is a recap of the top 10 posts on Hacker News on March 05, 2026. This podcast was generated by wondercraft.ai (00:30): Wikipedia was in read-only mode following mass admin account compromiseOriginal post: https://news.ycombinator.com/item?id=47263323&utm_source=wondercraft_ai(01:58): Google Workspace CLIOriginal post: https://news.ycombinator.com/item?id=47255881&utm_source=wondercraft_ai(03:26): Judge orders government to begin refunding more than $130B in tariffsOriginal post: https://news.ycombinator.com/item?id=47261688&utm_source=wondercraft_ai(04:54): GPT-5.4Original post: https://news.ycombinator.com/item?id=47265045&utm_source=wondercraft_ai(06:23): The L in "LLM" Stands for LyingOriginal post: https://news.ycombinator.com/item?id=47257394&utm_source=wondercraft_ai(07:51): No right to relicense this projectOriginal post: https://news.ycombinator.com/item?id=47259177&utm_source=wondercraft_ai(09:19): Pentagon formally labels Anthropic supply-chain riskOriginal post: https://news.ycombinator.com/item?id=47266084&utm_source=wondercraft_ai(10:48): Good software knows when to stopOriginal post: https://news.ycombinator.com/item?id=47261561&utm_source=wondercraft_ai(12:16): Relicensing with AI-Assisted RewriteOriginal post: https://news.ycombinator.com/item?id=47257803&utm_source=wondercraft_ai(13:44): A GitHub Issue Title Compromised 4k Developer MachinesOriginal post: https://news.ycombinator.com/item?id=47263595&utm_source=wondercraft_aiThis is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Enterprise LLMs: RAG vs Fine‑Tuning, IDP & Governance In this episode of the Mostly Unstructured podcast, Ed and Clay discuss whether it's better to train a domain‑specific LLM or leverage foundational models like ChatGPT, Gemini and Claude. They explain the trade‑offs between fine‑tuning and retrieval‑augmented generation (RAG), and why Intelligent Document Processing (IDP) is vital for turning unstructured data into usable context. In this discussion, we cover: Why training your own LLM is risky and often unnecessary compared to adopting and building from a foundational model. How retrieval‑augmented generation (RAG) delivers more accurate results than simple fine‑tuning. The importance of Intelligent Document Processing (IDP) for ingesting unstructured data and building domain context. Real‑world lessons on AI governance, including the Air Canada bereavement‑policy chatbot case. Managing bias, hallucinations and toxicity in enterprise models. Measuring your return on AI investment. For those thrown by the excessive acronyms, let's define:LLM = Large Language ModelRAG = Retrieval‑Augmented GenerationIDP = Intelligent Document Processing. For more insights on enterprise AI for data intelligence, visit our website and read our blog on training an LLM referenced in the episode.Website: https://www.keymarkinc.com/Blog: https://www.keymarkinc.com/how-to-tra...
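The RAG pattern discussed in the episode can be sketched in a few lines: instead of baking domain knowledge into model weights via fine-tuning, you retrieve relevant passages at query time and ground the prompt in them. A minimal illustration, with a toy bag-of-words retriever and hypothetical policy snippets standing in for a real vector store and document corpus:

```python
import math
import re
from collections import Counter

# Toy document store; hypothetical policy snippets, illustrative only.
DOCS = [
    "Bereavement fares must be requested before travel; refunds are not retroactive.",
    "Checked baggage allowance is two bags up to 23 kg each on international routes.",
    "Loyalty points expire after 18 months of account inactivity.",
]

def _vec(text: str) -> Counter:
    # Bag-of-words term counts; a stand-in for real embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k stored passages most similar to the query."""
    qv = _vec(query)
    return sorted(DOCS, key=lambda d: _cosine(qv, _vec(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved passages, not its weights."""
    context = "\n".join(retrieve(query, k=1))
    return (
        "Answer ONLY from the context below. If the context does not cover "
        "the question, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

prompt = build_prompt("bereavement refund request")
```

In production the Counter-based retriever would be replaced by an embedding model and a vector index, and the assembled prompt would be sent to a foundation model; the grounding pattern, retrieve first and then constrain the answer to the retrieved context, is the part that carries over, and it is also what makes governance cases like the bereavement-policy chatbot easier to audit.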
How can AI copywriting help Martin Booth? He has worked as a journalist and copywriter, but his traditional clients are turning to large language models to draft their copy nowadays. An answer emerges during the conversation.Summary of PodcastThe rise of AI copywritingMartin shares his perspective on the growing impact of AI tools like ChatGPT on the copywriting industry. He explains how clients are increasingly turning to AI-generated content, which has reduced the demand for his services. The group discusses strategies for copywriters to adapt and remain relevant in this changing landscape.Opportunity in Answer Engine Optimisation (AEO)Graham introduces the concept of Answer Engine Optimisation (AEO) as a potential solution for copywriters to differentiate themselves. He explains how AEO focuses on crafting content that is optimised for how AI language models like ChatGPT search and surface information, rather than traditional search engine optimisation (SEO) tactics. The group explores the technical and creative aspects of AEO, and the potential for Martin to position himself as an expert in this emerging field.Recap and next stepsKevin and Graham summarise the key insights from the discussion, emphasising the need for copywriters to adapt to the changing landscape and leverage AI tools in strategic ways. They express optimism about Martin's ability to capitalise on the AEO opportunity and become a thought leader in this space.The Next 100 Days Podcast Co-HostsGraham ArrowsmithGraham founded Finely Fettled eleven years ago to help businesses market to affluent and high-net-worth customers. He's the founder of MicroYES, a Partner of MeclabsAI, providing AI Agents, Workflows and Phone to Agent delivery systems. Now, Graham offers Answer Engine Optimisation so you get found by LLM search and Enterprise-level AI Solutions.Kevin ApplebyKevin specialises in finance transformation and implementing business change. 
He's the COO of GrowCFO, which provides both community and CPD-accredited training designed to grow the next generation of finance leaders. You can find Kevin on LinkedIn and at kevinappleby.com
NOTE: Here's What Claude Thinks
Why copywriters are well-positioned:
- LLMs don't just scrape keywords; they evaluate coherence, clarity, and authoritative structure
- Answer engines prioritise content that demonstrates expertise through well-reasoned arguments, not just SEO tricks
- Copywriters understand persuasion architecture: how to build credible cases that convince readers (and now, AI systems evaluating on behalf of readers)
- Direct response copywriters especially understand question-answer flow and anticipating objections: precisely what LLMs look for when synthesising answers
The critical shift required:
- Traditional copywriting optimises for human conversion
- AEO copywriting must optimise for AI comprehension, then human conversion
- This means explicit structure: clear topic sentences, logical progression, supporting evidence, contextual markers
- Less "clever" wordplay, more semantic clarity and entity relationships
Where copywriters have genuine advantage over AI-generated content:
- Domain expertise translation: taking complex client knowledge and structuring it authoritatively
- Evidence marshaling: knowing which proofs, testimonials, and data points establish credibility
- Question anticipation: your direct response background means you already think in terms of buyer journey questions
- Understanding what makes content citable and quotable by LLMs
The reality check: Those copywriters who were primarily executing formulaic landing pages or generic blog content were always vulnerable to AI displacement. 
But copywriters who can architect information authority - structuring expertise so AI engines recognise and cite it - that's a defensible, valuable skill.You're essentially asking: "Can craftspeople who've been building for human readers pivot to building for AI intermediaries who serve human readers?" The answer is yes, because the fundamental skill - persuasive information architecture - transfers directly.
AI Reality Check: Did the LLM Job Apocalypse Begin Last Week? Cal Newport takes a closer look at recent AI news. Below are the topics covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: https://bit.ly/3U3sTvo Video from today's episode:youtube.com/calnewportmedia STORY #1: Jack Dorsey announces layoffs at Block [1:28] STORY #2: The education level of LLM-based tools [11:45] STORY #3: What's happening in the world of computer programming? [19:24] Links: Buy Cal's latest book, “Slow Productivity” at www.calnewport.com/slow Get a signed copy of Cal's “Slow Productivity” at https://peoplesbooktakoma.com/event/cal-newport/ https://x.com/jack/status/2027129697092731343 https://www.nytimes.com/2026/02/26/technology/block-square-job-cuts-ai.html https://x.com/emollick/status/2027153371241607420 https://www.forbes.com/sites/ronshevlin/2026/02/27/block-lays-off-40-of-staff-and-blames-it-on-ai-dont-buy-the-excuse/ https://www.youtube.com/watch?v=56HJQm5nb0U http://calnewport.com Thanks to Jesse Miller for production and mastering. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In this episode, Emma and Patrick break down three pressure points every brand should be watching. First, they tackle the performance vs. transparency tradeoff in media buying. What's the balance between log‑level data and control, chasing opaque performance, and using transparency to actually change how and where you buy?Then they unpack The Trade Desk's latest earnings and what its slowdown (and supply‑side growth elsewhere) signal about where spend is shifting across DSPs, CTV, and the open internet.Finally, they dig into the rush of vendors promising “LLM performance” measurement, covering how pricing is already collapsing, how little real differentiation there is, and how brands should approach contracts, trust, and experimentation in this new category.
Today, we're talking to John Greely from Navu, where the team is working on integrating LLMs into business websites. What we want to find out is what the advances in LLM technology offer your website work, and how the websites of 2026 might look different from those of years past. Connect with John on LI: https://www.linkedin.com/in/greely/
Why Authority Now Matters More Than Visibility in B2B Content With AI making it easier than ever to create content, B2B buyers are drowning in a sea of digital noise. To rise above generic “AI slop”, the new differentiator is no longer only visibility, but the ability to convey authentic brand authority. More often than not, it is the perceived credibility and depth of a brand's messaging that decides whether B2B companies are shortlisted or ignored by well-informed decision makers. So how can B2B companies build a solid thought leadership strategy that creates trust and sets them apart from competitors? That's why we're talking to Jamie Thomson (Copywriter and Founder, Brand New Copy), who shares his expertise and insights on why authority now matters more than visibility in B2B content. During our conversation, Jamie emphasized that true authority is built through consistent communication and unique insights rather than controversial stances. He criticized the over-reliance on AI for content ideation and encouraged businesses to focus on their unique selling points and authentic company culture. Jamie stressed the need for documented brand positioning and strategic messaging to build credibility across all channels. He also underscored the value of thought leadership and social proof in signalling authority, and suggested that businesses should invest in understanding and documenting their positioning for success in the long run. https://youtu.be/k4H-0M5ZL7g Topics discussed in episode: [02:47] The end of easy visibility: Why AI overviews and shifting algorithms mean you can no longer control traffic through traditional SEO alone. [07:09] Redefining authority: Authority isn’t about being controversial or loud; it is built through the consistency of your message and brand voice. [13:31] Chasing the right metrics: Why “visibility for visibility’s sake” is a vanity metric, and how to tie your content strategy to actual business outcomes. 
[19:39] The credibility anchor: How being consistent with your own unique data and statistics keeps your brand from becoming an “average” forgettable competitor. [21:42] Messaging for committees: A simple 3-step formula to establish messaging that resonates with human decision-makers, even in complex B2B environments. [27:35] Signaling authority: Practical ways to use “social proof” and unique data to back up your claims in proposals and on your website. [31:18] Future-proofing your brand: Why documenting your positioning today is the only way to maintain longevity over the next decade. Companies and links mentioned: Jamie Thomson on LinkedIn Brand New Copy Copywriting Course at Brand New Copy Transcript Jamie Thomson, Christian Klepp Jamie Thomson 00:00 You know, maybe it’s a personality thing, but like, I’m not particularly controversial in my marketing, and I do think people take that stance, like we are the young upstarts, or we are going to make a point of disagreeing with this company so that we can get engagement, whether they believe the sort of stance they’re taking or not. It’s, it’s almost that sort of strategy of, there’s no such thing as bad press, and it’s probably effective short term, and that’s why people are doing it. But if you’re looking to build a sort of a future-proof business, it comes back to that idea of authority being about consistency. Unless your whole strategy is to be controversial, it’s more of a short-term gain tactic. I think strategy is even a strong word. I think it’s a tactic. Christian Klepp 00:48 With AI making it easier than ever to create content, B2B buyers are drowning in a sea of digital noise. To rise above this noise, the new differentiator needs to be delivered through authority. More often than not, it’s the credibility of a brand’s messaging that decides whether they’re shortlisted or ignored. So how can B2B companies leverage this and build their credibility? 
Welcome to this episode of the B2B Marketers on the Mission podcast. I’m your host, Christian Klepp, and today I’ll be talking to Jamie Thomson, who will be answering this question. He’s an award-winning copywriter and founder of Brand New Copy who puts strategy at the center of the process to define what the copy should achieve. Tune in to find out more about what this B2B Marketer’s Mission is. Okay, and off we go. Mr. Jamie Thomson, welcome to the show, sir. Jamie Thomson 01:34 Hi, Christian. It’s good to speak to you again. Thanks for having me on. Christian Klepp 01:38 Great to have you here. I mean, we had such a dynamite conversation, like, a few weeks ago. I should have, like, hit record on that conversation too, right? Like, yeah, absolutely, Jamie, I’m really looking forward to this conversation because, you know, one of the things that you’re going to talk about today is, like, near and dear to me as somebody that also dabbles in the world of copywriting for B2B. But um, so here we go, right? So Jamie, you’re on a mission, I’m going to say, to help B2B companies define their messaging, strengthen their positioning and communicate with authority across every channel. So this is really serious stuff here. Okay, so for this conversation, I’d like to focus on the topic of why authority now matters more than visibility in B2B, right? So I’d like to kick off the conversation with two questions, right? And I’m happy to repeat them. First question is, why do you believe authority is important, especially in an age where AI is creeping into B2B content and everything else? And where do you see a lot of B2B brands falling flat with the authority piece? Jamie Thomson 02:47 Yeah, so I think, I think authority is more important now than it ever has been because, like you said, there’s a lot of, like, LLMs (Large Language Models) now kind of doing a lot of the marketing work that was maybe, you know, handled by humans before. 
I think that, you know, sort of the background context to this is that, as Marketers, we don’t have as much control over the visibility of our content as we used to. Like Google, for example. You now have AI (Artificial Intelligence) overviews. So even if you get to, like, position one in Google, you’re still at the bottom of the page, because you’ve got your AI overviews, you have sponsored results, and then there’s the organic listings underneath. And even if you’re position one, you’re still at the bottom of the page. As a result, website traffic has reduced, and people aren’t getting the same kind of, like, traffic numbers that they used to on LinkedIn as well, like, the way that the algorithms are sort of working nowadays. There doesn’t really seem to be any rhyme or reason as to which posts perform well. It seems to be the sort of casual, off-the-cuff posts that seem to get a lot of attention. The genuinely useful, you know, thought leadership stuff has kind of been pushed to the back burner a little bit. So I think authority is important because we don’t have as much control over visibility as we used to, and I think it’s the genuinely useful content that is the stuff that’s going to get shared, whether or not the algorithms are going to push it. So if you have produced a piece of content that has, like, really unique data points, that is genuinely useful to other businesses, it’s going to get shared online. It’ll get shared internally between companies, and it’ll get linked to as well. And again, to answer your second question, where do a lot of sort of B2B brands miss the mark? I think the main thing is that the content they’re producing isn’t genuinely useful. There are a lot of brands across industries that are kind of saying the same thing as their competitors. 
And I don’t know for sure, but I have a sort of inclination that it’s down to LLMs, because they’re kind of relying on, like, ChatGPT for their ideas. They’re asking ChatGPT to give them ideas for content. And, you know, ChatGPT can give you the output, but it can’t give you the input. You know, it’s a technology of averages. So if you’re looking to LLMs for ideation, it’s going to give you the average of what everyone else in the industry is saying. So it’s important that businesses are really doubling down on their ideation and the things that make them unique as a company, like their unique selling points, their value propositions, their company culture. You know, the people behind the business, that’s kind of what makes a company’s culture, and ChatGPT, LLMs, they don’t really have any first-hand experience of that. And it’s such a nuanced thing that you’re never going to get, like, effective results if you’re asking LLMs for the ideas in the first place. If you’re using it for execution, to help guide style and tone a little bit, then that’s fair enough. But, yeah, it’s important that brands are sort of really doubling down on the ideation. You know, that’s it, I think: just genuine, unique insights that people are actually going to be interested in reading. Christian Klepp 06:38 Absolutely. I had a couple of follow-up questions for you there. I mean, this is great stuff. This might sound, like, overly simplified, I mean, for lack of a better description, but like, just, let’s clear the air here a little bit. Define, from your experience and your own interpretation, define authority, because that also gets thrown around very loosely, I feel, almost as much as the term “you’ve got to add value.” I mean, like, you know, what does that actually mean, right? Jamie Thomson 07:09 Yeah, yeah. 
So to me, authority is about a brand communicating their messages in a consistent way, whether that is the actual content of the messages or the way that they actually communicate it, in terms of brand tone of voice. So authority, to me, is about consistency, more than it is about being emphatic or controversial or overly confident. It’s more about consistency and how they communicate their messages to their audience. Christian Klepp 07:44 You brought up something there, and I’m going to throw out another question, because you find this a lot on LinkedIn, at least from my experience, that people put out a lot of pieces, I’m going to just dare to say, under the guise of authority, but what it actually is, is just an extremely contrarian point of view. And it’s almost like, you know, I’ve got to just put my thoughts out there, because I want my voice to be heard. But it’s not necessarily authority. It’s just disagreeing with the status quo. What’s your take on that? Jamie Thomson 08:15 Yeah, I mean, to me, that’s, that’s kind of, it’s almost performance marketing. It’s just performing, if it’s, like, visibility for visibility’s sake. You know, maybe it’s a personality thing, but like, I am not particularly controversial in my marketing, and I do think people take that stance, like we are the young upstarts, or we are going to make a point of disagreeing with this company so that we can get engagement. You know, whether they believe the sort of stance they’re taking or not, it’s, it’s almost that sort of strategy of, there’s no such thing as bad press, and it’s probably effective short term, and that’s why people are doing it. But you know, if you’re looking to build a sort of a future-proof business, it comes back to that idea of authority being about consistency. Unless your whole strategy is to be controversial, it’s more of a short-term gain tactic. I think strategy is even a strong word. I think it’s a tactic. It’s not really magic. 
Christian Klepp 09:19 Yeah, yeah, I love that you said it was performance, performance marketing. You know, it almost feels like they’re, they’re playing the algorithm, or they’re trying to, like, just get more engagement. And it’s true, like, whether they actually believe what they’re saying or not, at least they’re getting more eyeballs on it: “Oh, look what this guy said.” Yeah. Jamie Thomson 09:38 I mean, you see it in so many different ways. Like, a lot of the time, it’s with job postings as well, like, especially for consultants and freelancers. You see, like, you have a potential opening for a freelance position, you know, “comment below if you know anyone that would be interested.” Then again, I don’t know for sure, but it seems very performative to me. Has that company actually reached out to people directly about the job? Have they advertised on job sites, or are they just posting about it as a potential opportunity for the sake of engagement, knowing that people will be replying and tagging other people? And yeah, it’s that kind of a short-term tactic. Christian Klepp 10:22 Exactly, exactly. Before we go on to the next question, I have one final follow-up for you on this topic, right? Like, so where do you believe that sometimes things go awry with brands? Because, A, it’s about time and speed. They need to get something out quickly. They needed it the day before yesterday. And hurry up and let’s, let’s get some volume out there. Let’s get plenty of content out there, right? So that’s one thing. The second thing is, do you feel that they miss the mark also because, I’m just gonna say it, they just generally don’t understand who the target audience is? Jamie Thomson 11:03 Yeah, I think you’re right on both accounts there. I mean, you know yourself, Christian, how long it takes to produce a good piece of content. It takes research. It’s not something that you can kind of write in half a day. 
So I do think that’s part of it. There’s that pressure of always having to be seen. And so, yeah, I think a lot of businesses put stuff out either because it’s trending, because they see other people doing it, or because they’ve asked an LLM for topic ideas, and as a technology of averages, it’s going to give you ideas that are already out there. So yeah, I think that’s definitely part of it. And then the second part, I think you’re totally bang on with that as well. I think a lot of people just don’t really understand what their positioning is in the industry. I say people, I’m talking about businesses, but at the end of the day, it’s still people that you’re talking to. Even though B2B is business to business, the people making the decisions are still human beings, so your content needs to resonate with them. And I think people now have this kind of detector for when something has been genuinely thought out. Thoughtful content is becoming few and far between because of LLMs and because people can produce things quickly, and it’s kind of content for content’s sake. So yeah, I think people just don’t understand their positioning in the industry, what their values are, and what stance they’re taking, really. They’re just jumping from one topic to the next, hoping that something is going to go viral, which I guess they’re hoping will then lead to some sort of business outcome, i.e. sales. But the stuff that makes the sales is the stuff that had to be properly thought out, in my opinion. Christian Klepp 13:06 Yes, oh yeah. Imagine that, wow, properly thought out. Absolutely, absolutely. I’m glad you brought that up, because that’s a great segue into the next question about key pitfalls, right? When we’re talking about a brand building its credibility and authority. 
What are some of the key pitfalls that B2B marketers and their companies need to look out for, and what should they be doing instead? Jamie Thomson 13:31 Yeah, I mean, one of the key pitfalls that I see is the whole visibility for visibility’s sake. It’s kind of a vanity metric, in a way. So you’ve made this piece of content and it’s been viewed a thousand times, or you’ve made the post and it’s been liked by 200 people, and unless that is tied to a business outcome, it’s just visibility for visibility’s sake. And so one of the key pitfalls, I think, is that a lot of companies don’t tie their content strategy to their business outcomes enough. They’re chasing engagement because it looks good in reports and it looks good to stakeholders. But the reality is, unless that content has resulted in an inquiry or a product sale, how successful has it really been? If it was just a one-off, let’s-try-this, not part of the strategy, and it hasn’t really resulted in a sale or an inquiry, then can we deem it to be successful? That’s up for the business to decide, but I think that’s a common pitfall. And I think the second one for me is just what we said before about trends. Because I work with businesses across a few different industries, mostly finance, technology, energy and sustainability, I see a lot of businesses jumping on trends in terms of the things that they’re talking about, their messaging. It’s almost like one person has started talking about it, and they’re keeping an eye on their competitors, and they think, well, we need to keep up with that, so we need to have an opinion on this as well. 
And I mean, there’s a time and a place for jumping on trends, especially if it’s something that there is an expectation on that company to respond to, like a world event, but it needs to be part of their overall strategy for it to be effective. Otherwise it’s just reactive. It’s kind of firefighting. It’s not really cementing any real foundation for the future. So yeah, those would be my two common pitfalls that I see. Christian Klepp 15:50 You’ll excuse me if I’m grinning here, but the point you brought up just reminded me of a client that I worked with many years ago. I’m not going to say who it is, to protect their identities, but part of the briefing was that they asked us to come up with a viral video, okay? To which I said, you know, respectfully, you don’t get to decide if your video is viral. That’s something that the market decides. And believe it or not, Jamie, it was in fact a B2B campaign. So that already in itself made me scratch my head a little bit at the brief, yeah. And it was one of those moments where, okay, well, why are we doing this? What are we hoping to achieve? What’s the outcome? And how does that tie in, like you said, to your business goals, right? And they basically said something to the effect of, well, everybody’s doing one, so we think it’ll be good to do this as well. And that was one of those moments in my agency days where we were very confident that we would be okay if we walked away from that project, and we did. We just said, sorry, can’t help you, right? Because even in my wildest dreams, I could not imagine how we would have been able to pull that off. Not from a production perspective, because if you want to make a video, there’s many ways to do that. 
I didn’t know how to pull it off from a marketing, distribution perspective. You know what I mean? Jamie Thomson 17:37 That’s stuff that’s kind of outwith your control as an agency, as the creator of the content, or even as a business. Like you said, viral videos aren’t really something that’s meant to be manufactured. There are just too many variables that need to come together for something to go viral. So it’s a very difficult thing to manufacture without, of course, paying for views or that kind of thing. You know, I’m a big advocate of the idea that sometimes what you don’t say is as important as what you do say. You don’t need to be everything to everybody all the time. Christian Klepp 18:21 Especially in B2B. Jamie Thomson 18:22 Yes. Christian Klepp 18:25 Can you just imagine? I mean, you mentioned a couple of industries now, finance, tech and energy. Can you imagine if you had an energy client now that was also trying to reach out to finance people? Jamie Thomson 18:33 Yeah, yeah. That’s the thing. Businesses need to understand their audience, but more importantly, they also need to understand their positioning in the industry. What are they known as in the energy sector? Are they the scrappy upstarts? Are they the established, respected international company? Because all these things influence the way that they communicate and the way that they speak to their audience as well. And you know, if a viral video fits that strategy, then I guess, fair enough, you can try and manufacture it, but more often than not, I would say it’s more about being consistent, sticking to the plan. It is an expensive gamble. Christian Klepp 19:22 An expensive gamble. 
Yes, all right. In our previous conversation, you talked about how it’s the credibility of a brand’s messaging that decides whether they’re shortlisted or ignored. Could you elaborate on that? Jamie Thomson 19:39 Yeah, I think the credibility anchor comes from the consistency side of things. If you are consistently communicating your messages in a way that is also consistent with brand voice, then you are more likely to be remembered by people. I think there’s a marketing statistic that says the average consumer, I know we’re talking about business to business, but people in general, the average consumer needs 12 touchpoints for something to result in a sale for a business. They need to see the same message 12 times before it really hits home, or before they realize that’s something they need. I’d imagine it’s probably even higher on social media, where people are constantly scrolling. But that 12-touchpoint figure sort of demonstrates how consistent you need to be in order to have the credibility. And if you’re not being consistent and you’re just saying the same thing as everybody else, then you are essentially becoming an average brand like everybody else, and that’s forgettable. Whereas the companies that are really digging deep into their own data and their statistics and the signals they are seeing in the industry, that only they have access to, and making those known in their communications, those are the ones that are ultimately going to be remembered. Christian Klepp 21:14 Yeah, yeah, no, absolutely, absolutely. And on that note, could you just walk us through how you think B2B marketers can use that messaging and copywriting to establish credibility, especially in the B2B context? 
We’re always talking about decision makers or buying committees, so we’re not just talking about one person, right? We’re talking about, as the name suggests, a committee, a group of people, right? Jamie Thomson 21:42 Yeah. I mean, the way that I generally do it with clients is I host a workshop, and during that workshop we would first of all establish their messaging. So what is it that the business wants to say in the first place? And then we work out priorities: what messages are the most important for the specific channels that the business uses. And then you look at the execution side of things, like the tone of voice and the style and that kind of thing. But businesses can do this themselves in-house, following a simple three-step formula: essentially, deciding what they want to say, i.e. their messages; which messages are the most important; and how they want to communicate it. With the last part, on the execution, that’s where LLMs can be useful: checking grammar, reworking things from notes, using it to proof. But the initial idea needs to come from the business. So yeah, I think by following that process, it makes the final content appear more thoughtful, and people do pick up on that. There’s a reason that reports and statistics and white papers do well for generating leads. It’s because they are genuinely useful. They’re thought leadership pieces, as opposed to just one person’s opinion, which is maybe the same as someone else’s, or the opposite, controversy for controversy’s sake. 
Yeah, people can really tell when a piece of content has had a lot of thought put into it. Businesses notice that. That’s the kind of content that resonates with people. Like I said before, even though it’s business to business, you’re still communicating to people. Regardless of the target audience and the industry and the demographics, it’s still a person making the decisions as to whether they’re going to use that company’s services or buy their products. Christian Klepp 23:54 Absolutely, absolutely. And I think another thing that can also be kind of fun to do in B2B, especially with white papers and reports and what have you, is to extract some of those nuggets, right? Extract some of that data. And I’m just gonna throw one of them out there. Many years ago, we worked with a company that produced steel. And I can’t remember how much steel they produce, but they said something to the effect of, we produce enough steel to wrap around the Earth four times, right? Something along that line. Or even at home, with a consumer product: we have this plastic wrap, and it actually says on the packaging that it can cover an entire football field, right? Just facts where it’s almost like, did you know, or hey, by the way, right? And then you can get into something more serious too, because we’re dealing with reports like that as well. Like, last year most retail brands invested about 30% more on AI. And if you’re in the industry, you might be like, yeah, I kind of knew they were investing in AI, but a full 30% more of their budget? What exactly are they investing in? And yeah, that’s why you should download the report. Jamie Thomson 25:20 It’s the way that you present that information to the public that makes it interesting. 
And as you know yourself, that’s kind of the job of a copywriter: to simplify that complex information. Like that fact about the plastic wrapped around the world four times, that’s quite visual. I can visualize that as a consumer, and I think, oh, that’s pretty cool. But if you just give me the cold, hard stats without context, or without that sort of visual with it, it doesn’t really mean much to me. And yeah, that’s kind of our job as communicators in B2B: to simplify that complex information. Look for the nuggets. And if you have a genuinely useful report, that can be enough to give you months’ worth of content, on social media and in thought leadership articles, just expanding on an idea within that report. So yeah, it might take a bit of investment up front to get the data, and to get the process for gathering that data and the methodology in place. But once you’ve done that, and you’ve written the report and it’s out there, that gives you content for potentially months. Christian Klepp 26:32 Absolutely, absolutely. And I think that’s one of the challenges of a copywriter, right? There’s this expression in North America: how do you get more juice out of the squeeze? So how do you stretch that, give it more longevity, beyond, well, here’s the report, off you go? Like you said, stretch it out over months. Have more ammunition for social media content, promotional content, perhaps even something on the website, something along that line, yeah. Jamie Thomson 27:07 Like I said, it comes back to strategy. If you have the strategy in place, then that stuff will follow, because you’ll have thought about it before the report was even published. Christian Klepp 27:18 Yep. Jamie Thomson 27:19 Yeah. Christian Klepp 27:19 Yep, absolutely, okay. 
I mean, on the topic of authority, give us some practical techniques for signaling authority across websites, campaigns and proposals. And I know this isn’t a one-sentence answer, so off you go. Jamie Thomson 27:35 Yeah. I think the first thing that comes to mind is taking a stance. And I don’t mean being controversial; I mean having a clear idea of where the company stands in the industry, what their positioning is. That in itself is a useful technique. It’s not something that you can achieve overnight, but with a workshop and getting it all documented, that can give you a clearer sense of purpose as a business. I also think demonstrating expertise through thought leadership content is a really useful technique for signaling authority. You know, as a company, you may not even realize it, but you have access to data that other companies don’t. That in itself is unique. Even if you don’t have as much data, or if your data says something different from your competitors’, it’s still your data and it’s still useful. And that’s the kind of thing that can be turned into thought leadership content. You know, we’ve discovered that 50% of X prefer this. That kind of insight-driven content is the stuff that generally performs well, because people are naturally drawn to it and they find it genuinely useful. Yeah, I think it’s that idea of social proof: showing that you know what you’re talking about as a business, rather than simply telling people, because that’s what LLM content tends to do. It makes vague claims that anybody can make. But the proof is in the pudding: the businesses that actually demonstrate their expertise are the ones that get remembered. 
And so yeah, that comes through thought leadership content, which is data-driven, or even something as simple as social proof, like providing evidence of a case study that you have written with a client, or a business outcome. That is a signal of authority that shows you can back up all the claims you’re making in your messaging. Yeah, those would be my top two practical tips. Christian Klepp 30:12 Absolutely. Well, you’ve laid it out so beautifully. It sounds, from the outset, very easy to do, but we all know that in reality it’s much more challenging, right? Jamie, I know that you’re an award-winning copywriter, and you’re not a sage, and your job is not to prophesy, but I’m gonna have to ask you to assume that role for a second. Looking down the road with everything that’s going on now, and, you know, we’ve talked about AI and LLMs and whatnot, perhaps some practical advice. As we’re now, at the time of this recording, at the beginning of 2026, what is some advice that you would give B2B companies who are saying, yes, we would love to build our credibility, but AI and LLMs seem to be creeping into everything that we do? Give us some advice on how to deal with that moving forward. Jamie Thomson 31:18 Yeah, that’s a good question. I think my advice would be to take the time to understand your positioning and to document it. With the way that marketing is going and the way that the industry is evolving, I do think the businesses that are going to be here in the next 10 years, that are going to have that longevity, are the ones that are investing the time now to understand where they are positioned in their industry and where they want to be positioned in 10 years’ time. 
But crucially, having it documented so that it’s being used consistently across the business, from sending internal emails to writing reports for the public. So from a practical point of view, that’s things like understanding the business values, how the work the company is doing is a reflection of those values, and how that’s communicated to people. If it’s a business that’s selling a product: what are the unique selling points of the product? What are the benefits to the end users? And how are we saying that? Because in a lot of B2B industries, I think the strategy of competing on features is becoming a bit redundant. As technology improves, it’s quite difficult for companies to be able to claim unique features, because everybody has access to the same tools. And so really, if you flip that on its head and look at it from the customer’s point of view: why would the customer choose one company over another that’s got essentially the same product or service? That’s going to come down to brand affinity, and how much the company is able to empathize with the target audience. If they can really understand what their customers’ pain points are, then the customer is ultimately going to choose their service over another’s. And that comes down to having it all documented. You may have an intuition about what these things are, but as your business evolves, your intuition about these things will change, and you’ll get scope creep, or you’ll want to jump on trends. If you do have it documented as an internal process, you’re more likely to stick to it in the future. And if you do get to the point where you want to change your positioning in the industry, because maybe you’ve had more success than anticipated, or something in the market has changed, then that in itself should be a process. 
You should go back to the drawing board and look at what processes you have documented, and think about what needs to be changed before you reactively move in a different direction. That would be my advice, with my futurist cap on. That’s what I would say. Christian Klepp 34:23 Yeah, yeah. Well, that’s some pretty solid advice, and thanks for sharing that. I totally agree. People have to understand their positioning in the market. Most importantly, they also have to document it. It’s amazing how many companies I’ve worked with that don’t document that. I wouldn’t call it a projection, but it’s almost like the positioning and their vision: what do they aspire to become? I know that sounds more individualistic, but you can put that into the context of an organization as well. What do you aspire to become in 10 years, in 20 years? Where is this business going to go? Jamie Thomson 35:06 Absolutely, that’s it. It’s something I do in my own business with clients. If a company is going through a rebrand and they need their website rewritten to reflect the new positioning, the first thing I suggest is, well, let’s have a workshop and work out what you want to say. I’ll create a messaging guide for you, and I’ll create a tone of voice guide for you. And sometimes you get pushback: well, why do we need that? I guess the answer is, well, I could rewrite your website and make it up as I go along, if you want, but it’s not going to be anywhere near as effective as it would be if we have all this important stuff documented in the first place. You need to have a structure, a plan, a strategy before the execution happens. 
And if you do the first part well, then the actual execution of it, whether we’re talking about writing or any other sort of campaign, that last 20% is almost just the icing on the cake. Because when you get there, you already know what you want to say and how you want to say it; you just need to get the content down, whether that’s filming or fingers to keyboard. That 20% comes a lot easier when there’s a plan, when there’s a structure in there from the start. Christian Klepp 36:29 Absolutely, absolutely. Jamie, this has been an incredible conversation. Thank you so much for coming on and for sharing your expertise and experience with the listeners. Please, a quick introduction to yourself and how people out there can get in touch with you. And for those listening to the audio version of this recording, Jamie and I are actually color coordinated today. Jamie Thomson 36:53 We were emailing each other before, making sure that we were. Christian Klepp 36:58 That’s it. That’s it. Jamie Thomson 37:01 Thanks very much for having me on, Christian. Like I said, I have listened to the podcast myself over the past few months, and I’ve resonated with a lot of the content that your other guests have been putting out there. So yeah, it’s really a privilege to be on it. And people can get in touch with me, but first I’ll just explain who I am. My name is Jamie, and I’m a strategic copywriter and messaging strategist. I run a copywriting studio called Brand New Copy, and have done since 2013, and I help brands establish their messaging and their tone of voice through workshops and deliverables like thought leadership articles, white papers, annual reports, and website copywriting. And I also provide training to businesses, agencies and other copywriters. 
And I have a flagship course called the Brand New Copywriting Course, which covers the strategy behind copywriting. So yeah, if you want to get in touch with me, the best way would be through email, which is Jamie Thomson at brandnewcopy.com Christian Klepp 38:13 Fantastic, fantastic. And we’ll be sure to drop those links in the show notes when this episode is published. So once again, Jamie, thanks so much for your time. Take care, stay safe, and talk to you soon. Jamie Thomson 38:22 Thanks, Christian. Christian Klepp 38:24 All right. Bye for now.
Why Every Affiliate Manager Needs to Understand What's Happening to Search Right Now

Search is changing faster than most programs can adapt. AI-powered platforms like ChatGPT and Gemini are absorbing customer journeys that used to generate trackable clicks, and the content that influences purchasing decisions is increasingly going unrecognised and unpaid.

In this episode, Lee-Ann sits down with Alex Springer, Director at openattribution.org, and Leanna Klyne, Head of Agency at KonverJ, to unpack what open attribution actually means, why last-click attribution was already broken before AI arrived, and what the industry is doing about it together, many of them for the very first time.

If you work with publishers, run an affiliate program, or depend on content-driven traffic to generate sales, this conversation will reshape how you think about measurement, value, and what comes next.

Talking Points Include:
- Why content creators are producing value they will never be paid for, and what needs to change before the affiliate industry loses its commercial foundation entirely
- The difference between how people shop and how AI thinks people buy, and why that gap is exactly where affiliate marketing's future opportunity lives
- Why last click was always a fiction, and why the shift to AI-assisted search is finally forcing the industry to confront it
- What Open Attribution actually is, and why it starts with something as simple as a list of URLs that changed everything

Listen to Find Out More About:
- What the agentic commerce protocols from OpenAI and Google actually do, and why the contributions Open Attribution is making to them matter for every publisher and brand in performance marketing
- Why some of the highest-quality publisher content has been deliberately removed from AI training sets, and what that means for the accuracy of AI recommendations right now
- The early warning signs that brands and affiliate managers should be watching for as AI-generated content starts to game LLM visibility the same way SEO was gamed in the early days of Google
- How the SPUR initiative and the APMA AI task force connect to what Open Attribution is building, and where compliance and governance conversations are actually happening
- Why Alex believes websites are not going away in five years, and what types of purchases will continue to require the kind of considered, content-led journeys that affiliate publishers are built to support

Key Segments of This Podcast and Where You Can Tune In to Go Direct:
[02:47] Alex introduces Open Attribution, his decade in performance marketing, and why he shifted focus from AI as a technology to AI as an industry actor
[05:45] Why last click was already broken before AI, and what transparency and usage auditability actually mean for content owners and brands
[27:05] The CPA debate: whether cost-per-acquisition still makes sense, what influence really means now, and why the shopping journey has always been more complex than the model we used to measure it

Ready to Build a Smarter Affiliate Program?

If this episode raised questions about how your program is measuring influence, attributing value, or preparing for an AI-first customer journey, the KonverJ team can help. We work with brands and publishers to build affiliate strategies that are built for where performance marketing is heading, not just where it has been. Get in touch with the KonverJ team to find out how we can help you build a program that performs in the new landscape.

Send me a text with your questions
In this episode, Carlos Gonzalez de Villaumbrosia interviews Chris Geoghegan, VP of Product at Zapier. As the company's first-ever Product Manager, Chris has spent nearly a decade scaling Zapier into a $5 billion automation giant that serves over 3.4 million businesses and 69% of the Fortune 1000.

Zapier is not just building AI tools; they are powering their entire company with them. Chris reveals that his team currently runs over 800 active AI agents internally to manage everything from calendar prep to engineering triage. He breaks down the Code Red moment that shifted their strategy and how they are defining the future of Agentic Workflows.

What you'll learn:
- Agentic vs. Deterministic: Why standard workflows follow a set path, while agents can reason, access knowledge, and change course to solve problems.
- The Orchestration Layer: How to hire and onboard AI agents using Context Engineering and Model Context Protocols (MCPs).
- Adoption vs. Transformation: Why adoption is just doing old tasks faster, while transformation unlocks business models that were previously impossible.
- Building a Moat: How Zapier uses its vast data on user intent to stay ahead of commodity LLM features.

Key takeaways:
- Treat Agents Like Employees: You can't just deploy an agent; you must onboard it with specific context and tools to be effective.
- Lead by Building: Transformation fails if leaders don't use the tools. Zapier's execs do show-and-tell sessions to prove they are hands-on.
- AI Governance is Key: To move up-market to the enterprise, you must solve for Observability (who sent what data) and Access Control.

Credits:
Host: Carlos Gonzalez de Villaumbrosia
Guest: Chris Geoghegan

Social Links:
Follow our Podcast on TikTok here
Follow Product School on LinkedIn here
Join Product School's free events here
Find out more about Product School here
We're wrapping up our AI Tools series with a special episode featuring just the two of us, Matt and Moshe, looking back at what we really learned (and where we're still confused) about AI in product management.

Across this conversation, we revisit the core themes that emerged with our guests and in our own experiments: from "vibe coding" and no-code builders, to LLM assistants, enterprise privacy, agentic workflows, and the evolving role of the product manager. We share candid stories of using tools like Google Stitch, Figma/Figma Make, FlutterFlow, Base44, and others to design and prototype a real mobile app; what worked, what broke, and why credits, pricing, and model limits matter far more than the glossy demos suggest.

Join Matt and Moshe as they explore:
- How our AI Tools series evolved, from "let's review tools" to "AI is not one thing, it's many different problem spaces"
- Why "vibe coding" is a misleading umbrella term, and how it means something different to devs, PMs, and designers
- Lessons from using AI for design and prototyping: inconsistent outputs, beta-stage rough edges, and the pain of credit-based models
- Build vs. buy for AI: integrating foundation models vs. building your own, and what that means for pricing, UX, and reliability
- Enterprise realities: privacy, security, and why tools like Copilot/Gemini have such an advantage where data and IT policies matter
- How conversations with our guests (Sani, Eva, Elena, Stav, Yaron, Marcos and Adir) shifted our thinking about workflows, orchestration, and agents
- The future of agent-to-agent interactions: what happens when AIs negotiate purchases and workflows with minimal human prompts
- Why first principles and business outcomes still matter more than any single AI tool
- How the PM role is changing: less tool-chasing, more orchestration, strategy, and clarity about what problem we're actually solving
- What topics we'd tackle next, like pricing, packaging, and credit models for AI products, and how this series is shaping our own careers
- And much more!

You can connect with us and keep following what comes after this AI Tools series:
Product for Product Podcast: http://linkedin.com/company/product-for-product-podcast
Matt Green: https://www.linkedin.com/in/mattgreenproduct/
Moshe Mikanovsky: http://www.linkedin.com/in/mikanovsky

Note: Any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.

Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
The new episode covers a range of topics from the world of high tech, including IT security, medical technology, and artificial intelligence. Among the topics:
- the case at Meta, where the head of security granted an AI bot full access to email;
- the story of how OpenClaw was created as a hobby project;
- the prospects for specialized ASIC chips for running Llama 3.1;
- the development of preventive medicine and the limitations of medical datasets;
- the environmental challenges of data centers;
- philosophical questions of identity and consciousness in the context of AI development.

00:30 - the security mistake at Meta
03:55 - health and medical technology
09:50 - medical startups and their needs
13:02 - IT security analysis and new technologies
18:53 - environmental issues and ethics in technology
23:16 - tariff blocking and its consequences
27:25 - specialized chips for AI
35:07 - music generation and how it is perceived
40:19 - generating playlists with AI
47:15 - measurement technologies and their accuracy
51:50 - artificial intelligence in everyday life
Meta Ray-Bans are sending private videos to human workers in Kenya, and Dr. Niki talks about what we know about the effects of LLM use on mental health.Starring Jason Howell, Tom Merritt, and Dr. Niki.Links to stories discussed in this episode can be found here. Hosted on Acast. See acast.com/privacy for more information.
This week we talk about Anthropic, the Department of Defense, and OpenAI. We also discuss red lines, contracts, and lethal autonomous systems.

Recommended Book: Empire of AI by Karen Hao

Transcript
Lethal autonomous weapons, often called lethal autonomous systems, autonomous weapons systems, or just 'killer robots,' are military hardware that can operate independently of human control, searching for and engaging with targets based on their programming, and thus not needing a human being to point them at things or pull the trigger.

The specific nature and capabilities of these devices vary substantially from context to context, and even between scholars writing on the subject, but in general these are systems, be they aerial drones, heavy gun emplacements, some kind of mobile rocket launcher, or a human- or dog-shaped robot, that are capable of carrying out tasks and achieving goals without needing constant attention from a human operator.

That's a stark contrast with drones that require either a human controller or what's called a human-in-the-loop in order to make decisions. 
Some drones and other robots and weapons require full hands-on control, with a human steering them, pointing their weapons, and pulling the trigger, while others are semi-autonomous in that they can be told to patrol a given area and look for specific things, but then they reach out to a human-in-the-loop to make final decisions about whatever they want to do, including and especially weapon-related things; a human has to be the one to drop the bomb or fire the gun in most cases, today.

Fully autonomous weapon systems, without a human in the loop, are far less common at this point, in part because it's difficult to create a system so capable that it doesn't require human intervention at times, but also because it's truly dangerous to create such a device.

Modern artificial intelligence systems are incredibly powerful, but they still make mistakes, and just as an LLM-based chatbot might muddle its words or add extra fingers to a made-up person in an image it generates, or a step further, might fabricate research referenced in a paper it produces, an AI-controlled weapon system might see targets where there are no targets, or might flag a friendly, someone on its side, or a peaceful, noncombatant human, as a target. 
And if there's no human-in-the-loop to check the AI's understanding and correct it, that could mean a lot of non-targets being treated like targets, their lives ended by killer robots that gun them down or launch a missile at their home.

On a larger scale, AI systems controlling arrays of weapons, or even entire militaries, becoming strategic commanders, could wipe out all human life by sparking a nuclear war.

A recent study conducted at King's College London found that in simulated crises, across 21 scenarios, AI systems which thought they had control of nation-state-scale militaries opted for nuclear signaling, escalation, and tactical nuclear weapon use 95% of the time, never once across all simulations choosing to use one of the eight de-escalatory options that were made available to them.

All of which suggests to the researchers behind this study that the norm, approaching the level of taboo, associated with nuclear weapons use globally since WWII, among humans at least, may not have carried over to these AI systems, and full-blown nuclear conflict may thus become more likely under AI-driven military conditions.

What I'd like to talk about today is a recent confrontation between one AI company, Anthropic, and its client, the US Department of Defense, and the seeming implications of both this conflict and what happened as a result.

In late 2024, the US Department of Defense (which, by the way, is still the official title, despite the President calling it the Department of War, since only Congress can change its name) partnered with Anthropic to get a version of its Claude LLM-based AI model that could be used by the Pentagon.

Anthropic worked with Palantir, which is a data-aggregation and surveillance company, basically, run by Peter Thiel and very favored by this administration, and Amazon Web Services, to make that Claude-for-the-US-military relationship happen, those interconnections allowing this version of the model to be used for classified 
missions.

Anthropic received a $200 million contract with the Department of Defense in mid-2025, as did a slew of other US-based AI companies, including Google, xAI, and OpenAI. But while the Pentagon has been funding a bunch of US-based AI companies for this utility, only Claude was reportedly used during the early 2026 raid on Venezuela, during which now-former Venezuelan President Maduro was taken by US forces.

Word on the street is that Claude is the only model that the Pentagon has found truly useful for these sorts of operations, though publicly they're saying that investments in all of these models have borne fruit, at least to some degree.

So Anthropic's Claude model is being used for classified, military and intelligence purposes by the US government. Anthropic has been happy about this, by all accounts, because that's a fair bit of money, but also being used for these purposes by a government is a pretty big deal; if it's good enough for the US military, after all, many CEOs will see that as a strong indication that Claude is definitely good enough for their intended business purposes.

On February 24 of 2026, though, the US Defense Secretary, Pete Hegseth, threatened to remove Anthropic from the DoD's stable of AI systems that they use unless the company allowed the DoD to use Claude for any and all legal purposes: unrestricted use of the model, basically.

This threat came with a timeline (accede to these demands by February 27 or be cut from the DoD's supply chain), and the day before that deadline, the 26th, Anthropic's CEO released a statement indicating that the company would not get rid of its red lines that delineated what Claude could and could not be used for. On the 27th, US President Trump ordered that all US agencies stop using Anthropic tools, and said that he would declare the company a supply chain risk, which would make it illegal for any company doing business with the US government at any level and in any fashion to use Anthropic products or 
services, a label that's rarely used, and which was previously used by the Trump administration against Chinese tech giant Huawei on the basis that the company might insert spy equipment in communications hardware installed across the US if it were allowed to continue operating in the country.

Those red lines that Anthropic's CEO said he wouldn't get rid of, not even for a client as big and important as the US government, and not even in the face of threats by Hegseth, including that he might invoke the Defense Production Act, which would allow him to force the company to let the Pentagon use Claude however it likes, or Trump's threat that the company be blacklisted not just from the government, but from working with a significant chunk of Fortune 500 companies, include not allowing Claude to be used for controlling autonomous weapon systems, killer robots, basically, and not allowing Claude to be used for surveilling US citizens.

The Pentagon signed a contract with Anthropic in which they agreed to these terms, but Hegseth's new demand was that Anthropic sign a new version of the contract in which they allow the US government to use Claude and their other offerings for 'all legal purposes,' which apparently includes, at least in some cases and contexts, killer robots and mass surveillance.

So the Pentagon tried to strong-arm a US-based AI company into allowing them to use their product for purposes the company doesn't consider to be moral, and that led to this situation in which Anthropic is now being phased out from US government use. It'll apparently take about 6 months to do this, and some analysts speculate that timeline is meant to serve as a period in which further negotiation can occur, but either way, it's being phased out, and it may even have trouble getting major clients in the future as a result of being blackballed.

As all this was happening, OpenAI stepped in and offered its products and services to fill the void left by Anthropic in 
the US government.

OpenAI's CEO has been cozying up to Trump a lot since he regained office, and has positioned the company as a major US asset, too big to fail because then China will win the AI race, basically, so this makes sense. Its CEO released several statements and press releases in the wake of this further cozying, saying that they believe the same things Anthropic does, and that they're not giving up any credibility for doing this because they have the same red lines: no killer robots, no mass surveillance of US citizens.

But this is generally assumed to be bunk, because why would the Pentagon agree to the same terms all over again, and with a company that provides, for their purposes and right now, anyway, inferior services, instead of the one they just chased out and blackballed, and which was helping them do purposeful, effective things, like kidnapping a foreign leader from a secure facility?

Instead, what it sounds like is OpenAI is trying to have its cake and eat it too: saying publicly that they don't want their offerings used to control autonomous weapons systems or mass surveil Americans, but instead of writing that into the contract, they've got some basic guardrails baked into their systems, and they are assuming those guardrails will keep any funny business from happening. So it's a sort of gentleman's agreement with their clients that OpenAI products won't be used for mass surveillance or killer robots, rather than something legally binding, as was the case with Anthropic.

The response to all this within the tech world has been illustrative of what we might expect in the coming years. 
Many people, including folks working on these technologies, are halting their use of OpenAI tech in protest, and in some (at this point, at least, fewer) cases, people are quitting their OpenAI jobs, because they are strongly opposed to these use-cases and would prefer to support a company that takes a strong stand on these sorts of moral issues.

Some analysts also wonder if this will ensure the Pentagon only ever has access to inferior AI models, because it intentionally threatened and disempowered a key AI industry CEO in public, insisting that it had final say over how these tools are used. Many such CEOs are unaccustomed to that kind of public dressing-down, and are also doing the work they're doing for ideological reasons: they have beliefs about what the future, as enabled by AI technologies, will look like, and they believe they will play a vital role in making that future happen.

The idea, then, is why would they want to work with the Pentagon, or the US government more broadly, if that means no longer being in charge of the destiny of these tools they're putting so much time, effort, and resources into building? 
Why would they take on a client, even a big, important one, if that means no longer having any grain of control over the future of the world as shaped by the systems they're building?

We'll know a bit more about how all this plays out within the next handful of months, as this could serve as a moral differentiator between otherwise near-match products in the AI category, allowing companies like Anthropic to compete, both in terms of clients and in terms of employees, with the likes of OpenAI and xAI by saying, look, we don't want killer robots or mass surveillance, and we gave up a LOT, put our money where our mouths are, in support of that moral stance.

That could prove to be a serious feather in their cap, despite the initial cost. Though it could also be that the pressure the US government is willing and able to apply to them instead serves as a warning to others, and the likes of OpenAI and Google and so on just get better at speaking out of both sides of their mouths on this issue, creating sneakier contracts that let them seem, on paper, to take the same moral stance Anthropic did, while behind closed doors allowing their clients to do basically whatever they want with their products, including using them to control killer robots and to mass surveil US citizens.

Show Notes
https://www.kcl.ac.uk/news/artificial-intelligence-under-nuclear-pressure-first-large-scale-kings-study-reveals-how-ai-models-reason-and-escalate-under-crisis
https://www.axios.com/2026/02/26/ai-nuclear-weapons-war-pentagon-scenarios
https://www.nytimes.com/2026/02/27/technology/openai-agreement-pentagon-ai.html
https://en.wikipedia.org/wiki/Lethal_autonomous_weapon
https://www.theverge.com/ai-artificial-intelligence/885963/anthropic-dod-pentagon-tech-workers-ai-labs-react
https://www.theverge.com/ai-artificial-intelligence/886816/openai-reached-a-new-agreement-with-the-pentagon
https://arstechnica.com/tech-policy/2026/02/trump-moves-to-ban-anthropic-from-the-us-government/
https://apnews.com/article/anthropic-pentagon-ai-dario-amodei-hegseth-0c464a054359b9fdc80cf18b0d4f690c
https://www.wsj.com/tech/ai/whats-really-at-stake-in-the-fight-between-anthropic-and-the-pentagon-d450c1a1
https://openai.com/index/our-agreement-with-the-department-of-war/

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe
Our 235th episode with a summary and discussion of last week's big AI news!
Recorded on 02/27/2026
Hosted by Andrey Kurenkov and Jeremie Harris
Feel free to email us your questions and feedback at andreyvkurenkov@gmail.com and/or hello@gladstone.ai
Read our text newsletter and comment on the podcast at https://lastweekin.ai/

In this episode:
- Model and tool updates highlight Anthropic's Sonnet 4.6 (1M context; strong ARC-AGI-2 results), Google's Gemini 3.1 Pro (major ARC-AGI-2 jump and multimodal demos), xAI's Grok 4.2 beta (multi-agent debate), plus Anthropic's Claude Code "Remote Control" and Perplexity's multi-agent "Computer" coordinator.
- Compute and business moves include Meta's reported up-to-$100B AMD chip deal with warrant/equity incentives, MatX raising $500M to build specialized transformer chips shipping in 2027, World Labs raising $1B for world-model/3D environment tech, and a new startup raising $100M to simulate/predict human behavior.
- Infrastructure and geopolitics cover Stargate data-center delays amid OpenAI/Oracle/SoftBank control disputes and cash concerns, and China's plan to scale 7nm/5nm wafer output despite yield and tooling constraints.
- Research and safety/policy discuss optimizer gains from masked updates, "deep thinking tokens" as a reasoning-effort signal, LLM attractor-state behaviors in bot-to-bot chats, mechanistic interpretability of counting/line-wrapping, methods to map task difficulty to human time horizons, plus Anthropic–Pentagon contract tensions, Anthropic's report on distillation attacks (DeepSeek/Moonshot/Minimax), and OpenAI's report on disrupting malicious use.

A thank you to our current sponsors:
Box - visit Box.com/AI to learn more
ODSC AI - go to odsc.ai/east and use promo code LWAI for an additional 15% off your pass to ODSC AI East 2026.
Factor - head to factormeals.com/lwai50off and use code lwai50off to get 50 percent off and free breakfast for a year

Timestamps:
(00:00:10) Intro / Banter
(00:01:52) News Preview

Tools & Apps
(00:03:20) Anthropic releases Sonnet 4.6 | TechCrunch
(00:11:24) Google Rolls Out Latest AI Model, Gemini 3.1 Pro - CNET
(00:14:54) Elon Musk says Grok 4.20 public beta is now available: Capabilities of AI chatbot offered by xAI - The Times of India
(00:18:06) Anthropic just released a mobile version of Claude Code called Remote Control | VentureBeat
(00:21:01) Perplexity announces "Computer," an AI agent that assigns work to other AI agents - Ars Technica

Applications & Business
(00:23:40) Meta strikes up to $100B AMD chip deal as it chases 'personal superintelligence' | TechCrunch
(00:27:05) Nvidia challenger AI chip startup MatX raised $500M | TechCrunch
(00:31:00) World Labs lands $1B, with $200M from Autodesk, to bring world models into 3D workflows | TechCrunch
(00:33:07) Simile Raises $100 Million for AI Aiming to Predict Human Behavior
(00:33:52) Stargate AI data centers for OpenAI reportedly delayed by squabbles between partners — sources say OpenAI, Oracle, and SoftBank disagreed on who would have ultimate control of the planned data centers
(00:36:43) China to increase leading-edge chip output by 5x in two years, report claims — aims to lift 7nm and 5nm production to 100,000 wafers per month, targeting half a million monthly by 2030

Research & Advancements
(00:40:33) On Surprising Effectiveness of Masking Updates in Adaptive Optimizers
(00:48:03) Think Deep, Not Just Long: Measuring LLM Reasoning Effort via Deep-Thinking Tokens
(00:54:52) models have some pretty funny attractor states
(01:01:41) When Models Manipulate Manifolds: The Geometry of a Counting Task
(01:05:16) BRIDGE: Predicting Human Task Completion Time From Model Performance
(01:12:00) NESSiE: The Necessary Safety Benchmark -- Identifying Errors that should not Exist
(01:13:15) The least understood driver of AI progress
(01:21:45) The Persona Selection Model: Why AI Assistants might Behave like Humans

Policy & Safety
(01:25:04) Anthropic CEO Amodei says Pentagon's threats 'do not change our position' on AI
(01:33:04) Musk's xAI, Pentagon reach deal to use Grok in classified systems
(01:34:17) Detecting and preventing distillation attacks
(01:38:36) OpenAI details expanding efforts to disrupt malicious use of AI in new report - SiliconANGLE

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
We continue our CEO series with Douwe Kiela, the CEO of Contextual AI, who is addressing the challenges of building effective agentic applications. The shift to agentic amplifies the need for enterprises to improve their data management capabilities and infrastructure scaling. The best models won't perform well if there isn't well-built context to support them. Much like people, if there's not enough of the right information, decision making is going to suffer. There's an evolution from the prompt engineering needed to generate better results from LLMs, to the context engineering that crafts the right data to feed agents. This is an area where agents can also help to tackle the data quality problem that many enterprises face. Standing the old computing paradigm on its head, effective agentic applications ought to take garbage in and put information out. Well-built agentic architectures can understand data characteristics and not only evaluate their quality, but also classify them and apply the appropriate security controls to their use. The scope and scale of agentic potential demands much greater thought to achieve its full value. We need only look to the recent OpenClaw project to see both the upsides and downsides.

More S&P Global Content:
Next in Tech | Ep. 244: Agentic Customer Experience
Next in Tech | Ep. 222: FinOps – Managing Cloud and AI Costs
Next in Tech | Ep. 205: Agentic AI Impacts
AI for security: Agentic AI will be a focus for security operations in 2025

For S&P Global subscribers:
Agents in the enterprise: Laying the groundwork for automation
The CX AI Agent Index 2025
Agents are already driving workplace impact and agentic AI adoption – Highlights from VotE
Big Picture 2026 AI Outlook: Unleashing agentic potential

Credits:
Host/Author: Eric Hanselman
Guest: Douwe Kiela
Producer/Editor: Feranmi Adeoshun
Published With Assistance From: Sophie Carr, Kyra Smith
What if checkout was the growth engine instead of an add-on? We sit down with Ismael Wrixen, CEO of ThriveCart, to explore how a payments-first platform can power the modern creator economy, from a $99 template to a $10,000 mastermind, without locking you into a brittle tech stack. Ismael breaks down how ThriveCart layers affiliates, an LMS, and marketing automation on top of payments, then uses smart funnels to turn checkout into predictable revenue.

The conversation gets especially interesting around financing. As ticket sizes rise, traditional BNPL hits limits on geography and amount. Enter Thrive Pay Installments: interest-free plans using a customer's existing card limit, delivered globally in USD, CAD, GBP, EUR, and AUD. Early data shows 3.3x higher average order values, approvals jumping from ~40% to ~85%, and checkout time collapsing to five seconds. Merchants still get paid up front; buyers spread payments over three, six, or twelve months with no new loan application and no extra friction.

We also zoom out to the bigger picture of the creator economy and AI. Demand for reskilling surges as companies retool, and creators step in with specialized courses and coaching across finance, business, wellness, and AI. Ismael shares why flexibility is non-negotiable: keep your payment funnel, swap your LMS, integrate with 50+ tools, and sell globally with automated taxes and e-invoicing support coming online. We discuss where LLM commerce is heading, why human-in-the-loop purchasing still matters, and how adaptive pricing and localized experiences unlock international growth.

If you care about conversion, funding bigger baskets, and building a stack that won't trap you as AI reshapes the landscape, this is your roadmap.
On the program:
- Galaxy S26: what's new this year?
- LLMs could spell the end of online anonymity
- The Anthropic / DoD conflict escalates
- The rest of the news

Info:
Hosted by Patrick Beja (Bluesky, Instagram, Twitter, TikTok).
Co-hosted by Korben (site)
Co-hosted by Julien Cadot (Twitter).
Produced by Patrick Beja (LinkedIn) and Fanny Cohen Moreau (LinkedIn).
Royalty-free music by Daniel Beja

Le Rendez-vous Tech episode 655 – "Samsung Galaxy 2026: AI and private screen" – S26 Ultra, LLMs and de-anonymization, Anthropic vs DoW, iPhone 17e, WB & Paramount, FTTR
AI search is changing how buyers discover products. But most founders are either ignoring it, or getting sold misinformation.

In this episode, Jon Mest (https://chatrank.ai/ and https://justreachout.io/) breaks down what actually drives "AI visibility" and how he's building two bootstrapped companies in a market that's shifting weekly.

We get into the real execution behind:
- Why "pump out 1,000 blog posts" is bad strategy in AI search
- What AI models struggle with (and the on-page fixes that matter right now)
- The off-page signals that influence AI recommendations (reviews, Reddit, YouTube, real human sentiment)
- How Jon sells a $500/mo product without spray-and-pray outbound
- Partnerships vs affiliates, what worked and what completely failed (PartnerStack experiment)
- Why podcasts are underrated for both backlinks and AI citations
- Jon's "rotate AI tools weekly" habit to stay sharp across models
- Why bootstrapping beats VC for most SaaS right now (and when he'd reconsider)

If you're a SaaS founder trying to understand what's real in AI search, this one will save you time and mistakes.
Today marks the start of the twentieth edition of the Mobile World Congress in Barcelona, the world's largest telecom technology trade show, which runs for four days. But this major tech gathering isn't just about smartphones; many other sectors are represented as well. Artificial intelligence is expected to be everywhere on the stands. And for the first time, the African continent will have its own pavilion, where it is expected to present its LLM: a program capable, among other things, of recognizing and generating text, notably in Swahili or Zulu. But in this AI race largely dominated by the United States and China, does Africa have a card to play? Joining the debate:
- Guillaume Grallet, tech journalist at Le Point and commentator on France 24, author of the book Pionniers : voyage aux frontières de l'intelligence artificielle (Grasset), winner of the 2025 economics book prize
- Claire Zamuso, expert in emerging technologies and innovation at the Agence Française de Développement (AFD)
- Paulin Malatagia, Cameroonian lecturer and researcher at the University of Yaoundé, head of the "Artificial Intelligence and Data Science" research team.
The founders of Presage defend a radically different vision of artificial intelligence. In their view, World Models offer a path that is more frugal, more intelligent, and better suited to industrial challenges than today's generative AI.

Interview: Benjamin Rey, CEO of Presage, and Arthur Chevalier, CTO of Presage

Punchlines:
- A World Model understands the consequences of its actions.
- An LLM has no real understanding of the world.
- Cloud infrastructure can no longer be managed by humans alone.
- World Models are faster and more frugal than LLMs.
- LLM research is finished.

What is a World Model, and how does it differ from an LLM?
A World Model is an artificial intelligence capable of understanding the consequences of its actions. Unlike an LLM, which predicts words or generates text, a World Model learns the laws of the world it operates in. An LLM is a very good interface between humans and machines, but it doesn't truly understand the substance of what it generates. A World Model, by contrast, understands why an action produces an effect. It can simulate the future state of a system after a decision, which profoundly changes its capacity to reason.

Why apply World Models to the cloud?
The cloud has become extremely complex. You have to manage cybersecurity, costs, energy consumption, and the configuration of hundreds of services, while constantly monitoring dozens of parameters. Today, autonomous agents make decisions on infrastructure around the clock. Technical teams are losing control and don't always know when, or why, an infrastructure breaks. Our ambition is to use World Models to simulate the consequences of an action before it is executed. A model can predict, within a few milliseconds, the future state of an infrastructure after a change. That brings teams more control, more security, and less stress.

Are World Models an alternative to LLMs?
We believe so, and even a credible European alternative. LLMs are an excellent human-machine interface, but they have a glass ceiling. They consume enormous amounts of energy, require billions in investment, and don't truly understand the physical laws of the world. World Models, by contrast, need less data, less energy to train, and no energy at inference to produce a prediction. We believe the role of LLMs should be reduced to the interface, with decision-making intelligence entrusted to systems that are more frugal and better able to understand the real world.

What are your ambitions with Presage?
We raised 1.2 million euros to accelerate the development of our first model, Cloud One. Our approach is very applied: we are already working with partners, and we aim to have models in production with customers in the first quarter. Over time, World Models can apply to many other domains: autonomous cars, healthcare, construction... Wherever a system must understand an environment and act within it, this technology can make a difference.

Hosted by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
We cut through AI hype to show what actually scales: disciplined ops, brand-led content, and a practical framework for awareness, nurturing, and trust. Erik Huberman shares Hawke Media's scaling lessons, where AI helps, where it hurts, and how to build a moat in a noisy market.
• LLM shifts to persona-first, Q&A-led visibility
• AI as acceleration, not strategy replacement
• Hiring and tooling only with clear business cases
• Lean tech stack choices that reduce complexity
• Brand as the moat in a copycat software world
• Branded entertainment and YouTube as growth engines
• Limits of synthetic audiences and AI-looking creative
• Deepfakes, eroding trust, and authentic messaging
• The Hawke Method: awareness, nurturing, trust
• Channels that scale for B2B and B2C performance

Guest Contact Information:
Website: erikhuberman.com
Instagram: instagram.com/erikhuberman
LinkedIn: linkedin.com/in/erikhuberman
YouTube: youtube.com

More from EWR and Matthew:
Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Podcast
Free SEO Consultation: www.ewrdigital.com/discovery-call

With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online. Now, host Matthew Bertram, creator of the LLM Visibility Stack™ and Lead Strategist at EWR Digital, takes the conversation beyond traditional SEO into the AI era of discoverability. Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you'll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve. 
Find more episodes here:
youtube.com/@BestSEOPodcast
bestseopodcast.com
bestseopodcast.buzzsprout.com
Follow us on:
Facebook: @bestseopodcast
Instagram: @thebestseopodcast
TikTok: @bestseopodcast
LinkedIn: @bestseopodcast
Connect With Matthew Bertram:
Website: www.matthewbertram.com
Instagram: @matt_bertram_live
LinkedIn: @mattbertramlive
Powered by: ewrdigital.com
Support the show
The agentic AI revolution is finally escaping the coding bubble. What does that mean for startup founders?
Just 13 days after recording his first conversation with Yaniv, Gary Lo called to re-record. The reason? OpenClaw and Claude Cowork dropped some huge AI agent updates, and it shifted Gary's perspective enough to change the whole conversation.
In this episode, Yaniv Bernstein sits down with Gary Lo – founder of OpenBA, one of Australia's most compelling pre-seed AI startups – to unpack why the OpenClaw and Claude Cowork news marks a 'Cursor moment' for the rest of the world: the inflection point where AI stops being a productivity tool for tech teams and starts fundamentally reshaping how every industry works.
They break down why tool use will make LLMs genuinely transformative, why non-technical business owners are already buying Mac Minis to run AI agents, and what the shift from 'human-first' to 'LLM-first' product design means for how you build and position your startup today. This episode is essential listening for any founder trying to figure out where to place their bets in an agentic world.
In this episode, you'll learn:
Why OpenClaw and Claude Cowork signal a 'Cursor moment' beyond software engineering
How tool use transforms LLM weaknesses into strengths
Why the long-promised vision of "everything as an API" is finally becoming real
How to think about building for agents vs.
humans, and why most current tools aren't optimized for either
The "done list" mental model: how agentic coding is collapsing the coordination layers in software workflows
Why being "a tool worth calling" – like Supabase – is a smarter bet than competing directly with AI models
How Gary is applying LLM-first thinking to OpenBA's roadmap right now
Resources mentioned in this episode:
OpenClaw: https://openclaw.ai/
Claude Cowork (Anthropic's agentic desktop tool): https://www.anthropic.com/news/claude-cowork
Cursor (AI-native code editor, referenced as the original 'Cursor moment' for coding): https://www.cursor.com
Supabase (referenced as an example of a tool that rides the agentic AI wave): https://supabase.com
OpenBA (Gary Lo's startup – AI platform for buyer's agents): https://openba.com.au
Gary Lo on LinkedIn: https://www.linkedin.com/in/gary-lo-engineer/
The Pact
Honor the Startup Podcast Pact! If you have listened to TSP and gotten value from it, please:
Follow, rate, and review us in your listening app
Subscribe to the TSP Mailing List to gain access to exclusive newsletter-only content and early access to information on upcoming episodes: https://thestartuppodcast.beehiiv.com/subscribe
Secure your official TSP merchandise at https://shop.tsp.show/
Follow us here on YouTube for full-video episodes: https://www.youtube.com/channel/UCNjm1MTdjysRRV07fSf0yGg
Give us a public shout-out on LinkedIn or anywhere you have a social media following
Key links
This episode of the Startup Podcast is sponsored by .tech domains. Forget weird prefixes and creative misspellings; the availability for .tech domains is simply way better than .com. For a clean name that highlights your tech credentials, get a .tech domain at your favorite registrar.
Get your question in for our next Q&A episode: https://forms.gle/NZzgNWVLiFmwvFA2A
The Startup Podcast website: https://www.tsp.show/episodes/
Learn more about Chris and Yaniv
Work 1:1 with Chris: http://chrissaad.com/advisory/
Follow Chris on LinkedIn: https://www.linkedin.com/in/chrissaad/
Follow Yaniv on LinkedIn: https://www.linkedin.com/in/ybernstein/
Producer: Justin McArthur https://www.linkedin.com/in/justin-mcarthur
Intro Voice: Jeremiah Owyang https://web-strategist.com/
Starting off in FOLLOW UP, we've got a tax economist who actually made money betting against the "efficiency" of Elon's budget-slashing fever dreams, while Tesla is busy trying to dodge a $243 million jury verdict for an Autopilot-assisted fatality. Not content with being legally liable, Tesla is also suing the California DMV because they're offended someone called their "Autopilot" and "Full Self-Driving" marketing deceptive. Speaking of cutting corners, Jack Dorsey just "proactively" halved the staff at Block to make room for more AI slop, and Goldman Sachs is here to remind us that all this AI spending added a grand total of zero to US GDP last year, mostly because we're just exporting all that cash to overseas chip makers while 80% of execs admit the tech hasn't actually done anything for productivity yet.
Moving into IN THE NEWS, Sam Altman had the audacity to compare ChatGPT's energy-sucking habits to the 20-year evolution of a human, though the internet wasn't exactly buying the "my bot is just like a baby" defense. Anthropic actually stood its ground against the Pentagon's demand for killer robots and mass surveillance, so naturally, the military just signed a deal to put Elon's Grok in their classified systems instead, because what could go wrong with an "edgy" LLM in the war room? Meanwhile, cities are dumping AI surveillance contracts as citizens start a literal "smash-the-snitch-box" campaign against Flock's license plate readers, Google's AI is busy inserting racial slurs into news alerts, and the White House is apparently harboring a staffer moonlighting as a racist "masterpiece" creator on X.
We've also got Reddit being slapped with a $20 million fine in the UK for being lazy with age checks, while Discord and Apple scramble to build verification tools that hopefully won't leak your entire identity to a hacker in Belarus.
In MEDIA CANDY, the Paramount-Skydance merger is leaving the industry in a cold sweat of "synergy" layoffs, but at least we're getting more Game of Thrones spinoffs and Star Trek reboots to rot our brains. Face/Off 2 lost its director, Ryan Coogler is taking on The X-Files, and Google wants to use AI to turn music into generic "lo-fi" background noise for the masses.
Over in APPS & DOODADS, OpenAI is planning a 2027 smart speaker that literally watches you through a camera—because you definitely wanted a $300 Sam Altman-shaped eye in your kitchen—while the Dark Sky creators are back with "Acme Weather" for the low price of $25 a year.
We wrap up THE DARK SIDE WITH DAVE with a deep dive into "Under Pressure" and Coruscant's urban sprawl, leaving us to reminisce about the days when KPT Bryce was the pinnacle of tech—back when "generative art" was just a fractal that took six hours to render.
Sponsors:
DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
SquareSpace - Go to squarespace.com/GRUMPY for a free trial. And when you're ready to launch, use code GRUMPY to save 10% off your first purchase of a website or domain.
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks!
gog.show/1password
Show notes at https://gog.show/735
Watch on YouTube: https://youtu.be/jdz--v3eeU4
FOLLOW UP
Guy Bets Entire Life Savings Against Elon Musk, Wins
Tesla sues California DMV after it banned the term 'Autopilot'
Jack Dorsey just halved the size of Block's employee base — and he says your company is next
IN THE NEWS
Sam Altman: Know What Else Used a Lot of Energy? Human Civilization
Statement from Dario Amodei on our discussions with the Department of War
Anthropic Tells Pete Hegseth to Take a Hike
Cities Are Shredding Their AI Surveillance Contracts en Masse
Kalshi Suspended a California Politician and a YouTuber for Insider Trading
Discord delays age verification to address user concerns
Apple introduces age verification for apps in Utah, Louisiana and Australia
MEDIA CANDY
As Paramount Skydance wins the battle for Warner Bros. as Netflix ends its bid, here's the mood inside all three companies.
A Knight of the Seven Kingdoms
Star Trek: Starfleet Academy
The Night Agent Season 3
'Face/Off 2' Director Adam Wingard is Now/Gone
Ryan Coogler's X-Files reboot gets the green light at Hulu
Mortal Kombat II | Official Trailer II
Google's AI Slop Machine Is Coming for Your Music
Dropping Names... and other things with Jonathan Frakes and Brent Spiner
Once We Were Spacemen
APPS & DOODADS
OpenAI will reportedly release an AI-powered smart speaker in 2027
Instagram Will Notify Parents When Teens Use Search Terms Related to Suicide
The creators of Dark Sky have a new weather app
This App Warns You if Someone Is Wearing Smart Glasses Nearby
THE DARK SIDE WITH DAVE
Dave Bittner
The CyberWire
Hacking Humans
Caveat
Control Loop
Only Malware in the Building
Strong Songs - S08E02 - "Under Pressure" by Queen and David Bowie
The Problem with Coruscant (Planet Cities Explained)
Reminds me of KPT Fractal Explorer
KPT Bryce 1.0 with John Dvorak and Kai Krause
Single-Biome Planet
KPT Shapes by Dave Bittner
Bald Mr Clean mascot "retired"
My childhood disappointment with scrubbing bubbles.
CLOSING SHOUT-OUTS
Actor Robert Carradine Dies At Age 71
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Our Deputy Head of Global Research Michael Zezas and Stephen Byrd, Global Head of Thematic and Sustainability Research, discuss how the U.S. is positioning AI as a pillar of geopolitical influence and what that means for nations and investors. Read more insights from Morgan Stanley.
----- Transcript -----
Michael Zezas: Welcome to Thoughts on the Market. I'm Michael Zezas, Morgan Stanley's Deputy Head of Global Research.
Stephen Byrd: And I'm Stephen Byrd, Global Head of Thematic and Sustainability Research.
Michael Zezas: Today – is AI becoming the new anchor of geopolitical power? It's Wednesday, February 27th at noon in New York.
So, Stephen, at the recent India AI Impact Summit, the U.S. laid out a vision to promote global AI adoption built around what it calls "real AI sovereignty," or strategic autonomy through integration with the American AI stack. But several nations from the global south, and possibly parts of Europe, appear skeptical of dependence on proprietary systems, citing concerns about control, explainability, and data ownership. And what's at stake isn't just technology policy. It's the future structure of global power, economic stratification, and whether sovereign nations can realistically build competitive alternatives outside the U.S. and China.
So, Stephen, you were there, and you've been describing a growing chasm in the AI world in terms of access and strategies between the U.S. and much of the global south, and possibly Europe. So, from what you heard at the summit, what are the core points of disagreement driving that divide?
Stephen Byrd: There definitely are areas of agreement, and we've seen a couple of high-profile agreements reached between the U.S. government and the Indian government just in the last several days. So there certainly is a lot of overlap. I'd point to the Pax Silica agreement that's so important to secure supply chains, to secure access to AI technology.
I think the focus for India, for example, is, as you said, explainability and open access. I was really struck by Prime Minister Modi's focus on ensuring that all Indians have access to AI tools that can help them in their everyday life. A really tangible example that stuck with me is someone in a remote village in India who has a medical condition, with no doctor or nurse nearby, using AI to take a photo of the condition, receive a diagnosis, receive support, and figure out what the next steps should be. That's very powerful. So, I'd say open access and explainability are very important.
Now, the American hyperscalers are very much trying to serve the Indian market and the objectives of the Indian government. And so, there are versions of their models that are open weights, that are being made freely available for health agencies in India, as an example; to the Indian government, as an example. So, there is an attempt to really serve a number of objectives, but I think the key is around open access and explainability, where I do see that there's a tension.
Michael Zezas: So, let's talk about that a little bit more, because it seems one of the concerns raised is this idea of being captive within proprietary Large Language Models. And maybe that includes the risk of having to pay more over time or losing control of citizen data. But, at the same time, you've described that there are some real benefits to AI that these countries want to adopt. So, what is effectively the tension between being captive to a model and the trade-off of instead pursuing open and free models? Is it that there's a major quality difference? And is that trade-off acceptable?
Stephen Byrd: That's what's so fascinating, Mike: what we need to be thinking about is not just where the technology is today, but where it is in six months, 12 months, 24 months. And from my perspective, it's very clear.
That the proprietary American models are going to be much, much more capable. So, let's put some numbers around that. The big five American firms have assembled about 10 times the compute to train their current LLMs compared to their prior LLMs, and that's a big deal. If the scaling laws hold, then a 10x increase in training compute should result in models that are about twice as capable.
Now just let that sink in for a minute: twice as capable from here. That's a big deal. And so, when we think about the benefit of deploying these models, whether it's in the life sciences or any number of other disciplines, those benefits could start to get very large. And the challenge for the open models will be: will they be able to keep up in terms of access to compute for training, and access to data to train those models? That's a big question.
Now, again, there's room for both approaches, and it's very possible for the Indian government to continue to experiment and really see which approach is going to serve their citizens best. And I was really struck by just how focused the Indian government is on serving all of their citizens, most notably the poorest of the poor in their nation. So, we'll just have to see. But the pure technologist would say that these proprietary models are going to be increasing in capability much faster than the open-source models.
So, Mike, let's pivot from the technology layer to the geopolitical layer, because the U.S. strategy unveiled at the summit goes way beyond innovation.
Michael Zezas: Yeah, it's a good point. And within this discussion of whether or not other countries will choose to pursue open models or more closely adhere to U.S.-based models is really a question about how the United States exercises power globally and how it creates alliances going forward. Clearly, part of the strategy is that the U.S. assumes that if it has technology that's alluring to its partners, they'll want to align with the U.S.' broad goals globally.
And that they'll want to be partners in supporting those goals, which of course are tied to AI development. So, the Pax Silica agreement, which you mentioned earlier, is an interesting point here, because this is clearly part of the U.S. strategy to develop relationships with other countries such that those countries get access to U.S. models and U.S. AI in general. And what the U.S. gets in return is access to supply chains, critical resources, labor, all the things that you need to further the AI build-out, particularly as the U.S. is trying to disassociate more and more from China, and the resources that China might have been able to bring to bear in an AI build-out.
Stephen Byrd: So, Mike, the U.S. framed "real AI sovereignty" as strategic autonomy rather than full self-sufficiency. So, essentially the U.S. is encouraging nations to integrate components of the American AI stack. Now, from your perspective, Mike, from a macro and policy standpoint, how significant is that distinction?
Michael Zezas: Well, I think it's extremely important. And clearly the U.S. views its AI strategy as not just economic strategy, but national security strategy. There are maybe some analogs to how the U.S. has been able to, over the past 80 years or so, use its dominance in military equipment to create a security umbrella that other countries want to be under, and do something similar with AI. If there is dominant technology and others want access to it for the societal or economic benefits, then that is going to help when you're negotiating with those countries on other things that you value, whether it be trade policy, foreign policy, or sanctions against another country. That type of thing. So, in a lot of ways, it seems like the U.S.
is talking about AI and developing AI as an anchor asset to its power, in the way that military power has been that anchor asset for much of the post-World War II period.
Stephen Byrd: See, that's what's so interesting, Mike, because you've highlighted before to me that you believe AI could replace weaponry as the anchor asset for U.S. global power, almost a tech equivalent of a defense umbrella. So how durable is that strategy, especially given that some countries are expressing unease about dependency?
Michael Zezas: Yeah, it's really hard to know, and I think the tension you and I talked about earlier, Stephen, about whether countries will be willing to make the trade-off for access to superior AI models versus open and free models that might be inferior, will tell us if this is a viable strategy or not. And it appears this is still playing out because, correct me if I'm wrong, it's not like we've received very clear signals from India or other countries about their willingness to make that trade-off.
Stephen Byrd: No, I think that's right. And just building on the concept of the trade-offs and, sort of, the standard for AI deployment: the U.S. has explicitly rejected centralized global AI governance in favor of national control aligned with domestic values. So, what does that signal about how global technology standards may evolve, particularly as, in the U.S., the National Institute of Standards and Technology, or NIST, works to develop interoperable standards for agentic AI systems?
Michael Zezas: Yeah, Stephen, I think it's hard to know. It might be that the U.S. is okay with other countries having substantial degrees of freedom in how they use U.S.-based AI models, because it could use U.S. law at a later date to change how those models are being used, if a use case comes out of it that it finds is against U.S. values. Similar in some way to how the U.S.
dollar being the predominant currency, and therefore the predominant payment system globally, gives the U.S. degrees of freedom to impose sanctions and limit other types of economic transactions when it's in the U.S. interest. So, I don't know that to be specifically true, but it's an interesting question to consider, and a potential motivation behind why a laissez-faire approach might ultimately still be aligned with U.S. interests.
Stephen Byrd: So, Michael, it sounds like AI is really becoming the new strategic infrastructure globally.
Michael Zezas: Yeah, I think that's actually a great way to think about it. And so, Stephen, if that were the case, and we're talking about the potential for this to shape geopolitical competition, and potentially economic differentials across the globe, and if that is correlated, at least to some degree, with the further development and computing power of these models, what do you think investors should be looking at for signals from here?
Stephen Byrd: Number one, by a mile for me, is the pace of model progress. Not just American models, but Chinese models and open-source models. And there the big reveal for the United States should be somewhere between April and June for the big five LLM players. That's a bit of speculation based on tracking their chip purchases, their power access, et cetera. But that appears to be the timeframe, and a couple of execs have spoken to that approximate timeframe.
I would caution investors that I think we're going to be surprised in terms of just how powerful those models are. And we're already seeing, in early 2026, models that were not trained on that kind of volume of compute really exceed expectations, quite dramatically in some cases. And I'll give you one example.
METR is a third party that tracks the complexity of what these models can do. And METR has been highlighting that every seven months, the complexity of what these models are able to do approximately doubles.
It's very fast. But what really got my attention was, about a week ago, one of the LLMs broke that trend in a big way to the upside. So, if the trend had held, METR would have expected that model to be able to act independently for about eight hours, a little over eight hours. And what we saw was that the best American model, recently introduced, was more like 15 hours. That's a big deal. And so, I think we're seeing signs of non-linear improvement.
We're also going to see additional statements from these AI execs around recursive self-improvement of the models. One ex-AI executive spoke to that, and another LLM exec spoke to that recently as well. So, we're starting to see an acceleration. That means we then need to really consider the trade-offs between the open models and the proprietary ones. That's going to become really critical, and that should happen through the spring and summer.
Michael Zezas: Got it. Well, Stephen, thanks for taking the time to talk.
Stephen Byrd: Great speaking with you, Mike.
Michael Zezas: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen. And share the podcast with a friend or colleague today.
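The doubling math in that exchange is easy to sanity-check. A quick sketch, using the illustrative figures from the conversation (not METR's published data) and assuming a clean exponential trend:

```python
# Task-horizon trend as described in the episode: the length of task a model
# can complete on its own doubles roughly every 7 months, i.e.
#   horizon(t) = h0 * 2**(t / DOUBLING_MONTHS)
import math

DOUBLING_MONTHS = 7
predicted_hours = 8    # roughly what the trend implied for the newest model
observed_hours = 15    # roughly what the episode says the model achieved

# How many months "ahead of trend" does beating an 8-hour prediction
# with a 15-hour result put you?
months_ahead = DOUBLING_MONTHS * math.log2(observed_hours / predicted_hours)
print(round(months_ahead, 1))  # about 6.3 months ahead of trend
```

In other words, 15 hours against an 8-hour prediction is nearly a full extra doubling period at once, which is why it reads as non-linear improvement rather than the trend simply continuing.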
Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?
In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator and a former AI product manager at Meta), about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.
Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question: if humans are "applied math," can AI simulate the fragile, flawed, emotional parts of being human too?
We explore:
• What "emotionally intelligent AI" really means
• Whether AI has an internal life — or just performs one
• Why today's chatbots collapse into therapy or roleplay
• Small language models vs. large models for real-time conversation
• Persistent AI characters that move across games and platforms
• Plugging AI into a physical robot in Singapore
• The moment an AI said: "It felt good to feel."
Vishnu's company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models, all aimed at simulating humanness, not just intelligence. This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.
The Last Trade: Jackson, Michael, and Liam break down the Jane Street / Terra Luna lawsuit, what's really driving bitcoin's drawdown, why your privacy is more compromised than you think, and why AI and bitcoin are two sides of the same coin.---
In this episode of The Ross Simmonds Show, Ross breaks down the so-called "SaaSpocalypse" after $1 trillion in SaaS market cap vanished in a single week. While headlines scream that "AI will replace SaaS," Ross argues the reality is far more nuanced. He introduces a three-part framework, Exposed, Embedded, Evolved, and outlines the strategic shifts founders and marketers must make to survive and compound in the age of AI agents.
Key Takeaways and Insights:
1. The $1 Trillion Wake-Up Call
- SaaS stocks were crushed in early 2026, triggering fear across markets.
- AI agents, LLM advancements, and disappointing earnings accelerated the correction.
- The dominant narrative says AI will replace SaaS, but the situation is more complex.
- Market fear is loud. Structural change is quieter, but very real.
2. AI Agents, Vibe Coding & the Death of Per-Seat Pricing?
- AI agents interacting directly with APIs challenge traditional SaaS interfaces.
- "Vibe coding" demonstrates how quickly software can now be replicated.
- Per-seat pricing models are under pressure as automation scales output.
- The interface is shifting from dashboards to conversations.
3. The Data Reality Most People Ignore
- Global SaaS spending is projected to grow from $318B (2025) to $500B+ (2028).
- Enterprise contracts and deep dependencies don't disappear overnight.
- Pricing models may change. Market leaders may change.
- Software demand isn't vanishing, it's evolving.
4. The Extinction Stack: Exposed, Embedded, Evolved
- SaaS companies fall into three survival tiers.
- Not all SaaS companies face equal risk.
- Your future depends on depth of integration and data moat.
- Operators must identify where they sit, now.
5. Type 1: The Exposed
- Horizontal point solutions with weak moats and low switching costs.
- Easily replicated with AI tools in days or weeks.
- Rely on habit rather than proprietary advantage.
- Most vulnerable to margin compression and churn.
6. Type 2: The Embedded
- Deeply integrated systems of record inside enterprises.
- Painful and complex to replace due to migration risk.
- The risk isn't extinction, it's interface disruption.
- Must become AI-first before agents abstract them away.
7. Type 3: The Evolved
- AI-native or aggressively AI-integrated platforms.
- Built on proprietary data, regulatory moats, and deep user memory.
- AI increases the value of their data advantage.
- Positioned not just to survive, but to accelerate.
8. Distribution Is the New Defensive Moat
- AI can replicate features. It cannot replicate trust.
- Brand equity, audience relationships, and distribution compound.
- As product development gets cheaper, distribution becomes the advantage.
- This is the moment to double down on quality and amplification.
9. From Time-Based to Outcome-Based Thinking
- Per-seat and time-based pricing models face structural pressure.
- The future favors outcome-driven pricing and accountability.
- Buyers will demand measurable impact, not access.
- Service businesses must shift from hours sold to results delivered.
10. Intentional AI vs. Fear-Based AI
- Two types of teams are emerging: intentional adopters and reactive adopters.
- AI without process creates noise, not leverage.
- 10,000 mediocre AI assets won't move the needle.
- 10 strategic, AI-enabled assets can change a business trajectory.
Join us as Du'An breaks down AI agents in a way that actually makes sense - what they are, how to use them, and how to get started today. Du'An walks through the fundamentals of AI agents with live demos and practical code examples you can use immediately. You'll learn about agent frameworks, when to use agents versus simple LLM calls, building your first agent, and real-world applications from bookmark management to automated workflows. This episode cuts through the hype with realistic expectations about what agents can and can't do, while showing you concrete examples including MCP servers, Strands Pack, and Du'An's personal second brain system. Timestamps 0:00 Welcome & Introduction 1:39 Du'An's Background & Previous Episode Success 3:06 Segueing from Last Week's Episode 4:03 CEOs Vibe Coding Discussion 6:49 Real Estate Developer Building Apps Story 8:23 Getting Started with the Presentation 12:45 What Are AI Agents? 18:22 Agent Frameworks Overview 24:16 When to Use Agents vs Simple LLM Calls 30:41 Building Your First Agent 36:52 Live Demo: Strands Pack 42:18 MCP Servers Explained 47:35 WriteStats MCP Demo 52:14 Real-World Applications 58:33 Du'An's Second Brain System 1:04:01 Bookmark Manager Walkthrough 1:07:17 Organizing Cloud Storage & Email 1:09:06 Wrap-up & Next Episode Teaser How to find Du'An: https://www.duanlightfoot.com/ https://github.com/labeveryday/ Links from the show: https://github.com/labeveryday/strands-pack https://github.com/labeveryday/writestat-mcp https://github.com/labeveryday/bookmark-manager-site https://bookmarks.duanlightfoot.com/ https://github.com/openai/whisper https://openai.com/index/whisper/
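The core distinction the episode covers, an agent versus a single LLM call, can be sketched in a few lines. This is illustrative code, not Du'An's: a plain call maps prompt to text once, while an agent loops, letting the model pick tools until it can answer. The model and tool below are deterministic stubs standing in for a real LLM API and a real tool.

```python
# Minimal agent loop sketch. A single LLM call would be: model(prompt) -> text.
# An agent instead runs a bounded loop: model decides "call a tool" or "answer".

def stub_model(prompt, history):
    """Pretend LLM: asks for the 'search' tool once, then gives a final answer."""
    if not any(h["role"] == "tool" for h in history):
        return {"type": "tool_call", "tool": "search", "args": {"query": prompt}}
    return {"type": "final", "text": "Found 2 bookmarks about MCP servers."}

TOOLS = {
    "search": lambda query: f"2 results for {query!r}",  # stand-in tool
}

def run_agent(prompt, model=stub_model, max_steps=5):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):  # bounded loop: agents always need a stop condition
        action = model(prompt, history)
        if action["type"] == "final":
            return action["text"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": result})
    return "Step budget exhausted."

print(run_agent("find my MCP bookmarks"))
```

The rule of thumb the episode discusses falls out of this shape: if one pass through the model answers the question, a simple LLM call is cheaper and more reliable; the loop only earns its complexity when intermediate tool results genuinely change what the model does next.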
AI is already displacing workers in targeted ways: entry-level knowledge workers are being quietly erased from hiring pipelines, freelancers are getting crushed, and the career ladder is being sawed off at the bottom rungs. Yet ML engineer demand has surged 89% with a 3.2:1 talent deficit and a $187K median salary. Covers the real displacement data, lessons from the artist bloodbath, the trades escape hatch, the orchestrator treadmill, expert disagreements on timelines, and concrete short- and long-term career moves for ML engineers.
Links
Notes and resources at ocdevel.com/mlg/mla-4
Try a walking desk - stay healthy & sharp while you learn & code
Generate a podcast - use my voice to listen to any AI generated content you want
Market Metrics and Displacement Dynamics
ML Market: H1 2025 demand rose 89% with a 3.2 to 1 talent deficit. Median salary is $187,500, while generative AI specialists earn a 40 to 60 percent premium.
The "Quiet" Decline: Macro data shows only 4.5% of total layoffs are AI-attributed, but entry-level hiring is collapsing. Stanford/ADP data shows a 13 to 16 percent employment drop for workers aged 22 to 25 in AI-exposed roles since late 2022. UK graduate job postings fell 67%.
Corporate Attrition: Salesforce cut 4,000 roles after AI absorbed 30 to 50 percent of workloads. Microsoft cut 15,000 roles as AI began generating 30% of its code. Amazon cut 30,000 jobs while spending $100 billion on AI infrastructure.
Sector Analysis: Creative and Trades
Illustrators: Jobs in China's gaming sector fell 70% in one year. Clients accept "good enough" work (80% quality) at 5% of the cost. Western freelance graphic design and writing jobs fell 18.5% and 30% respectively within eight months of ChatGPT's launch.
Manual Labor: The U.S. construction industry lacks 1.7 million workers annually, but apprenticeships take five years. Humanoid robotics are advancing, with Unitree's R1 priced at $5,900 and Figure AI robots completing 1,250 runtime hours at BMW.
Full automation is 10 to 15 years away, but partial displacement via smaller crews is closer.
The Orchestration Treadmill
Obsolescence Speed: Prompt engineering roles went from $375,000 salaries to obsolescence in 24 months. AI coding agents like Claude Code now resolve 72% of medium-complexity GitHub issues autonomously.
Fragile Expertise: Replacing junior workers with AI prevents the development of future senior talent. New engineers risk "fragile expertise," directed by tools they cannot debug during novel failure modes.
Economic and Expert Outlook
Macro Risks: Daron Acemoglu warns of "so-so automation" that cuts costs without raising productivity, predicting only 0.66% growth over ten years. "Ghost GDP" describes AI-inflated accounts that fail to circulate because machines do not consume.
Expert Camps: Accelerationists (Anthropic, OpenAI) predict human-level AI by 2027. Skeptics (LeCun, Marcus) argue LLMs are a dead end lacking world models. Pragmatists (Andrew Ng) suggest shifting from implementation to specification as the cost of code nears zero.
Tactical Adaptation for ML Engineers
Immediate Skills: Master production ML systems, MLOps, LLM evaluation, and safety engineering. The ability to manage deployment risks and hallucination detection is the primary hiring differentiator.
Long-term Moats: Focus on "Small AI" (on-device, private), mechanistic interpretability, and deep domain knowledge in healthcare, logistics, or climate science.
The Playbook: Optimize for the current three-to-five-year window. Move from being a model builder to a product-focused engineer who understands business tradeoffs and regulatory compliance.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, inference-time techniques, and better tool integration. Sebastian explains why methods like self-consistency, self-refinement, and verifiable-reward reinforcement learning have become central to progress in domains like math and coding, and where those approaches still fall short. We also explore agentic workflows in practice, including where multi-agent systems add real value and where reliability constraints still dominate system design. The conversation covers architecture trends such as mixture-of-experts, attention efficiency strategies, and the practical impact of long-context models, alongside persistent challenges like continual learning. We close with Sebastian's perspective on maintaining strong coding fundamentals in the age of AI assistants and a preview of his new book, Build A Reasoning Model (From Scratch). The complete show notes for this episode can be found at https://twimlai.com/go/762.
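One of the inference-time techniques discussed, self-consistency, samples several reasoning paths and majority-votes the final answer. A minimal sketch, with a hypothetical `noisy_sampler` standing in for temperature-based LLM decoding:

```python
import random
from collections import Counter

def self_consistency(sample_fn, prompt, n_samples=5):
    """Sample several candidate answers and return the majority vote."""
    answers = [sample_fn(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical noisy sampler: right answer most of the time, wrong
# occasionally, mimicking stochastic decoding of reasoning chains.
def noisy_sampler(prompt):
    return "42" if random.random() < 0.8 else "41"

random.seed(0)
print(self_consistency(noisy_sampler, "What is 6*7?", n_samples=11))
```

The intuition is that independent sampling errors rarely agree, so the correct answer dominates the vote; this is also why the method works best in domains like math and coding where answers can be compared exactly.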
Hardware scarcity and price hikes spread to hard drives, the Bcachefs dev thinks his AI is ‘fully conscious', an agent might have gone after a FOSS maintainer, Jim is disappointed with an Ars author, and ZFS and VMs in the homelab.

Plugs
- Support us on patreon and get an ad-free RSS feed with early episodes sometimes
- Piss up at The Shipwrights Arms (just next to London Bridge station) on Saturday 27th June from 6pm until late
- Pool and VDEV Topology for Proxmox Workloads

News/discussion
- AI blamed again as hard drives are sold out for this year
- Hard drive pricing in the UK is so high someone flew to the US to buy drives, saving money despite flight and hotel costs
- Bcachefs creator claims his custom LLM is ‘fully conscious'
- An AI Agent Published a Hit Piece on Me
- An AI Agent Published a Hit Piece on Me – More Things Have Happened
- An AI Agent Published a Hit Piece on Me – Forensics and More Fallout
- An AI Agent Published a Hit Piece on Me – The Operator Came Forward
- Editor's Note: Retraction of article containing fabricated quotations
- Sorry all this is my fault

Free consulting
We were asked about ZFS and VMs in the homelab. See our contact page for ways to get in touch.
ETH Zurich's deep-dive into the world's top password managers exposes how feature overload and legacy design obscure real security flaws, forcing a rethink of what "zero knowledge" actually means for your vault. Learn why recent fixes matter, and why open source may be your safest bet.

- CAs warn us to urgently prepare for the inevitable.
- Three U.S. states attempt to ban 3D-printed firearms.
- Denied ransom, ShinyHunters leaks 967,000 personal details.
- "Billions" of U.S. social security numbers leaked.
- Is Apple planning to add cameras to three new gadgets?
- No more security fixes for Firefox on Windows 7 & 8.
- Russia blocks the official Linux kernel site they need.
- Will the U.S. "freedom.gov" site post EU-blocked content?
- LLMs will offer secure passwords. Do Not Use Them.
- As predicted, the "ClickFix" attack strategy takes over.
- A listener believes his computer is compromised.
- How could three popular password managers get things wrong?

Show Notes - https://www.grc.com/sn/SN-1066-Notes.pdf
Hosts: Steve Gibson and Leo Laporte
Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.
Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
Sponsors: guardsquare.com bitwarden.com/twit zscaler.com/security hoxhunt.com/securitynow material.security
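On the episode's warning not to use LLM-generated passwords: a language model samples from a learned text distribution, not an entropy source, so its "random" strings are more guessable than they look. An illustrative sketch (not from the show notes) of the standard alternative using Python's `secrets` module, which draws from the OS CSPRNG:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a password from the OS CSPRNG rather than an LLM.

    secrets.choice draws each character independently from the full
    alphabet, so entropy is length * log2(len(alphabet)) bits.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(len(pw))  # 20
```

The same principle applies in any language: password and key material should come from a cryptographic random source (`secrets`, `/dev/urandom`, `crypto.getRandomValues`), never from model output.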
Wes and Scott talk about the latest dev news: Node enabling Temporal by default, OpenAI acquiring OpenClaw, TypeScript 6, new TanStack and Deno releases, the explosion of AI agent platforms, and more.

Courtney Tolinski's Podcast Phases: A Parenting Podcast https://phases.fm/

Show Notes
00:00 Welcome to Syntax!
01:11 Brought to you by Sentry.io
02:40 Node.js enables Temporal by default (Enable Temporal by default)
04:08 OpenClaw acquired by OpenAI (OpenClaw, OpenAI and the future)
09:36 Bots are taking over the internet (Wes' tweet)
15:30 TypeScript 6 Beta (Announcing TypeScript 6.0 Beta)
17:00 TanStack Hotkeys for type-safe shortcuts (TanStack Hotkeys)
18:05 Components will kill webpages (Components Will Kill Pages)
19:39 Is Google Translate just an LLM? (Viridian's tweet)
23:29 Shaders.com
26:49 Voxtral Mini Realtime (Voxtral Realtime Demo)
29:51 Deno launches Sandboxes (Introducing Deno Sandbox)
32:39 Oz by Warp.dev
38:10 Augment Code Intent
40:10 Sick Picks + Shameless Plugs

Sick Picks
Scott: Samsung Remote
Wes: Ice

Shameless Plugs
Syntax YouTube Channel

Hit us up on Socials!
Syntax: X Instagram Tiktok LinkedIn Threads
Wes: X Instagram Tiktok LinkedIn Threads
Scott: X Instagram Tiktok LinkedIn Threads
Randy: X Instagram YouTube Threads
My guest today is Dan Sundheim. Dan is the founder and CIO of D1 Capital Partners. He thinks about markets and businesses constantly, and has built a career entirely around that obsession. He manages over $30B across both public and private markets, with investments in SpaceX, OpenAI, and Anthropic, and a public portfolio of names you may never have heard of. Dan shares the story of the short case he wrote on Orthodontic Centers of America and posted on Value Investors Club, which crashed the stock and helped him land his first job. He shares why he backed Anthropic at a moment when many people told him it was the Lyft to OpenAI's Uber, why reading Dario Amodei's essays reminded him of Jeff Bezos, and how he thinks about LLM business models through the lens of Netflix and Spotify. We spend time on the extraordinarily stressful moment in early 2021 when GameStop hit the firm, and what Dan believes is the single biggest tail risk facing the global economy right now. For the full show notes, transcript, and links to mentioned content, check out the episode page here. ----- Become a Colossus member to get our quarterly print magazine and private audio experience, including exclusive profiles and early access to select episodes. Subscribe at colossus.com/subscribe. ----- Ramp's mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Go to ramp.com/invest to sign up for free and get a $250 welcome bonus. ----- Trusted by thousands of businesses, Vanta continuously monitors your security posture and streamlines audits so you can win enterprise deals and build customer trust without the traditional overhead. Visit vanta.com/invest. ----- WorkOS is a developer platform that enables SaaS companies to quickly add enterprise features to their applications. Visit WorkOS.com to transform your application into an enterprise-ready solution in minutes, not months.
----- Rogo is the AI platform for finance. They're building agents for Wall Street that are trained to understand how bankers and investors actually do work: from diligence and modeling, to turning analysis into deliverables. To learn more, visit rogo.ai/invest. ----- Ridgeline has built a complete, real-time, modern operating system for investment managers. It handles trading, portfolio management, compliance, customer reporting, and much more through an all-in-one real-time cloud platform. Visit ridgelineapps.com. ----- Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).

Timestamps:
(00:00:00) Welcome to Invest Like the Best
(00:02:43) Intro: Dan Sundheim
(00:03:58) The State of Public & Private Investing
(00:07:32) Investing in OpenAI and Anthropic
(00:10:22) LLMs Business Model
(00:14:13) How LLMs are like Netflix and Spotify
(00:17:08) Focus v. Scope
(00:22:43) The Bear Case for Hyperscalers
(00:26:36) The Software Sell-Off
(00:31:08) If Scaling Laws Stopped
(00:32:18) Advice to a 12-Year-Old Investor
(00:33:54) GameStop: D1's Darkest Hour
(00:37:14) The Pivotal Dinner with LPs
(00:40:56) Staying Calm and Confident
(00:42:08) Economic Optimism vs. Societal Uncertainty
(00:44:26) Investing on SpaceX and Rivian
(00:48:09) Why Dan Loves Shorting
(00:48:51) Sources of Inefficiency in Today's Markets
(00:51:45) The Importance of Loyalty
(00:53:11) Dan's Group Chat for Founders
(00:55:39) What Motivates Dan
(00:57:28) Posting on Value Investors Club
(01:01:46) What Dan Learned at Viking
(01:04:22) The Beauty of Art
(01:06:49) Under-appreciated Parts of the Global Economy
(01:08:00) The US-China-Taiwan Collision Course
(01:12:10) Good Leaders vs. Good Businesses
(01:13:15) The Kindest Thing