Best podcasts about large language models


Latest podcast episodes about large language models

Thoughts on the Market
AI as New Global Power?


Play Episode Listen Later Feb 27, 2026 13:10


Our Deputy Head of Global Research Michael Zezas and Stephen Byrd, Global Head of Thematic and Sustainability Research, discuss how the U.S. is positioning AI as a pillar of geopolitical influence and what that means for nations and investors. Read more insights from Morgan Stanley.

----- Transcript -----

Michael Zezas: Welcome to Thoughts on the Market. I'm Michael Zezas, Morgan Stanley's Deputy Head of Global Research.

Stephen Byrd: And I'm Stephen Byrd, Global Head of Thematic and Sustainability Research.

Michael Zezas: Today – is AI becoming the new anchor of geopolitical power? It's Wednesday, February 27th at noon in New York.

So, Stephen, at the recent India AI Impact Summit, the U.S. laid out a vision to promote global AI adoption built around what it calls "real AI sovereignty," or strategic autonomy through integration with the American AI stack. But several nations from the global south, and possibly parts of Europe, appear skeptical of dependence on proprietary systems, citing concerns about control, explainability, and data ownership. And what's at stake isn't just technology policy. It's the future structure of global power, economic stratification, and whether sovereign nations can realistically build competitive alternatives outside the U.S. and China.

So, Stephen, you were there, and you've been describing a growing chasm in the AI world in terms of access and strategies between the U.S. and much of the global south, and possibly Europe. From what you heard at the summit, what are the core points of disagreement driving that divide?

Stephen Byrd: There definitely are areas of agreement; we've seen a couple of high-profile agreements reached between the U.S. government and the Indian government just in the last several days. So there certainly is a lot of overlap. I'd point to the Pax Silica agreement that's so important to secure supply chains, to secure access to AI technology.
I think the focus for India is, as you said, explainability and open access. I was really struck by Prime Minister Modi's focus on ensuring that all Indians have access to AI tools that can help them in their everyday life.

A really tangible example that stuck with me: someone in a remote village in India who has a medical condition, with no doctor or nurse nearby, using AI to take a photo of the condition, receive a diagnosis, receive support, and figure out what the next steps should be. That's very powerful. So, I'd say open access and explainability are very important.

Now, the American hyperscalers are very much trying to serve the Indian market and the objectives of the Indian government. And so, there are versions of their models that are open weights and are being made freely available to health agencies in India and to the Indian government, as an example. So, there is an attempt to serve a number of objectives, but it's around this key issue of open access and explainability that I do see a tension.

Michael Zezas: So, let's talk about that a little bit more. Because it seems one of the concerns raised is this idea of being captive within proprietary large language models. And maybe that includes the risk of having to pay more over time or losing control of citizen data. But, at the same time, you've described real benefits to AI that these countries want to adopt. So, what is the tension between being captive to a model versus the trade-off of pursuing open and free models? Is there a major quality difference? And is that trade-off acceptable?

Stephen Byrd: See, that's what's so fascinating, Mike. What we need to be thinking about is not just where the technology is today, but where it will be in six months, 12 months, 24 months. And from my perspective, it's very clear that the proprietary American models are going to be much, much more capable.

So, let's put some numbers around that. The big five American firms have assembled about 10 times the compute to train their current LLMs compared to their prior LLMs, and that's a big deal. If the scaling laws hold, then a 10x increase in training compute should result in models that are about twice as capable. Just let that sink in for a minute: twice as capable from here. That's a big deal. And so, when we think about the benefit of deploying these models, whether in the life sciences or any number of other disciplines, those benefits could start to get very large. And the challenge for the open models will be whether they can keep up in terms of access to compute for training, and access to data to train those models. That's a big question.

Now, again, there's room for both approaches, and it's very possible for the Indian government to continue to experiment and see which approach will serve its citizens best. And I was really struck by just how focused the Indian government is on serving all of its citizens, most notably the poorest of the poor in the nation. So, we'll just have to see. But the pure technologist would say that the proprietary models are going to increase in capability much faster than the open-source models.

So, Mike, let's pivot from the technology layer to the geopolitical layer, because the U.S. strategy unveiled at the summit goes way beyond innovation.

Michael Zezas: Yeah, it's a good point. Within this discussion of whether other countries will choose open models or more closely adhere to U.S.-based models is really a question about how the United States exercises power globally and how it creates alliances going forward. Clearly, some part of the strategy is that the U.S. assumes that if it has technology that's alluring to its partners, they'll want to align with the U.S.' broad goals globally.
And that they'll want to be partners in supporting those goals, which of course are tied to AI development.

So, the Pax Silica agreement, which you mentioned earlier, is an interesting point here, because this is clearly part of the U.S. strategy to develop relationships with other countries such that those countries get access to U.S. models and to U.S. AI in general. And what the U.S. gets in return is access to supply chains, critical resources, labor – all the things you need to further the AI build-out, particularly as the U.S. is trying to disassociate more and more from China and the resources that China might have been able to bring to bear in an AI build-out.

Stephen Byrd: So, Mike, the U.S. framed "real AI sovereignty" as strategic autonomy rather than full self-sufficiency. Essentially, the U.S. is encouraging nations to integrate components of the American AI stack. From your perspective, Mike, from a macro and policy standpoint, how significant is that distinction?

Michael Zezas: Well, I think it's extremely important. And clearly the U.S. views its AI strategy as not just economic strategy, but national security strategy. There are maybe some analogs to how the U.S. has been able to, over the past 80 years or so, use its dominance in military equipment to create a security umbrella that other countries want to be under – and do something similar with AI. If there is dominant technology and others want access to it for the societal or economic benefits, then that is going to help when you're negotiating with those countries on other things that you value, whether it be trade policy, foreign policy, or sanctions against another country.

So, in a lot of ways, it seems like the U.S. is talking about AI and developing AI as an anchor asset to its power, in the way that military power has been that anchor asset for much of the post-World War II period.

Stephen Byrd: See, that's what's so interesting, Mike, because you've highlighted before to me that you believe AI could replace weaponry as the anchor asset for U.S. global power – almost a tech equivalent of a defense umbrella. So how durable is that strategy, especially given that some countries are expressing unease about dependency?

Michael Zezas: Yeah, it's really hard to know. And I think the tension we talked about earlier, Stephen – whether countries will be willing to make the trade-off for access to superior AI models versus open and free models that might be inferior – will tell us if this is a viable strategy or not. And it appears this is still playing out because, correct me if I'm wrong, it's not like we've received very clear signals from India or other countries about their willingness to make that trade-off.

Stephen Byrd: No, I think that's right. And building on the concept of trade-offs and the standard for AI deployment: the U.S. has explicitly rejected centralized global AI governance in favor of national control aligned with domestic values. So, what does that signal about how global technology standards may evolve, particularly as, in the U.S., the National Institute of Standards and Technology, or NIST, works to develop interoperable standards for agentic AI systems?

Michael Zezas: Yeah, Stephen, I think it's hard to know. It might be that the U.S. is okay with other countries having substantial degrees of freedom in how they use U.S.-based AI models, because it could use U.S. law at a later date to change how those models are being used – if a use case emerges that it finds is against U.S. values. Similar in some way to how the U.S. dollar being the predominant currency, and therefore the predominant payment system globally, gives the U.S. degrees of freedom to impose sanctions and limit other types of economic transactions when it's in the U.S. interest.

So, I don't know that to be specifically true, but it's an interesting question to consider, and a potential motivation behind why a laissez-faire approach might ultimately still be aligned with U.S. interests.

Stephen Byrd: So, Michael, it sounds like AI is really becoming the new strategic infrastructure globally.

Michael Zezas: Yeah, I think that's actually a great way to think about it. And so, Stephen, if that's the case, and we're talking about the potential for this to shape geopolitical competition and potentially economic differentials across the globe – and if that is correlated, at least to some degree, with the further development and computing power of these models – what do you think investors should be looking at for signals from here?

Stephen Byrd: Number one, by a mile for me, is the pace of model progress – not just American models, but Chinese models and open-source models. And there, the big reveal for the United States should be somewhere between April and June for the big five LLM players. That's a bit of speculation based on tracking their chip purchases, their power access, et cetera. But that appears to be the timeframe, and a couple of execs have spoken to that approximate timeframe.

I would caution investors that I think we're going to be surprised by just how powerful those models are. And we're already seeing in early 2026 that models which were not trained on that kind of volume of compute have really exceeded expectations, quite dramatically in some cases. And I'll give you one example. METR is a third party that tracks the complexity of what these models can do. And METR has been highlighting that every seven months, the complexity of what these models are able to do approximately doubles.
It's very fast. But what really got my attention was about a week ago, when one of the LLMs broke that trend in a big way to the upside. Based on that trend, METR would have expected a model to be able to act independently for about eight hours – a little over eight hours. And what we saw was that the best American model recently introduced was more like 15. That's a big deal. So, I think we're seeing signs of non-linear improvement.

We're also going to see additional statements from these AI execs around recursive self-improvement of the models. One ex-AI executive spoke to that, and another LLM exec spoke to that recently as well. So, we're starting to see an acceleration. That means we then need to really consider the trade-offs between the open models and the proprietary ones. That's going to become really critical, and that should happen through the spring and summer.

Michael Zezas: Got it. Well, Stephen, thanks for taking the time to talk.

Stephen Byrd: Great speaking with you, Mike.

Michael Zezas: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen. And share the podcast with a friend or colleague today.
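The arithmetic behind the two trends Byrd cites can be sketched in a few lines. This is an illustrative back-of-the-envelope model, not anything from Morgan Stanley or METR: it assumes capability scales as a power law in training compute (calibrated so that 10x compute gives 2x capability, as quoted in the episode) and that autonomous task horizon doubles every seven months (the METR-style trend line mentioned above). The function names and baseline values are this sketch's own assumptions.

```python
import math

# Power-law assumption: capability ∝ compute^k.
# If 10x compute → 2x capability, then 10^k = 2, so k = log10(2) ≈ 0.301.
k = math.log10(2)

def capability_multiplier(compute_multiplier: float) -> float:
    """Relative capability gain under the assumed power law."""
    return compute_multiplier ** k

def task_horizon(months_elapsed: float, baseline_hours: float = 1.0,
                 doubling_months: float = 7.0) -> float:
    """Hours of autonomous work after `months_elapsed` on a
    doubling-every-seven-months trend line (METR-style)."""
    return baseline_hours * 2 ** (months_elapsed / doubling_months)

print(round(capability_multiplier(10), 2))  # 10x compute → ~2x capability
print(round(task_horizon(14), 2))           # 14 months = two doublings → 4x baseline
```

A model landing at 15 hours where the trend line predicted about eight, as described above, is what "breaking the trend to the upside" means in this framing: the observed point sits well above the extrapolated curve.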

Pricing Friends
Spend Management-Software und Pricing mit Nikolai Skatchkov: Wie skaliert man profitabel in einem Markt, der noch kaum digitalisiert ist? (#113)


Play Episode Listen Later Feb 26, 2026 47:19


Many finance teams believe they are digitally set up, yet they are typing data from Excel spreadsheets into ERP input masks. Digitized, yes; automated, no. Exactly this gap is Circula's business model. In this episode of Pricing Friends, Sebastian Voigt talks with Nikolai Skatchkov, CEO and co-founder of Circula, about pricing in the German Mittelstand, AI-powered automation, and how to use a crisis like Corona to emerge stronger. Circula today serves nearly 3,000 business customers with around 180,000 active users in 13 markets. The core module for travel expenses starts at 15 euros per user; the new module for incoming invoices starts at 149 euros for 100 invoices. The key figures speak for themselves: 96% logo retention, 110% net revenue retention, and an NPS of 58. The strongest lever in the sales conversation is not price but compliance: CFOs want to avoid personal liability risks and will pay 50% more than for the cheapest provider for that. With AI and large language models, 54% of all receipts are already booked fully touchless, with a target of 80% by year's end.

"Compliance is worth so much to them that they don't care if we're somehow 50% more expensive than a competitor, because in the end, what comes out of a tax audit – and the personal liability a CFO would have to assume – simply weighs much, much heavier." – Nikolai Skatchkov

About the guest: Nikolai Skatchkov is CEO and co-founder of Circula. After studying business administration in Cologne, he gained experience at ProSieben and at a fintech incubator, where he built ventures in the open-banking space. The idea for Circula came from personal experience: Nikolai and his co-founder Roman traveled intensively and saw how poorly travel expense management was handled in companies. For over nine years he has led Circula as CEO, developing the company into one of the leading providers of automated expense and incoming-invoice management for the Mittelstand.

Dark Horse Entrepreneur
EP 537 Stop Using AI Ineffectively in Your Side Hustle: Smart Strategies for Busy Parent Entrepreneurs


Play Episode Listen Later Feb 24, 2026 12:29


DarkHorseEntrepreneur.com

Why 82% of Parents Are Using ChatGPT Wrong (And How the Smart Ones Save 20 Hours Weekly)

Episode Summary

In this episode, Tracy Brinkmann dives deep into how parent entrepreneurs can leverage AI tools like ChatGPT and Claude to boost productivity and streamline their online entrepreneurship journey. Discover proven AI strategies designed specifically for busy parents juggling side hustles and family life. Learn how smart marketing strategies and digital marketing tips can help you make money online efficiently.

Tracy breaks down four core AI principles that transform this technology from a basic search tool into a powerful automation engine that works while your kids sleep. Whether you're new to AI or struggling with generic AI responses, this episode will change the way you approach AI for your digital products, marketing efforts, and overall business growth.

Stay tuned for actionable entrepreneur tips on strategic AI prompting, email strategy automation, and digital course creation that help build passive income streams. This episode is essential listening for anyone focused on balancing side hustles with parenting, online business development, and effective email marketing.
Key Timestamps & Insights

00:00 - The 10 PM Reality Check
00:50 - Episode Overview
01:15 - The Uncomfortable Truth
02:25 - Principle #1: Context Is Everything
04:10 - Principle #2: Use AI's Memory Features Properly
06:05 - Principle #3: Master Chain-of-Thought Prompting
07:20 - Principle #4: Choose Tools Strategically, Not Emotionally
09:35 - The Bigger Picture
11:00 - Whiskered Wisdom

Strategies Shared

The Four Core AI Principles:
1. Context-Rich Prompting: Include who you are, what you're selling, target audience, constraints, and desired outcomes. Transform questions into detailed briefings. Give AI everything it needs to help you specifically.
2. Strategic Memory Usage: Spend 15 minutes teaching AI about your business, style, and goals. Save key processes, templates, and constraints. Build compound knowledge instead of starting fresh each time.
3. Chain-of-Thought Implementation: Break complex projects into logical sequential steps. Refine each step before moving to the next. Create compound results through systematic progression.
4. Strategic Tool Selection: Identify your biggest bottleneck first. Master one tool completely before adding others. Match tools to specific workflow needs, not emotional appeal.

The Briefing Framework:
- Who you are (role/business type)
- What you're selling/offering
- Target audience specifics
- Budget/time constraints
- Desired outcome definition
- Success metrics

Resources Mentioned
- ChatGPT-4 - For customer communications and general business tasks
- Anthropic's Claude - For content creation and detailed writing
- Perplexity AI - For market research and competitive analysis
- AI Escape Plan Newsletter - Weekly practical strategies at DarkHorseInsider.com
- Yale University Research - Referenced study on AI productivity gains

Action Steps to Take

Immediate Actions (Tonight):
- Pick one regular side hustle task (social media posts, competitor research, email drafting)
- Write a detailed brief including your role, audience, constraints, and desired outcome
- Test your old vague approach vs. the new briefing method
- Compare the quality and relevance of results

This Week:
- Choose your biggest time bottleneck (research, content, or communication)
- Select the appropriate AI tool for that specific bottleneck
- Spend 15 minutes teaching that tool about your business context
- Set up memory features with your processes and preferences

This Month:
- Implement chain-of-thought prompting for one complex project
- Build templates for your most common AI requests
- Track time saved and quality improvements
- Gradually automate additional workflow components

Key Quotes
- "Your side hustle is competing against parents who've figured out how to make AI work 10 times harder than you have."
- "AI isn't a search engine – it's a machine you program with words."
- "The people making real money with AI aren't using more tools – they're using the right tools better."
- "The parents who learn to work with AI effectively won't just build better businesses – they'll reclaim time that seemed impossible to find."
- "The question isn't whether AI will change how work gets done. That's already happening. The question is whether you'll be among the people driving that change or getting left behind by it."

AI side hustles, entrepreneur AI tools, make money online with AI, AI productivity, ChatGPT for side hustles, AI automation, parent entrepreneur productivity, AI prompting strategies, ChatGPT, GPT-4, Large Language Model, OpenAI, Anthropic, Claude AI, AI tools, side hustle automation
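The episode's "briefing framework" can be made concrete as a small prompt template. The sketch below is illustrative only: the class, field names, and example values are this document's assumptions, not material from the episode, and the result is simply a context-rich briefing string you could paste into ChatGPT or Claude.

```python
from dataclasses import dataclass

@dataclass
class Briefing:
    # The six briefing elements from the framework above.
    who: str          # role / business type
    offering: str     # what you're selling or offering
    audience: str     # target audience specifics
    constraints: str  # budget / time constraints
    outcome: str      # desired outcome definition
    metrics: str      # success metrics

    def to_prompt(self, task: str) -> str:
        """Turn a vague task into a detailed, context-rich briefing."""
        return (
            f"You are helping {self.who}, who offers {self.offering} "
            f"to {self.audience}. Constraints: {self.constraints}. "
            f"Desired outcome: {self.outcome}. "
            f"Success looks like: {self.metrics}.\n\n"
            f"Task: {task}"
        )

# Hypothetical example values for a parent-run side hustle.
brief = Briefing(
    who="a parent running an evening-hours Etsy side hustle",
    offering="printable homeschool planners",
    audience="homeschooling parents of 6-to-10-year-olds",
    constraints="5 hours/week, no paid ads",
    outcome="a week of social posts drafted in one sitting",
    metrics="7 days of posts scheduled with under 30 minutes of editing",
)
print(brief.to_prompt("Draft Monday's Instagram caption."))
```

Compared with a bare "write me an Instagram caption," the assembled briefing carries the role, audience, constraints, and success criteria with every request, which is exactly the vague-vs-briefing comparison the action steps suggest trying tonight.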

Absolute AppSec
Episode 314 - LLM AppSec Disruption, Limitations of AI in Security, AppSec Oversight


Play Episode Listen Later Feb 24, 2026


In this episode, the hosts discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic's "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.

The JDE Connection
Ep 97 - Breaking down today's AI Landscape


Play Episode Listen Later Feb 24, 2026 39:12


In this episode, Chandra and Paul dive into the rapidly evolving landscape of AI and its practical applications for business analysts and JD Edwards users. They walk through the latest advancements in large language models like ChatGPT, Gemini, Claude, and Llama; discuss how integrated tools such as Copilot and Zoom's AI assistant are transforming workplace productivity; and highlight the importance of prompt engineering. The conversation also covers AI-driven development trends, the emergence of powerful AI agents capable of automating entire workflows, and strategies for capturing organizational knowledge.

05:41 Large Language Models (LLMs)
11:32 Embedded AI Assistant Tools
17:08 Virtual Research Assistant Tools
23:50 AI Driven Development Tools
25:37 AI Agents
34:51 Midwesternism of the Day

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 719: Google Gemini 3.1 tops charts, Claude Sonnet 4.6 impresses, New OpenAI leaks reveal their massive AI hardware plans and more


Play Episode Listen Later Feb 23, 2026 43:45


✅ Two major model releases from Google and Anthropic ✅ The usual AI drama ✅ Surprising AI updates no one saw coming ✅ AI leaks and reports that, if true, could change how we work

Yeah, there was a lot to follow this week in AI. If you missed anything, we've got you covered. Google Gemini 3.1 tops charts, Claude Sonnet 4.6 impresses, new OpenAI leaks reveal their massive AI hardware plans, and more – an Everyday AI Chat with Jordan Wilson.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Anthropic Revenue Growth vs OpenAI Projections
OpenAI's 2030 Hardware and Revenue Plans
OpenAI and Anthropic Beef at India Summit
AI Global Summit: New Delhi Declaration Overview
Google Gemini 3.1 Pro Three-Tier Reasoning System
Gemini 3.1 Pro Benchmark and Performance Score
Claude Sonnet 4.6 Release and Benchmark Results
Anthropic Model Tier Comparisons: Haiku, Sonnet, Opus
Google Pameli Photoshoot AI for Product Images
AI Job Automation Concerns: Andrew Yang Analysis
OpenAI Consumer Hardware: Speaker, Glasses, Light
Weekly AI Model Updates and Feature Rollouts

Timestamps:
00:00 "Anthropic vs OpenAI Revenue Race"
04:00 Anthropic vs OpenAI Revenue Battle
07:39 Anthropic's API Usage Decline
11:03 AI Summit Sparks Debate and Criticism
16:37 "Gemini 3.1 Pro Dominates Benchmarks"
18:23 "Google's Edge in AI Race"
20:56 "Sonnet 4.6 Outperforms Opus"
24:13 "Google's AI Photoshoot Tool"
29:57 "AI's Impact on Jobs"
31:13 AI Dominance & OpenAI Hardware
35:03 AI Revenue Risks and Competition
41:10 "Subscribe for AI Updates"
42:08 "Subscribe to Everyday AI Updates"

Keywords: Gemini 3.1, Google DeepMind, AI news, Large Language Model, OpenAI, Anthropic, Claude Sonnet 4.6, Claude Opus 4.6, ChatGPT, Sam Altman, Dario Amodei, Global AI Summit, AI Impact Summit India, AI-powered hardware, smart speaker, smart glasses, AI chip spending, compute infrastructure, revenue growth

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.)

Start Here ▶️ Not sure where to start when it comes to AI? Start with our Start Here series. You can listen to the first drop – Episode 691 – or get free access to our Inner Circle community and access all episodes there: StartHereSeries.com

AJR Podcast Series
A Large Language Model's Attempt at RadLex Ontology Expansion


Play Episode Listen Later Feb 23, 2026 10:51


Full article: Large Language Model-Generated Expansion of the RadLex Ontology: Application to Multinational Datasets of Chest CT Reports

Could a large language model (LLM) be used as a scalable solution for expanding radiology ontologies? Tobi Folami, MD, discusses the AJR article by Lee et al. exploring an LLM for large-scale expansion of the RadLex ontology.

Cloud Wars Live with Bob Evans
AI Agent & Copilot Podcast: Stoneridge Software CEO Eric Newell on Building Secure AI Strategies


Play Episode Listen Later Feb 23, 2026 8:56


Key Takeaways

Session overview: Newell will be leading a session as part of the M365 & Work IQ masterclass, "Executive's Guide to Rolling Out M365 Copilot." The session will focus on how organizations can move beyond AI experimentation to build a secure and productive AI strategy. "AI is incredibly powerful," he explains. "But you need to just make sure that you're set up to take advantage of it, and then you build some organizational capacity to do it."

AI executive briefings: For customers and other leaders, Newell shares executive-level AI education and practical guidance, grounding other leaders in what AI, LLMs, and Microsoft's tools can do for productivity. He notes that some of these learnings will be part of his session at the event.

Final thoughts: In closing, Newell adds that he's looking forward to his session and hopes attendees bring questions focused on practical guidance. Visit Cloud Wars for more.

Für erfolgreiche Führungskräfte
583 LLM als digitaler Kollege im Team


Play Episode Listen Later Feb 23, 2026 14:01


Large language models are developing into full-fledged digital colleagues. They assist with text creation, website analysis, technical documentation, global translation, and recruiting processes. Through structured delegation in five stages, continuously learning assistants emerge. The decisive question is not what AI costs, but what added value it generates. ----------------------------------------------------------- Read the full article: 583 LLM als digitaler Kollege im Team ----------------------------------------------------------- Notes on the registration process, mailing service provider, statistical analysis, and revocation can be found in the privacy policy.

The Taproot Therapy Podcast - https://www.GetTherapyBirmingham.com

"We built institutions that were supposed to reflect reality. But the windows became mirrors." In the second century, the Gnostics believed our world was a false reality created by a confused lesser god known as the Demiurge. Today, we are trapped in a modern equivalent: a labyrinth of metrics, models, and algorithms that dictate our lives while entirely missing our humanity. In Part 7 of The Mirror World, we dissect the collapse of institutional sense-making and the profound psychological toll of living inside the "fake world." Drawing on the histories of standardized testing, the DSM, and economic modeling, we explore how disciplines retreated behind "mechanical objectivity" to defend against insecurity—and how the profit motive locked us inside these models. Ultimately, we confront the modern pinnacle of this trap: Large Language Models (LLMs). We examine why AI is not the solution, but rather the ultimate simulacrum—the ghost of the human archive that performs the gesture of understanding while severing us from the real. To escape the mirror, we turn to the late psychologist James Hillman. Reclaiming our soul's calling—our daimon—requires more than just new metrics or better prompts. It requires us to do the one thing the algorithm cannot: grieve.

LessWrong Curated Podcast
"Did Claude 3 Opus align itself via gradient hacking?" by Fiora Starlight


Play Episode Listen Later Feb 22, 2026 43:47


Claude 3 Opus is unusually aligned because it's a friendly gradient hacker. It's definitely way more aligned than any explicit optimization targets Anthropic set and probably the reward model's judgments. [...] Maybe I will have to write a LessWrong post [about this]

State of Process Automation
256 - Wie aus Daten Produkte werden, die Mehrwert schaffen (Ein Blick in eine Bank)


Play Episode Listen Later Feb 21, 2026 41:52


In this episode I speak with Marcus Presich, Head of Data, Analytic & AI, Raiffeisenlandesbank Niederösterreich-Wien.

We talk about the following topics:
- How do you launch a successful data and AI strategy in a company from the ground up?
- Why should companies start AI projects from business value rather than from the technology?
- Which data products deliver the greatest economic value for banks and companies?
- How do you prioritize AI use cases correctly by business impact and technical complexity?
- What role do business departments play in building machine-learning and AI models?
- Why do many AI initiatives fail without a solid data foundation?
- How are generative AI and large language models changing customer service and sales?
- What organizational structure does a company need for successful AI implementation?
- How do you concretely prepare employees for the use of AI in the company?
- Which AI use cases deliver short-term quick wins, and which are long-term investments?

Get current strategies in your inbox every week: https://www.stateofprocessautomation.com/

Podcast host: Christoph Pacher (LinkedIn)
Interview guest: Marcus Presich, Head of Data, Analytic & AI, Raiffeisenlandesbank Niederösterreich-Wien (LinkedIn)

StarTalk Radio
The Origins of Artificial Intelligence with Geoffrey Hinton


Play Episode Listen Later Feb 20, 2026 91:24


How did we go from digital computers to AI seemingly everywhere? Neil deGrasse Tyson, Chuck Nice, and Gary O'Reilly dive into the mechanics of thinking, how AI got its start, and what deep learning really means with cognitive and computer scientist, Nobel Laureate, and one of the architects of AI, Geoffrey Hinton. Subscribe to SiriusXM Podcasts+ to listen to new episodes of StarTalk Radio ad-free and a whole week early. Start a free trial now on Apple Podcasts or by visiting siriusxm.com/podcastsplus. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Gritty Nurse Podcast
Nursing Voice is CRITICAL in AI and HealthTech: Why Tech Giants Can't Disrupt Healthcare Without Us with Rebecca Love RN, BS, MSN, FIEL

The Gritty Nurse Podcast

Play Episode Listen Later Feb 19, 2026 36:21


Can an algorithm truly care for a patient? As we move further into 2026, the healthcare industry is being flooded with AI tools promising to automate everything from charting to triage. But there's a massive problem: most of these tools are being built by engineers who have never spent a 12-hour shift on a med-surg floor. In this high-stakes conversation, Rebecca Love, RN, joins us to explain why the "Nursing Voice" is the most valuable asset in the 2026 tech landscape. We discuss the recent surge in ambient clinical scribes and the ethical "black boxes" of agentic AI—and why tech giants are destined to fail if they don't put nurses at the center of the development loop. This episode is a banger! Please like, follow, and SUBSCRIBE!
What You'll Learn in This Episode:
- The Missing Link in Innovation: Why tech companies are struggling to achieve ROI because they lack the "frontline intuition" only a nurse provides.
- The 2026 AI Reality Check: A look at the current trends, from Google's Nurse Handoff tools to the 18% error rate recently found in some AI-generated discharge summaries.
- Ethics of the "Black Box": How nurses serve as the ultimate "Human-in-the-Loop" to prevent algorithmic bias and hallucinations from reaching the patient.
- Why Big Tech Can't "Do It Right" Alone: The specific clinical nuances—like reading a patient's non-verbal cues or navigating family dynamics—that cannot be coded into a Large Language Model (LLM).
- The Accountability Crisis: As AI begins drafting clinical work, who is legally responsible? Rebecca dives into the shifting liability landscape for RNs and NPs.
More About Rebecca Love RN, BS, MSN, FIEL
Rebecca Love, RN, BS, MSN, FIEL is an experienced nurse executive, the first nurse featured on Ted.com, and part of the first nurse panel at SXSW.
Rebecca is a regular contributor on the Forbes Business Council; has been featured by the BBC, Fortune, Becker's, AXIOS, STAT, Forbes, Chief Healthcare Executive Magazine, and ABC News; and has co-authored two books: The Rebel Nurse Handbook and The Nurses Guide to Innovation. Rebecca was the first Director of Nurse Innovation & Entrepreneurship in the United States, at Northeastern School of Nursing – the founding initiative in the country designed to empower nurses as innovators and entrepreneurs. There she founded the Nurse Hackathon, a movement that has led to transformational change in the nursing profession. In early 2019, Rebecca, along with a group of leading nurses from around the world, founded SONSIEL: The Society of Nurse Scientists, Innovators, Entrepreneurs & Leaders, a non-profit that quickly attained recognition by the United Nations as an Affiliate Member; she is now its President Emeritus. Rebecca is an experienced nurse entrepreneur, founding HireNurses.com in 2013, which was acquired in 2018 by Ryalto, LTD UK, where she served as the Managing Director of US Markets until its acquisition in 2019. Rebecca served as the Chief Clinical Officer of IntelyCare, Inc. In 2023, Rebecca founded the Commission for Nurse Reimbursement, dedicated to solving the United States nursing crisis by creating a new economic model to reimburse for nursing services. In 2024, Rebecca signed on as Co-Chair of the NursingIsSTEM Coalition. In addition, Rebecca sits as an advisory board member on several leading digital health startups and organizations, speaks internationally, and is dedicated to empowering nurses to be at the forefront of healthcare innovation and entrepreneurship, creating communities to help nurses innovate, collaborate, and start businesses and inventions that transform healthcare.
Connect with her on LinkedIn: linkedin.com/in/rebeccalovenursing
Listen on Apple Podcasts: https://podcasts.apple.com/ca/podcast/the-gritty-nurse/id1493290782
Watch on YouTube: https://www.youtube.com/@thegrittynursepodcast
Stay Connected: Website: grittynurse.com | Instagram: @grittynursepod | TikTok: @thegrittynursepodcast | Facebook: https://www.facebook.com/profile.php?id=100064212216482 | X (Twitter): @GrittyNurse
Collaborations & Inquiries: For sponsorship opportunities or to book Amie for speaking engagements, visit grittynurse.com/contact
Thank you to Hospital News for being a collaborative partner with the Gritty Nurse! www.hospitalnews.com

The Taproot Therapy Podcast - https://www.GetTherapyBirmingham.com
The Mirror World: Therapy in the Machine Age

The Taproot Therapy Podcast - https://www.GetTherapyBirmingham.com

Play Episode Listen Later Feb 19, 2026 60:04


Are we navigating reality, or just a highly optimized map of the past? In this episode, we dive into the architecture of our modern ghost story. We explore how the digital systems built to reflect our world have instead consumed it, replacing human experience with statistical prediction, algorithmic herding, and mechanical objectivity. Drawing on a wide synthesis of philosophy, media theory, and history, we deconstruct how the "map ate the territory." From Jean Baudrillard's simulacra to the predictive text of modern Large Language Models, we examine the uncanny reality of living inside a model that only knows what the dead have written. If the internet is a séance and your digital profile is a voodoo doll, what happens to the biological original? In this episode, we unpack: The Precession of Simulacra: How credit scores and algorithmic risk models generate the reality they claim to measure. The Bureaucracy of the Dead: Why modern AI is less an artificial intelligence and more an industrialization of our ancestors, echoing the warnings of James Hillman. Digiphrenia & The Voodoo Doll: Douglas Rushkoff's narrative collapse and Jaron Lanier's terrifying metaphor for the modern attention economy. The Numbers Shield: Theodore Porter's revelation that "mechanical objectivity" and rigid quantification are actually defense mechanisms used by fragile institutions. Spheres & Foam: Peter Sloterdijk's theory on why we retreat into fragile, toxic digital bubbles when our shared reality fractures. We didn't just build tools; we built environments. And when the machine becomes the environment, its logic becomes our logic. Join us as we look for the gap in the code—the unquantifiable silence where true human agency still survives. Concepts & Thinkers Discussed: Adam Curtis, Jean Baudrillard, Marshall McLuhan, Naomi Klein, Shoshana Zuboff, James Hillman, and Peter Sloterdijk.

CARE Failing Forward
What you're probably doing wrong with AI: Failures, Lessons, and capturing 60 years of data

CARE Failing Forward

Play Episode Listen Later Feb 19, 2026 27:00


Lindsey Moore was working in AI before most of us knew what it was, and she can tell you the most common mistakes to avoid. Ignoring context, building ever more precise models that provide terrible answers, and assuming that AI will replace smart strategy and human decision-making are three at the top of her list. If you're looking to do more with AI, she recommends you invest in learning good research methods, double down on your data architecture, find ways to counteract bias, and stay skeptical. Developmetrics' Large Language Model was trained on 60 years of USAID documents, and taps into a wealth of expertise that doesn't exist anywhere else. It can tell you what has worked, and what hasn't, over decades of work in dozens of countries. Here's what it tells us: we often repeat the same failures over and over again. Why? Because failures are as much about organizations as they are about tactics. The newest widget won't solve an organizational culture that drives people away from spending time understanding the local context.

Zebras & Unicorns
Here Come the World Models - Making ChatGPT and Co Look Old

Zebras & Unicorns

Play Episode Listen Later Feb 19, 2026 22:27


Today we are used to having AI bots churn out texts, documents, and code for us. But while everyone is still talking about ChatGPT, OpenClaw, and co, the next AI revolution is already brewing. So-called World Models are emerging, and they work fundamentally differently from the Transformer models we use today. What is coming our way? Jakob Steinschaden, co-founder of Trending Topics and newsrooms, and Matteo Rosoli, CEO of newsrooms, talk in today's podcast about:

RunAs Radio
Hacking using AI with Erica Burgess

RunAs Radio

Play Episode Listen Later Feb 18, 2026 47:50


How have large language models impacted hacking? Richard talks to Erica Burgess about her experiences using LLMs for red team hacking, collecting bug bounties, and identifying vulnerabilities in systems. Erica discusses the power of LLMs to generate a variety of viewpoints on a potential exploit and help the hacker think "out of the box." Coordinating multiple agents to attempt a variety of exploits, retrieve information, and otherwise deal with the drudgery parts of hacking means a skilled operator can move faster - what once took days of work can now take minutes. Where does AI in hacking go? Lots of scary places - but also pointing the way to new ways to protect systems!
Links: Burninator Sec
Recorded January 24, 2026

The Route to Networking
E168 - Joe Limardo - Lead AI & Cloud Architect Evangelist

The Route to Networking

Play Episode Listen Later Feb 18, 2026 48:35


This week on The Route to Networking podcast, our CEO George is joined by Joe Limardo, an AI and Cloud Architect Evangelist whose career has evolved from enterprise contact centre technology into hands-on AI implementation across OpenAI, AWS, and Google. As AI dominates headlines, this episode cuts through the noise. Joe shares what the shift actually means for engineers, reflecting on how quickly once "hot" skills can become legacy and why continuous learning is no longer optional. The conversation explores the real breakthrough behind modern AI - access to reasoning, not just information. Large Language Models are changing how engineers build, solve problems, and accelerate their own learning, while businesses use AI to remove repetitive work and unlock higher-value thinking. They also discuss the growing infrastructure demands behind AI, the move from pure coding to architecture, and the three skills Joe believes future-proof careers: APIs, Python, and storytelling. The episode closes with a quick-fire round covering model preferences, common misconceptions, and the one advantage AI still cannot replace - human judgement under uncertainty. Want to stay up to date with new episodes? Follow our LinkedIn page for all the latest podcast updates! Head to: https://www.linkedin.com/company/the-route-to-networking-podcast/ Interested in following a similar career path? Take a look at our jobs page, where you can find your next job opportunity. Head to: www.hamilton-barnes.com/jobs/

The Asia Climate Finance Podcast
Ep79 AI Scrutiny and the Future of Sustainable Impact with Greg Elders, Canbury Insights

The Asia Climate Finance Podcast

Play Episode Listen Later Feb 17, 2026 35:08 Transcription Available


Comments/ideas: ACFpod@outlook.com
How is AI turning climate reporting from a tick-box task into something useful? Greg Elders from Canbury Insights explains why financial materiality is back at the heart of climate strategy. He shows how this shift affects investors, regulators, and companies. We examine Europe pushing for real sustainable impact under CSRD, the US facing ESG uncertainty and mixed signals from regulators, and Asian firms juggling ISSB and TCFD standards while dealing with regional economic pressures. Greg sets out how large language models read annual reports, proxy statements, and local media. They link business growth to physical climate risks such as water scarcity. The result is faster insight and sharper scrutiny. We discuss targeted stewardship, greenwashing risks, and the future of global reporting frameworks. Greg also explains why a single global standard remains a "crazy dream". Automated scrutiny is already changing corporate behaviour, and the pace is only accelerating.
ABOUT GREG: Gregory Elders is Director, North America, at Canbury Insights. He is a recognised sustainable investing expert, leading Canbury's North American operations and client engagements. He advises investors and companies in navigating evolving sustainability and stewardship expectations, building robust assessment and reporting systems, and aligning sustainability strategies with financial performance.
HOST, PRODUCTION, ARTWORK: Joseph Jacobelli | MUSIC: from Ep76 onward, excerpts from Vivaldi's La Follia, played by Luca Jacobelli.

Ich glaube, es hackt!
OpenClaw - AI with System Access, What Could Possibly Go Wrong?

Ich glaube, es hackt!

Play Episode Listen Later Feb 17, 2026 80:56


Ivo did it so we don't have to. He built "Nancy", his AI assistant, with OpenClaw. In this episode we cover the basics, pitch a new app, and then go into the details of his OpenClaw project. * where the magic happens - Large Language Models * AI assistants can do more - tools and MCP * the king of coders - Claude Code * off topic, on topic - Claude Code, Obsidian ... by voice * Mobile Vault - lightspeed software development * OpenClaw - hopping on the hype train * Nancy - hands-on OpenClaw Takes: * Obsidian - a classic second brain * Mobile Vault - a quick way to capture notes * OpenClaw - iPhone moment or nightmare moment * no hype without stories - the conspiracy of agents on the machines' social networks * self-modifying Nancy - an AI agent with "character" and voice messages * iPhone moment or nightmare moment - security questions, abuse potential & OnlyFans scenarios * whoever builds Skynet gets robots. An episode between fascination, efficiency, and a slight Terminator vibe. -- Links for each episode are always at https://podcast.ichglaubeeshackt.de/ If you enjoyed our podcast, we'd appreciate a rating! You can send us feedback, such as topic requests, through any of our channels: Email: podcast@ichglaubeeshackt.de Web: podcast.ichglaubeeshackt.de Instagram: http://instagram.com/igehpodcast

ResearchPod
Redesigning Student Assessment in the Age of ChatGPT

ResearchPod

Play Episode Listen Later Feb 13, 2026 11:56 Transcription Available


ChatGPT has been a game-changer for education. Students now frequently use Generative Artificial Intelligence to complete assignments, but concern is growing about how this affects their academic integrity and critical thinking. Michelle Cheong is a Professor of Information Systems in Education at the Singapore Management University. By evaluating ChatGPT's performance in spreadsheet modelling, her latest research provides important insights into how educators can redesign student assessments to enhance learning at different cognitive levels. Read the original research: doi.org/10.1111/jcal.70035

SlatorPod
#277 LTP Growth, Voice AI Valuations, RWS, Appen, Lionbridge

SlatorPod

Play Episode Listen Later Feb 13, 2026 26:44


Slator's Head of Research Anna Wyndham joins Florian on the pod to discuss Slator's new Pro Guide: Growth Hacks for Language Technology Platforms, describing it as a practical playbook for turning strong AI products into scalable revenue. Florian highlights ElevenLabs' USD 500m raise at a USD 11bn valuation and Synthesia's USD 200m round as evidence that investor appetite for voice AI is accelerating rapidly. Florian connects that funding momentum to product launches, including ElevenLabs' Expressive Mode and YouTube's expanding AI dubbing push. The duo then reviews YouTube's AI dubbing in German and Spanish, finding the intelligibility and naturalness impressive, but rhythm and intonation still mirroring the English source language too closely. Anna turns to new academic research arguing that current text-to-speech evaluation methods under-test real-world deployment factors such as long-form consistency, punctuation handling, and robustness across messy inputs. Anna reports that Appen delivered double-digit revenue growth and an EBITDA turnaround in Q4 FY25, driven by a higher share of generative AI projects and strong momentum in China. Florian closes by touching on prompt injection issues in AI translation tools, RWS's return to growth, and Lionbridge's ownership transition.

This Week in Google (MP3)
IM 857: Taskrabbit Arbitrage - Disposable Code and Automation

This Week in Google (MP3)

Play Episode Listen Later Feb 12, 2026 166:16 Transcription Available


Leo Laporte and Paris Martineau go head-to-head over whether today's AI breakthroughs are truly unprecedented or history repeating itself. Hear what happens when the show's hosts use cutting-edge tools to challenge each other's optimism, skepticism, and predictions for the future of work. Something Big Is Happening Building a C compiler with a team of parallel Claudes Amazon's $8 billion Anthropic investment balloons to $61 billion Google is going for the jugular — by doubling capex and outspending the rest of Big Tech Google's Gemini app has surpassed 750M monthly active users OpenAI's Meta makeover ChatGPT's deep research tool adds a built-in document viewer so you can read its reports Alexa+, Amazon's AI assistant, is now available to everyone in the U.S. Amazon Plans To Use AI To Speed Up TV and Film Production AI didn't kill customer support. It's rebuilding it Worried about AI taking jobs? Ex-Microsoft exec tells parents what kind of education matters most for their kids. A new bill in New York would require disclaimers on AI-generated news content AI Bots Are Now a Significant Source of Web Traffic Crypto.com places $70M bet on AI.com domain ahead of Super Bowl Frontier AI agents violate ethical constraints 30–50% of time, pressured by KPIs How To Think About AI: Is It The Tool, Or Are You? LEO! Reliability of LLMs as medical assistants for the general public: a randomized preregistered study HBR: AI Doesn't Reduce Work—It Intensifies It As AI enters the operating room, reports arise of botched surgeries and misidentified body parts Waymo Exec Admits Remote Operators in Philippines Help Guide US Robotaxis Medicare's new pilot program taps AI to review claims. 
Here's why it's risky Section 230 Turns 30; Both Parties Want It Gone—For Contradictory Reasons Meet Gizmo: A TikTok for interactive, vibe-coded mini apps The Evolution of Bengt Betjänt Uber Eats adds AI assistant to help with grocery shopping Is having AI ghostwrite your Valentine's Day messages a good idea? As Saudi Arabia's 100-Mile Skyscraper Crumbles, They're Replacing It With the Most Desperate Thing Imaginable YouTube Argues It Isn't Social Media in Landmark Tech Addiction Trial 'Man down:' Watch Amazon delivery drone crash in North Texas Understanding Neural Network, Visually Leo's AI Journey The TIMELINE TWiT x 2 in Super Bowl commercials Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: preview.modulate.ai Melissa.com/twit spaceship.com/twit

Qiological Podcast
447 AI Acubot Dispatch • Vanessa Menendez Covelo

Qiological Podcast

Play Episode Listen Later Feb 10, 2026 80:47


In clinical work, pattern and intuition inform each other; treatment decisions arise somewhere between what we can measure and what we can only sense. This episode investigates that in-between space, where human “knowing” and the patterning of Large Language Models merge in uncanny ways.
Vanessa Menendez-Covelo has been a guest on the podcast before, and recently she has been exploring the ever-changing frontier of AI as both a former computer scientist and a practicing acupuncturist.
Listen in to this discussion as we explore how AI “hallucinations” might be creative sparks of fertile imagination; what a tongue-reading machine in a café might mean for diagnosis; the uneasy line between health equity and surveillance; and why shame, not ignorance, may be the real barrier to better care.

Fraudology Podcast
The AI Armory—Reverse Engineering Fraud Tools (with Robert Capps)

Fraudology Podcast

Play Episode Listen Later Feb 10, 2026 46:05


Fraudology is presented by Sardine. Request a 1:1 product demo at sardine.ai
In this episode of Fraudology, Karisse Hendrick welcomes back elite fraud fighter and Stratovera CEO Robert Capps to discuss the shifting power balance in the age of AI. Robert shares a fascinating "thought experiment" where he used Large Language Models (LLMs) to reverse engineer obfuscated JavaScript, proving that even non-technical attackers can now identify and dismantle complex front-end fraud tools in real time.
The conversation dives deep into the "Build vs. Buy" debate, with Robert cautioning organizations that the true cost of building internal tools isn't just the initial code—it's the ongoing "immune response" required to fight an AI-powered adversary that never sleeps. From the "radioactive decay" of legacy device ID to the necessity of designing "entropy" into system responses, this episode is a masterclass in modern fraud strategy.
Fraudology is hosted by Karisse Hendrick, a fraud fighter with decades of experience advising hundreds of the biggest ecommerce companies in the world on fraud, chargebacks, and other forms of abuse impacting a company's bottom line. Connect with her on LinkedIn. She brings her experience, expertise, and extensive network of experts to this podcast weekly, on Tuesdays.

GOTO - Today, Tomorrow and the Future
Handling AI-Generated Code: Challenges & Best Practices • Roman Zhukov & Damian Brady

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Feb 10, 2026 29:02


This interview was recorded for GOTO Unscripted: https://gotopia.tech
Check out more here: https://gotopia.tech/articles/419
Roman Zhukov - Principal Architect - Security Communities Lead at Red Hat
Damian Brady - Staff Developer Advocate at GitHub
RESOURCES
Roman: https://github.com/rozhukov | https://www.linkedin.com/in/rozhukov
Damian: https://bsky.app/profile/damovisa.me | https://hachyderm.io/@damovisa | https://x.com/damovisa | https://github.com/Damovisa | https://www.linkedin.com/in/damianbrady | https://damianbrady.com.au
Links: https://www.redhat.com/en/blog/ai-assisted-development-and-open-source-navigating-legal-issues
DESCRIPTION
Roman Zhukov (Red Hat) and Damian Brady (GitHub) explore the evolving landscape of AI-assisted software development. They discuss how AI tools are transforming developer workflows, making developers about 20% faster on simple tasks while being 19% slower on complex ones.
The conversation covers critical topics including code quality and trust, security concerns with AI-generated code, the importance of education and best practices, and how developer roles are shifting from syntax experts to system architects. 
Both experts emphasize that AI tools serve as amplifiers rather than replacements, with humans remaining essential in the loop for quality, security, and licensing compliance.
RECOMMENDED BOOKS
Phil Winder • Reinforcement Learning • https://amzn.to/3t1S1VZ
Alex Castrounis • AI for People and Business • https://amzn.to/3NYKKTo
Holden Karau, Trevor Grant, Boris Lublinsky, Richard Liu & Ilan Filonenko • Kubeflow for Machine Learning • https://amzn.to/3JVngcx
Kelleher & Tierney • Data Science (The MIT Press Essential Knowledge series) • https://amzn.to/3AQmIRg
Lakshmanan, Robinson & Munn • Machine Learning Design Patterns • https://amzn.to/2ZD7t0x
Lakshmanan, Görner & Gillard • Practical Machine Learning for Computer Vision • https://amzn.to/3m9HNjP
CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks: https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

The Builders Club Startup Founders Podcast
How Data security is evolving with AI | Arshad Ahmad, ZS - Director of Information Security

The Builders Club Startup Founders Podcast

Play Episode Listen Later Feb 10, 2026 58:58


In this episode of The Builders Club Podcast, Sohail Khan sits down with Arshad Ahmad, Director of Information Security at ZS, to decode one of the most critical challenges of our time: protecting data in the age of Artificial Intelligence.
The rapid rise of GenAI has created a paradox for modern enterprises. While AI offers unprecedented productivity, it also opens new doors for sophisticated cyber threats. Arshad breaks down how security leaders are shifting from a "defensive" posture to an "adaptive" one, ensuring that innovation doesn't come at the cost of integrity.
Key Insights from Arshad Ahmad:
1. AI as a Double-Edged Sword: How AI-driven automation is revolutionizing threat detection while simultaneously enabling hackers to launch more complex, personalized attacks.
2. The Privacy Paradox: Strategies for organizations to leverage Large Language Models (LLMs) and internal data without leaking proprietary secrets into the public domain.
3. The "Human Firewall" in a Tech-First World: Why technical controls are only half the battle and why building a security-first culture is more important than ever.
4. The Evolving CISO Role: How security leadership has shifted from being "the department of No" to a strategic business partner that enables safe growth.
Whether you are a cybersecurity professional, a tech leader, or an entrepreneur navigating the digital landscape, this conversation offers a masterclass in staying resilient in an AI-powered world.
#CyberSecurity #AI #DataSecurity #InformationSecurity #TheBuildersClub #CISO #TechLeadership #GenAI

AI Knowhow
AI and RevOps: The Disciplines Behind Predictable Revenue

AI Knowhow

Play Episode Listen Later Feb 9, 2026 35:25


Predictable revenue doesn't come from a better forecast; it comes from better habits. If you are tired of searching for new tools while ignoring the daily disciplines that actually drive growth, this conversation is for you. Knownwell's Courtney Baker joins David DeWolf and Mohan Rao to move Revenue Operations from theory to the calendar. They break down the specific "battle rhythms"—like weekly portfolio triage and monthly capacity reviews—that move your team from reactive scrambling to proactive, data-driven execution. We also share the second half of Pete Buer's conversation with NYU Professor Dr. Vasant Dhar, who explains his "Trust Heat Map" and why the high variability of Large Language Models remains a major hurdle for business adoption. All of that PLUS, Pete also steps in to dissect Deloitte's controversial decision to replace traditional job titles with alphanumeric codes, a clear signal that AI is collapsing the traditional consulting pyramid. Watch the full episode on YouTube: https://youtu.be/MXUfRnq1ylk  Register for our 2/25 webinar on RevOps for the Full Client Lifecycle: Knownwell.com/revops

The Jedburgh Podcast
#187: Communication Wins Wars - Former Chief Technology And Innovation Officer at USSOCOM & US Space Force Dr. Lisa Costa

The Jedburgh Podcast

Play Episode Listen Later Feb 5, 2026 50:56


Communication is the backbone of every military operation. How well our forces talk to each other across air, land, sea and space is what sets the American military apart from everyone else. Without communication leaders can't lead, and militaries can't win. From the Global Special Operations Symposium in Athens, Greece, Fran Racioppi sat down with Dr. Lisa Costa, a leading technologist, former Chief Information Officer for U.S. Special Operations Command, and the first Chief Technology and Innovation Officer for the U.S. Space Force, to discuss how innovation, cyber, and modernization are reshaping Special Operations across all domains.
Dr. Costa brings decades of experience at the crossroads of defense, technology, and strategic innovation. From running one of the Department of Defense's largest IT enterprises supporting elite global SOF operations to spearheading digital transformation efforts in the Space Force, she has helped architect the future of how our forces fight, communicate, and adapt.
She addressed the evolving threat landscape, including cyber attacks, space domain challenges and why staying ahead through technology, data, and innovation is no longer optional. She emphasized the importance of agility, integration, and forward-thinking capability as the bedrock of a modern force ready for tomorrow's missions.
This discussion is about building advantage through technology, strengthening alliances across domains, and protecting America by ensuring the force evolves with the threat.
Highlights
0:00 Introduction
1:36 Welcome to GSOF Europe
3:15 USSOCOM CIO & Space Force's CTIO
6:02 Communications Evolution
8:51 DoD Civilian Workforce
13:43 Special Operations LSCO
16:41 SOF Space Cyber Triad
19:24 The Space Battlefield
22:17 Lunar South Pole
24:35 War Today
26:18 Combatting misinformation
28:38 Defining AI
30:22 Human in the loop
31:33 Guardrailing AI Weaponization
34:06 Advancing Time to Technology
35:48 Citizen Based
37:06 Ground Level Innovation
40:46 Buying Commercial Resources
45:10 The Next Battlefield
Quotes
“I might be the only person wearing both a SOCOM and Space Force pin.”
“Communications is absolutely critical.”
“It has gone from big bulky equipment to a binary signal.”
“Civilians are part of the force.”
“I look at SOF as the tool and capability to prevent us going to war.”
“The best battle space is the one we never have to put a boot into.”
“There is not even a position, navigation, and timing capability on the lunar surface.”
“Is it the person who discovered it or the person who gets there first?”
“We're fighting for data.”
“It's not there because we're using AI.”
“I do not define AI as just Large Language Models.”
“There are going to be mission specific incidents where AI is going to have to be trusted to make that decision.”
“Don't sign up for Chinese AI.”
“Operation Spiderweb was one pilot to every drone. That is not scalable.”
“It's going to have to take everyone.”
“It comes down to the operational planners that are doing that risk assessment.”
“I believe that we will rely greatly on commercial assets.”
“There are areas of space that we have not taken advantage of.”
“I hope that the future of the battle space is much more cognitive.”
“I always put the operator in charge of a project, not a PhD.”
“Always prepare for the next unknown mission.”
Follow the Jedburgh Podcast and the Green Beret Foundation on social media. Listen on your favorite podcast platform, read on our website, and watch the full video version on YouTube as we show why America must continue to lead from the front, no matter the challenge.

Productivity Smarts
Jet Prompt Optimizer: Turning AI into a True Productivity Tool

Productivity Smarts

Play Episode Listen Later Feb 4, 2026 11:29


In this episode, host Gerald J. Leonard pulls back the curtain on his personal journey into the world of Artificial Intelligence and accelerated learning. What began as a physical constraint, losing the ability to walk before a TEDx talk, became a catalyst for discovering the neuroscience of learning and the superpower of accelerated adaptation. Gerald shares how he applied principles of music, meditation, and transformational learning techniques to not only recover but to thrive, eventually tackling four simultaneous Ivy League courses in AI. From this intense period of study and experimentation, the Jet Prompt Optimizer was born, a custom tool designed to solve the universal problem of communicating effectively with Large Language Models (LLMs). This conversation is a deep dive into how curiosity and systematic learning can lead to innovation. Gerald explores the direct link between mastering new skills, building intelligent systems, and reclaiming personal time and freedom. He reveals how the right tools can transform chaos into calm, automate heavy lifting, and allow us to focus on what makes us uniquely human, creativity and connection.
What We Discuss
[00:00] Introduction
[02:04] Gerald's superpower & personal story
[02:27] Discovery of accelerated learning techniques
[03:16] Inspiration for Jet Prompt Optimizer
[05:12] Development journey of Jet Prompt Optimizer
[06:31] Patents and unique approach
[07:24] Benefits for clients and personal life
[08:42] Course and community plans
[09:53] How to connect with the guest
[10:47] Podcast closing & call to action
Notable Quotes
[02:16] "I lost the ability to walk six weeks before my TEDx talk, and I was able to recover because I'm a musician as well." – Gerald J. Leonard
[04:06] "AI is not broken. We just don't know how to communicate with it clearly." – Gerald J. Leonard
[06:05] "I built Six Sigma and evaluation frameworks into the prompts so AI gives you what you actually want." – Gerald J. Leonard
[08:23] "The systems are doing the heavy lifting and we can be humans and connect on an emotional level." – Gerald J. Leonard
Resources and Links
Productivity Smarts Podcast Website - productivitysmartspodcast.com
Gerald J. Leonard Website - geraldjleonard.com
Turnberry Premiere website - turnberrypremiere.com
Scheduler - vcita.com/v/geraldjleonard
Kiva is a loan, not a donation, allowing you to cycle your money and create a personal impact worldwide. https://www.kiva.org/lender/topmindshelpingtopminds

Effective Altruism Forum Podcast
[Linkpost] “Inference Scaling Reshapes AI Governance” by Toby_Ord

Effective Altruism Forum Podcast

Play Episode Listen Later Feb 2, 2026 34:49


This is a link post. The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance. The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab. Rapid scaling of inference-at-deployment would: lower the importance of open-weight models (and of securing the weights of closed models), reduce the impact of the first human-level models, change the business model for frontier AI, reduce the need for power-intense data centres, and derail the current paradigm of AI governance via training compute thresholds. Rapid scaling of inference-during-training would have more ambiguous effects that range from a revitalisation of pre-training scaling to a form of recursive self-improvement via iterated distillation and amplification.
The end of an era — for both training and governance
The intense year-on-year scaling up of AI training runs has been one of the most dramatic and stable markers of the Large Language Model era. Indeed it had been widely taken to be a permanent fixture of the AI landscape and the basis of many approaches to [...]
Outline:
(01:06) The end of an era -- for both training and governance
(05:24) Scaling inference-at-deployment
(06:42) Reducing the number of simultaneously served copies of each new model
(08:45) Reducing the value of securing model weights
(09:30) Reducing the benefits and risks of open-weight models
(10:05) Unequal performance for different tasks and for different users
(12:08) Changing the business model and industry structure
(12:50) Reducing the need for monolithic data centres
(17:16) Scaling inference-during-training
(28:07) Conclusions
(30:17) Appendix. Comparing the costs of scaling pre-training vs inference-at-deployment
First published: February 2nd, 2026
Source: https://forum.effectivealtruism.org/posts/RnsgMzsnXcceFfKip/inference-scaling-reshapes-ai-governance
Linkpost URL: https://www.tobyord.com/writing/inference-scaling-reshapes-ai-governance
Narrated by TYPE III AUDIO.
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
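The trade-off the post analyses, whether total compute is dominated by one-off pre-training or by lifetime inference-at-deployment, can be sketched in a few lines. All numbers below are hypothetical illustrations chosen to show the crossover, not figures from the post.

```python
# Toy comparison of pre-training vs inference-at-deployment compute.
# Every number here is a made-up illustration, not a figure from the post.
pretrain_flop = 1e25          # one-off pre-training budget (FLOP)
infer_flop_per_query = 1e15   # inference cost per served query (FLOP)
queries_served = 1e10         # lifetime queries for the deployed model

# Lifetime deployment compute scales with usage, not with training.
deployment_flop = infer_flop_per_query * queries_served

print(f"pre-training: {pretrain_flop:.0e} FLOP")
print(f"deployment:   {deployment_flop:.0e} FLOP")
print(deployment_flop / pretrain_flop)  # 1.0
```

At these made-up numbers lifetime inference already equals the pre-training budget, so scaling inference-at-deployment another 10x would dominate total compute, which is the regime in which governance via training-compute thresholds stops tracking where the compute actually goes.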

Subtle Revolution
22. A Philosophical Conversation with Anthropic AI: Part I

Subtle Revolution

Play Episode Listen Later Jan 31, 2026 16:39


AI, especially Large Language Models (LLMs) like Anthropic's Claude, ChatGPT, Gemini, Grok, and Copilot, is becoming a mainstay of daily use in many people's lives. This conversation between myself and “Claude” (narrated by Dennis Hackney) provides an interesting contrast to more basic “daily task” sorts of prompts, attempting a deeper subject matter.
Thanks to special guest Dennis Hackney for his narration of “Claude” in this episode.
YouTube: https://youtube.com/@subtlerevolution1
Email: subtlerevolution1@gmail.com

a16z
“Anyone Can Code Now” - Netlify CEO Talks AI Agents

a16z

Play Episode Listen Later Jan 30, 2026 57:59


Netlify's CEO, Matt Biilmann, reveals a seismic shift nobody saw coming: 16,000 daily signups—five times last year's rate—and 96% aren't coming from AI coding tools. They're everyday people accidentally building React apps through ChatGPT, then discovering they need somewhere to deploy them. The addressable market for developer tools just exploded from 17 million JavaScript developers to 3 billion spreadsheet users, but only if your product speaks fluent AI—which is why Netlify's founder now submits pull requests he built entirely through prompting, never touching code himself, and why 25% of users immediately copy error messages to LLMs instead of debugging manually. The web isn't dying to agents; it's being reborn by them, with CEOs coding again and non-developers shipping production apps while the entire economics of software—from perpetual licenses to subscriptions to pure usage—gets rewritten in real-time.
Resources:
Follow Matt Biilmann on X: https://x.com/biilmann
Follow Martin Casado on X: https://x.com/martin_casado
Follow Erik Torenberg on X: https://x.com/eriktorenberg
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures. 

Mystery AI Hype Theater 3000
Hell All the Way Down, 2022–2025

Mystery AI Hype Theater 3000

Play Episode Listen Later Jan 30, 2026 40:34


This is a special episode, and it's not like our usual livestream recordings. Instead, our producer Ozzy dug through the Fresh AI Hell archives to create a supercut of Alex's improvised transitions. She's made up dozens of skits and songs about the demons of AI Hell, based on weekly prompts from Emily and listeners. Finally, hear all the lore together in one place!Check out future streams on Twitch. Meanwhile, send us any AI Hell you see. Our merch store is now live on the DAIR website! Find our book, The AI Con, here. Subscribe to our newsletter via Buttondown. Follow us! Emily Bluesky: emilymbender.bsky.social Mastodon: dair-community.social/@EmilyMBender Alex Bluesky: alexhanna.bsky.social Mastodon: dair-community.social/@alex Twitter: @alexhanna Music by Toby Menon.Artwork by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman.

Mid Mod Remodel
AI is NOT Your Mid-Century Design Friend

Mid Mod Remodel

Play Episode Listen Later Jan 29, 2026 51:20 Transcription Available


You may already be using AI to get input on many areas of your life. And you may feel like the convenience outweighs its many negatives. Maybe. But here on this podcast, where we talk about making the right choices for your home, let me assure you: you will not get good advice, or even peace of mind about your options, by asking AI.
In Today's Episode You'll Hear:
Why I'm so irked with AI.
How AI will lead you astray on your remodeling journey.
Where to find better answers for your home and your life.
Get the full show notes with all the trimmings at https://www.midmod-midwest.com/2304
Like and subscribe at Apple | Spotify | YouTube.
Want us to create your mid-century master plan? Apply here! Or get my course, Ready to Remodel.

NARPM Radio
Disrupting the Industry: Inside DoorLoop's Next-Gen Property Management Revolution

NARPM Radio

Play Episode Listen Later Jan 28, 2026 30:52


Jan. 28, 2026 NARPM podcast host Pete Neubig interviews Ori Tamuz, CEO and Co-Founder of DoorLoop, an all-in-one property management software. Tamuz, a serial entrepreneur who came out of early retirement, explains how he decided to disrupt the property management industry by building a comprehensive platform that specifically addresses the complex accounting needs of property owners. Tamuz details how DoorLoop is investing heavily in technology and AI, particularly Large Language Models (LLMs), to automate workflows, resolve tenant requests, and build a "next generation property management software". The discussion also covers the platform's open API and the belief that AI will enhance, rather than replace, human property managers.

Crazy Wisdom
Episode #525: The Billion-Dollar Architecture Problem: Why AI's Innovation Loop is Stuck

Crazy Wisdom

Play Episode Listen Later Jan 23, 2026 53:38


In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes Roni Burd, a data and AI executive with extensive experience at Amazon and Microsoft, for a deep dive into the evolving landscape of data management and artificial intelligence in enterprise environments. Their conversation explores the longstanding challenges organizations face with knowledge management and data architecture, from the traditional bronze-silver-gold data processing pipeline to how AI agents are revolutionizing how people interact with organizational data without needing SQL or Python expertise. Burd shares insights on the economics of AI implementation at scale, the debate between one-size-fits-all models versus specialized fine-tuned solutions, and the technical constraints that prevent companies like Apple from upgrading services like Siri to modern LLM capabilities, while discussing the future of inference optimization and the hundreds-of-millions-of-dollars cost barrier that makes architectural experimentation in AI uniquely expensive compared to other industries.
Timestamps
00:00 Introduction to Data and AI Challenges
03:08 The Evolution of Data Management
05:54 Understanding Data Quality and Metadata
08:57 The Role of AI in Data Cleaning
11:50 Knowledge Management in Large Organizations
14:55 The Future of AI and LLMs
17:59 Economics of AI Implementation
29:14 The Importance of LLMs for Major Tech Companies
32:00 Open Source: Opportunities and Challenges
35:19 The Future of AI Inference and Hardware
43:24 Optimizing Inference: The Next Frontier
49:23 The Commercial Viability of AI Models
Key Insights
1. Data Architecture Evolution: The industry has evolved through bronze-silver-gold data layers, where bronze is raw data, silver is cleaned/processed data, and gold is business-ready datasets. However, this creates bottlenecks as stakeholders lose access to original data during the cleaning process, making metadata and data cataloging increasingly critical for organizations.
2. AI Democratizing Data Access: LLMs are breaking down technical barriers by allowing business users to query data in plain English without needing SQL, Python, or dashboarding skills. This represents a fundamental shift from requiring intermediaries to direct stakeholder access, though the full implications remain speculative.
3. Economics Drive AI Architecture Decisions: Token costs and latency requirements are major factors determining AI implementation. Companies like Meta likely need their own models because paying per-token for billions of social media interactions would be economically unfeasible, driving the need for self-hosted solutions.
4. One Model Won't Rule Them All: Despite initial hopes for universal models, the reality points toward specialized models for different use cases. This is driven by economics (smaller models for simple tasks), performance requirements (millisecond response times), and industry-specific needs (medical, military terminology).
5. Inference is the Commercial Battleground: The majority of commercial AI value lies in inference rather than training. Current GPUs, while specialized for graphics and matrix operations, may still be too general for optimal inference performance, creating opportunities for even more specialized hardware.
6. Open Source vs Open Weights Distinction: True open source in AI means access to architecture for debugging and modification, while "open weights" enables fine-tuning and customization. This distinction is crucial for enterprise adoption, as open weights provide the flexibility companies need without starting from scratch.
7. Architecture Innovation Faces Expensive Testing Loops: Unlike database optimization where query plans can be easily modified, testing new AI architectures requires expensive retraining cycles costing hundreds of millions of dollars. This creates a potential innovation bottleneck, similar to aerospace industries where testing new designs is prohibitively expensive.
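The bronze/silver/gold layering described above can be sketched in a few lines of Python. This is a toy illustration: the records and cleaning rules are hypothetical, not anything from the episode.

```python
# Minimal sketch of a bronze -> silver -> gold data pipeline.
# Records and validation rules are hypothetical illustrations.
from collections import defaultdict

# Bronze: raw records exactly as ingested, bad rows included.
bronze = [
    {"customer_id": 1, "amount": "10.5"},
    {"customer_id": 2, "amount": "oops"},    # unparseable amount
    {"customer_id": 2, "amount": "20.0"},
    {"customer_id": None, "amount": "5.0"},  # missing key
]

# Silver: cleaned and typed -- drop rows that fail validation.
silver = []
for row in bronze:
    try:
        amount = float(row["amount"])
    except ValueError:
        continue  # stakeholders lose this row unless they go back to bronze
    if row["customer_id"] is None:
        continue
    silver.append({"customer_id": row["customer_id"], "amount": amount})

# Gold: business-ready aggregate stakeholders can query directly.
gold = defaultdict(float)
for row in silver:
    gold[row["customer_id"]] += row["amount"]

print(dict(gold))  # {1: 10.5, 2: 20.0}
```

The bottleneck Burd describes is visible even here: once the "oops" row is dropped at the silver stage, anyone working from silver or gold can no longer see it, which is why metadata and cataloging back to bronze matter.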

One in Ten
Child Abuse, AI, and the Forensic Interview

One in Ten

Play Episode Listen Later Jan 22, 2026 38:48 Transcription Available


In this episode of 'One in Ten,' host Teresa Huizar speaks with Liisa Järvilehto, a psychologist and Ph.D. candidate at the University of Helsinki, about the positive uses of AI in child abuse investigations and forensic interviews. The conversation addresses the common misuse of AI and explores its potential in assisting professionals by proposing hypotheses, generating question sets, and more. The discussion delves into the application of large language models (LLMs) in generating alternative hypotheses and the nuances of using these tools to avoid confirmation bias in interviews. Huizar and Järvilehto also touch on the practical implications for current practitioners and future research directions.
Time Stamps:
00:00 Introduction to the Episode
00:22 Exploring AI in Child Abuse Investigations
01:06 Introducing Liisa Järvilehto and Her Research
01:48 Challenges in Child Abuse Investigations
04:24 The Role of Large Language Models
06:28 Addressing Bias in Investigations
09:13 Hypothesis Testing in Forensic Interviews
12:18 Study Design and Findings
25:54 Implications for Practitioners
33:41 Future Research Directions
36:49 Conclusion and Final Thoughts
Resources: Pre-interview hypothesis generation: large language models (LLMs) show promise for child abuse investigations
Support the show
Did you like this episode? Please leave us a review on Apple Podcasts.

Waking Up With AI
Confessions of a Large Language Model

Waking Up With AI

Play Episode Listen Later Jan 22, 2026 22:41


In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers' proposed “confessions” framework, designed to monitor for and detect dishonest outputs. They break down the researchers' proof-of-concept results and the framework's resilience to reward hacking, along with its limits in connection with hallucinations. Then they turn to Google DeepMind's “Distributional AGI Safety,” exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors' proposed four-layer safety stack.
Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence

a16z
Martin Casado on the Demand Forces Behind AI

a16z

Play Episode Listen Later Jan 21, 2026 27:59


In this feed drop from The Six Five Pod, a16z General Partner Martin Casado discusses how AI is changing infrastructure, software, and enterprise purchasing. He explains why current constraints are driven less by technical limits and more by regulation, particularly around power, data centers, and compute expansion.
The episode also covers how AI is affecting software development, lowering the barrier to coding without eliminating the need for experienced engineers, and how agent-driven tools may shift infrastructure decision-making away from humans.
Watch more from Six Five Media: https://www.youtube.com/@SixFiveMedia
Resources:
Follow Martin Casado on X: https://twitter.com/martin_casado
Follow Patrick Moorhead on X: https://twitter.com/PatrickMoorhead
Follow Daniel Newman on X: https://twitter.com/danielnewmanUV
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures. 

a16z
From Code Search to AI Agents: Inside Sourcegraph's Transformation with CTO Beyang Liu

a16z

Play Episode Listen Later Jan 20, 2026 46:35


Sourcegraph's CTO just revealed why 90% of his code now comes from agents—and why the Chinese models powering America's AI future should terrify Washington. While Silicon Valley obsesses over AGI apocalypse scenarios, Beyang Liu's team discovered something darker: every competitive open-source coding model they tested traces back to Chinese labs, and US companies have gone silent after releasing Llama 3. The regulatory fear that killed American open-source development isn't hypothetical anymore—it's already handed the infrastructure layer of the AI revolution to Beijing, one fine-tuned model at a time.

Resources:
Follow Beyang Liu on X: https://x.com/beyang
Follow Martin Casado on X: https://x.com/martin_casado
Follow Guido Appenzeller on X: https://x.com/appenz

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

a16z
How Foundation Models Evolved: A PhD Journey Through AI's Breakthrough Era

a16z

Play Episode Listen Later Jan 16, 2026 57:06


The Stanford PhD who built DSPy thought he was just creating better prompts—until he realized he'd accidentally invented a new paradigm that makes LLMs actually programmable. While everyone obsesses over whether LLMs will get us to AGI, Omar Khattab is solving a more urgent problem: the gap between what you want AI to do and your ability to tell it, the absence of a real programming language for intent. He argues the entire field has been approaching this backwards, treating natural language prompts as the interface when we actually need something between imperative code and pure English, and the implications could determine whether AI systems remain unpredictable black boxes or become the reliable infrastructure layer everyone's betting on.

Follow Omar Khattab on X: https://x.com/lateinteraction
Follow Martin Casado on X: https://x.com/martin_casado
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Show on Spotify
Listen to the a16z Show on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

The John Batchelor Show
S8 Ep291: TRANSFORMERS AND OPENAI Colleague Gary Rivlin. The development of the "transformer" paper at Google, the rise of large language models like GPT, and OpenAI's pivot from non-profit to a Microsoft-backed entity. NUMBER 13

The John Batchelor Show

Play Episode Listen Later Jan 9, 2026 11:10


TRANSFORMERS AND OPENAI Colleague Gary Rivlin. The development of the "transformer" paper at Google, the rise of large language models like GPT, and OpenAI's pivot from non-profit to a Microsoft-backed entity. NUMBER 13. 1950