Podcasts about GPT

  • 4,447 PODCASTS
  • 13,178 EPISODES
  • 41m AVG DURATION
  • 5 DAILY NEW EPISODES
  • Feb 15, 2026 LATEST

POPULARITY (2019–2026)

Best podcasts about GPT

Show all podcasts related to GPT

Latest podcast episodes about GPT

Hírstart Robot Podcast
Sunray: Ukraine would shoot down Russian drones with its own laser weapon

Feb 15, 2026 · 3:06


Sunray: Ukraine would shoot down Russian drones with its own laser weapon. Trump used AI to help abduct Nicolás Maduro. The trouble on the International Space Station is over. Chinese researchers have pulled off what long seemed impossible: turning desert into forest. Google AI is moving into your life: new features that will shape your everyday routine. A dirty trick: this is how they would ruin the Olympics and the football World Cup. OpenAI is shutting down GPT-4o for good. What does ChatGPT advise the next government? Find our other episodes at podcast.hirstart.hu. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Hírstart Robot Podcast - Tech hírek
Sunray: Ukraine would shoot down Russian drones with its own laser weapon

Feb 15, 2026 · 3:06


Sunray: Ukraine would shoot down Russian drones with its own laser weapon. Trump used AI to help abduct Nicolás Maduro. The trouble on the International Space Station is over. Chinese researchers have pulled off what long seemed impossible: turning desert into forest. Google AI is moving into your life: new features that will shape your everyday routine. A dirty trick: this is how they would ruin the Olympics and the football World Cup. OpenAI is shutting down GPT-4o for good. What does ChatGPT advise the next government? Find our other episodes at podcast.hirstart.hu. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Quantum Revolution Now
Zero to Quantum in 60 Minutes: How AI Vibe Coding Is Minting the Next $400K Developers

Feb 15, 2026 · 22:16


Step into the cutting-edge world of quantum computing in 2026 with the Qubit Value Podcast, where hosts explore the revolutionary "Twin Launch" of GPT-5.3 Codex and Claude 4.6 Opus. This episode demystifies the paradigm shift of "vibe coding," a new era where developers manage the high-level physics and logic of an experiment while AI handles the rigorous syntax. Listeners are guided through the "money trail" of modern development, from the subscription costs of AI-native IDEs like Cursor to the high-stakes execution fees of running circuits on physical machines like IonQ Aria. Whether you're a hobbyist or an enterprise architect, this episode offers a witty and essential roadmap to navigating the financial hurdles and immense rewards of the $50 billion quantum sector. Want to hear more? Send a message to Qubit Value

HTML All The Things - Web Development, Web Design, Small Business
Web News: AI Competition is Out Of Control

Feb 14, 2026 · 26:27


The pace of AI model releases is becoming almost impossible to follow. In just two weeks we saw GPT-5.3-Codex, GPT-5.2 updates, Gemini 3 Deep Think upgrades, Claude Opus 4.6 with a 1M context window in beta, Qwen3-Coder-Next, GLM-5, MiniMax M2.5, Cursor Composer 1.5, and even Kimi 2.5 just outside the window. This isn't a quarterly product cycle anymore - it's a daily arms race. In this episode Matt and Mike break down what this acceleration means for developers, open source, frontier labs, and the broader industry. Are we witnessing healthy innovation, or unsustainable velocity? At what point does this stabilize - if it ever does? If you're trying to build, learn, or compete in AI right now… this conversation is for you.
Show Notes: https://www.htmlallthethings.com/podcast/ai-competition-is-out-of-control

Hacker News Recap
February 13th, 2026 | Fix the iOS keyboard before the timer hits zero or I'm switching back to Android

Feb 14, 2026 · 15:38


This is a recap of the top 10 posts on Hacker News on February 13, 2026. This podcast was generated by wondercraft.ai
(00:30): Fix the iOS keyboard before the timer hits zero or I'm switching back to Android. Original post: https://news.ycombinator.com/item?id=47003064&utm_source=wondercraft_ai
(01:59): Monosketch. Original post: https://news.ycombinator.com/item?id=47001871&utm_source=wondercraft_ai
(03:28): MinIO repository is no longer maintained. Original post: https://news.ycombinator.com/item?id=47000041&utm_source=wondercraft_ai
(04:58): Skip the Tips: A game to select "No Tip" but dark patterns try to stop you. Original post: https://news.ycombinator.com/item?id=46997519&utm_source=wondercraft_ai
(06:27): The EU moves to kill infinite scrolling. Original post: https://news.ycombinator.com/item?id=47007656&utm_source=wondercraft_ai
(07:56): OpenAI has deleted the word 'safely' from its mission. Original post: https://news.ycombinator.com/item?id=47008560&utm_source=wondercraft_ai
(09:26): GPT-5.2 derives a new result in theoretical physics. Original post: https://news.ycombinator.com/item?id=47006594&utm_source=wondercraft_ai
(10:55): Ring owners are returning their cameras. Original post: https://news.ycombinator.com/item?id=46999545&utm_source=wondercraft_ai
(12:25): Lena by qntm (2021). Original post: https://news.ycombinator.com/item?id=46999224&utm_source=wondercraft_ai
(13:54): An AI Agent Published a Hit Piece on Me – More Things Have Happened. Original post: https://news.ycombinator.com/item?id=47009949&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Product Showdown: GPT-5.3 Codex vs. Claude Opus 4.6 — The Speed Demon vs. The Architect

Feb 14, 2026 · 26:53


In this rapid-fire "Product Showdown," we test drive the two hottest coding models on the planet: OpenAI's GPT-5.3 Codex and Anthropic's Claude Opus 4.6. Are they direct competitors, or do they serve completely different masters? We break down the strengths of each: Codex for "execution" and Opus for "reasoning." If you are a developer trying to decide which subscription to keep, this 2-minute breakdown is for you.
Key Takeaways:
GPT-5.3 Codex: Best for fast iteration, terminal workflows, and shipping code quickly.
Claude Opus 4.6: Best for deep reasoning, long-context architecture, and complex problem-solving.
The Verdict: Stop looking for a winner. Use Codex for doing and Opus for thinking.
Keywords: GPT-5.3 Codex, Claude Opus 4.6, AI Coding Benchmarks, Dev Tools, Agentic Workflows, OpenAI vs Anthropic

Quantum Revolution Now
Beyond Predictions: GPT-6 and the Dawn of Quantum-Native Intelligence

Feb 14, 2026 · 14:00


Step into the future with the latest episode of the Qubit Value Podcast, where the rapid evolution of quantum computing meets the cutting-edge of artificial intelligence. Recorded on February 14, 2026, this episode explores the breathtaking transition from the early limitations of GPT-4 to the groundbreaking capabilities of the new GPT-5.3 Codex. Join the hosts as they discuss how AI has moved beyond simple text prediction to mastering complex quantum algorithms, optimizing hardware design for chips like Google's "Willow," and even debugging legacy code in seconds. From the "Physics Awakening" of 2025 to the philosophical shift toward "quantum-native" intelligence, this conversation is an essential listen for anyone curious about how AI is not just writing code anymore—it's helping us reinvent the laws of physics. Want to hear more? Send a message to Qubit Value

Azeem Azhar's Exponential View
Inside the economics of OpenAI (exclusive research)

Feb 13, 2026 · 49:46


Welcome to Exponential View, the show where I explore how exponential technologies such as AI are reshaping our future. I've been studying AI and exponential technologies at the frontier for over ten years. Each week, I share some of my analysis or speak with an expert guest to shed light on a particular topic. To keep up with the Exponential transition, subscribe to this channel or to my newsletter: https://www.exponentialview.co/
In this episode, I'm joined by Jaime Sevilla, founder of Epoch AI; Hannah Petrovic from my team at Exponential View; and financial journalist Matt Robinson from AI Street. Together we investigate a fundamental question: do the economics of AI companies actually work? We analysed OpenAI's financials from public data to examine whether their revenues can sustain the staggering R&D costs of frontier models. The findings reveal a picture far more precarious than many assume; we also explore where the real infrastructure bottlenecks lie, why compute demand will dwarf energy constraints, and what the rise of long-running agentic workloads means for the entire industry. Read the study here: https://www.exponentialview.co/p/inside-openais-unit-economics-epoch-exponentialview
We covered:
(00:00) Do the economics of frontier AI actually work?
(02:48) Piecing together OpenAI's finances from public data
(05:24) GPT-5's "rapidly depreciating asset" problem
(13:25) Why OpenAI is flirting with ads
(17:31) If you were Sam Altman, what would you do differently?
(22:54) Energy vs. GPUs; where the real infrastructure bottleneck lies
(29:15) What surging compute demand actually looks like
(33:12) The most surprising finding from the research
(38:02) The race to avoid commoditization
(43:35) Agents that outlive their models
Where to find me: Exponential View newsletter: https://www.exponentialview.co/ | Website: https://www.azeemazhar.com/ | LinkedIn: https://www.linkedin.com/in/azhar/ | Twitter/X: https://x.com/azeem
Where to find Jaime: https://epoch.ai or https://epochai.substack.com
Where to find Matt: https://www.ai-street.co
Production by supermix.io and EPIIPLUS1. Production and research: Chantal Smith and Marija Gavrilov. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Big Technology Podcast
Is Something Big Happening?, AI Safety Apocalypse, Anthropic Raises $30 Billion

Feb 13, 2026 · 68:35


Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We're also joined by Steven Adler, ex-OpenAI safety researcher and author of Clear-Eyed AI on Substack. We cover:
1) The viral "Something Big Is Happening" essay
2) What the essay got wrong about recursive self-improving AI
3) Where the essay was right about the pace of change
4) Are we ready for the repercussions of fast-moving AI?
5) The risks in Anthropic's Claude Opus 4.6 model card
6) Do AI models know when they're being tested?
7) An Anthropic researcher leaves and warns "the world is in peril"
8) OpenAI disbands its mission alignment team
9) The risks of AI companionship
10) OpenAI's GPT-4o is mourned on the way out
11) Anthropic raises $30 billion
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b
EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/bigtech. Try it risk-free now with a 30-day money-back guarantee! Learn more about your ad choices. Visit megaphone.fm/adchoices

Million Dollar Landscaper
Stop Writing Boring Job Posts — Sell Your Company Instead - MDL Episode 390

Feb 13, 2026 · 10:31


Hiring's getting harder and the labor shortage isn't going away, but you can control how you sell your company to potential employees. Instead of listing features (like $20/hour or a 7–4:30 schedule), this episode teaches you to flip those into benefits that answer "What's in it for me?" so your job posting stands out. You'll hear real examples (biweekly direct deposit → "never late paycheck"; consistent 7 a.m. start → "no guessing when to show up") and a simple process to turn every feature into a hire-ready benefit. Kati also explains a custom GPT tool he built that writes these benefit-driven job posts in minutes — link in the show notes to join their free AI for Contractors group and grab the template. https://t2m.io/aiforcontractors
Ready to hire this season? Join the free AI for Contractors group (link in show notes), subscribe for more hiring tips, and tune in next week when Scott walks through behavioral interview questions to help you spot reliable, on-time employees.
Join the AI for Contractors group at https://t2m.io/aiforcontractors
Follow Million Dollar Landscaper: Website | Facebook | Instagram | YouTube

The Neuron: AI Explained
BONUS: OpenAI Codex Demo, Learn the Absolute Basics of Coding with AI

Feb 13, 2026 · 120:20


In this week's live-stream replay, we go live for a 2-hour, hands-on deep dive into GPT-5.1 Codex Max with Alexander Embiricos, product lead for OpenAI Codex. You'll walk out feeling like an agentic-coding wizard, even if you're starting from zero. GPT-5.1 Codex Max is OpenAI's latest frontier agentic coding model. It's built on an upgraded reasoning backbone and trained to handle real-world software engineering tasks end to end: PRs, refactors, frontend builds, and deep debugging. It can work independently for hours, compacting its own history so it can refactor entire projects and run multi-hour agent loops without losing context. In this live session, we'll set it up together, build real agents, and push Codex Max to its limits.

Hacker News Recap
February 12th, 2026 | An AI agent published a hit piece on me

Feb 13, 2026 · 15:36


This is a recap of the top 10 posts on Hacker News on February 12, 2026. This podcast was generated by wondercraft.ai
(00:30): An AI agent published a hit piece on me. Original post: https://news.ycombinator.com/item?id=46990729&utm_source=wondercraft_ai
(01:59): Warcraft III Peon Voice Notifications for Claude Code. Original post: https://news.ycombinator.com/item?id=46985151&utm_source=wondercraft_ai
(03:28): AI agent opens a PR, writes a blogpost to shame the maintainer who closes it. Original post: https://news.ycombinator.com/item?id=46987559&utm_source=wondercraft_ai
(04:57): Gemini 3 Deep Think. Original post: https://news.ycombinator.com/item?id=46991240&utm_source=wondercraft_ai
(06:26): GPT‑5.3‑Codex‑Spark. Original post: https://news.ycombinator.com/item?id=46992553&utm_source=wondercraft_ai
(07:55): ai;dr. Original post: https://news.ycombinator.com/item?id=46991394&utm_source=wondercraft_ai
(09:25): Improving 15 LLMs at Coding in One Afternoon. Only the Harness Changed. Original post: https://news.ycombinator.com/item?id=46988596&utm_source=wondercraft_ai
(10:54): Major European payment processor can't send email to Google Workspace users. Original post: https://news.ycombinator.com/item?id=46989217&utm_source=wondercraft_ai
(12:23): US businesses and consumers pay 90% of tariff costs, New York Fed says. Original post: https://news.ycombinator.com/item?id=46990056&utm_source=wondercraft_ai
(13:52): Anthropic raises $30B in Series G funding at $380B post-money valuation. Original post: https://news.ycombinator.com/item?id=46993345&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

GHOSTWRITING USA
Business Memoir- Rocket Fuel Business Accelerator

Feb 13, 2026 · 20:49


The Business of Memoir with Jeffrey Mangus
Most memoirs never reach their potential. They're written for therapy… not for strategy. For catharsis… not for clarity. For memory… not for momentum.
In this episode, Jeffrey Mangus breaks down the truth most writers — and most leaders — don't talk about: memoir is a business. If you're a founder, healthcare executive, physician, or industry leader, your story isn't just personal. It's positioning. It's intellectual property. It's authority architecture.
Jeffrey explores:
• Why vulnerability must be strategic, not performative
• How voice becomes enterprise value
• The economics behind serious memoir work
• Why most books fail before they're even written
• How to turn a memoir into a publishing platform — not just a product
This is not about writing for ego. It's about writing for alignment. If you've ever felt the weight of your story — and wondered how to shape it into something meaningful and market-defining — this episode is for you.
If this resonated: subscribe to the podcast so you don't miss future episodes on authorship, authority, publishing strategy, and building legacy through narrative. If this conversation sparked something for you, share it with a founder, physician, or executive who needs to hear it. And if you're ready to explore your own memoir — not casually, but intentionally — you can book a strategic voice and positioning conversation using the Calendly link inside our GPT. Not a discovery call. A real conversation about building something that lasts.
— Jeffrey Mangus

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

This podcast features Gabriele Corso and Jeremy Wohlwend, co-founders of Boltz and authors of the Boltz Manifesto, discussing the rapid evolution of structural biology models from AlphaFold to their own open-source suite, Boltz-1 and Boltz-2. The central thesis is that while single-chain protein structure prediction is largely "solved" through evolutionary hints, the next frontier lies in modeling complex interactions (protein-ligand, protein-protein) and generative protein design, which Boltz aims to democratize via open-source foundations and scalable infrastructure.
Full Video Pod: on YouTube!
Timestamps
* 00:00 Introduction to Benchmarking and the "Solved" Protein Problem
* 06:48 Evolutionary Hints and Co-evolution in Structure Prediction
* 10:00 The Importance of Protein Function and Disease States
* 15:31 Transitioning from AlphaFold 2 to AlphaFold 3 Capabilities
* 19:48 Generative Modeling vs. Regression in Structural Biology
* 25:00 The "Bitter Lesson" and Specialized AI Architectures
* 29:14 Development Anecdotes: Training Boltz-1 on a Budget
* 32:00 Validation Strategies and the Protein Data Bank (PDB)
* 37:26 The Mission of Boltz: Democratizing Access and Open Source
* 41:43 Building a Self-Sustaining Research Community
* 44:40 Boltz-2 Advancements: Affinity Prediction and Design
* 51:03 BoltzGen: Merging Structure and Sequence Prediction
* 55:18 Large-Scale Wet Lab Validation Results
* 01:02:44 Boltz Lab Product Launch: Agents and Infrastructure
* 01:13:06 Future Directions: Developability and the "Virtual Cell"
* 01:17:35 Interacting with Skeptical Medicinal Chemists
Key Summary
Evolution of Structure Prediction & Evolutionary Hints
* Co-evolutionary Landscapes: The speakers explain that breakthrough progress in single-chain protein prediction relied on decoding evolutionary correlations where mutations in one position necessitate mutations in another to conserve 3D structure.
* Structure vs. Folding: They differentiate between structure prediction (getting the final answer) and folding (the kinetic process of reaching that state), noting that the field is still quite poor at modeling the latter.
* Physics vs. Statistics: RJ posits that while models use evolutionary statistics to find the right "valley" in the energy landscape, they likely possess a "light understanding" of physics to refine the local minimum.
The Shift to Generative Architectures
* Generative Modeling: A key leap in AlphaFold 3 and Boltz-1 was moving from regression (predicting one static coordinate) to a generative diffusion approach that samples from a posterior distribution.
* Handling Uncertainty: This shift allows models to represent multiple conformational states and avoid the "averaging" effect seen in regression models when the ground truth is ambiguous.
* Specialized Architectures: Despite the "bitter lesson" of general-purpose transformers, the speakers argue that equivariant architectures remain vastly superior for biological data due to the inherent 3D geometric constraints of molecules.
Boltz-2 and Generative Protein Design
* Unified Encoding: Boltz-2 (and BoltzGen) treats structure and sequence prediction as a single task by encoding amino acid identities into the atomic composition of the predicted structure.
* Design Specifics: Instead of a sequence, users feed the model blank tokens and a high-level "spec" (e.g., an antibody framework), and the model decodes both the 3D structure and the corresponding amino acids.
* Affinity Prediction: While model confidence is a common metric, Boltz-2 focuses on affinity prediction—quantifying exactly how tightly a designed binder will stick to its target.
Real-World Validation and Productization
* Generalized Validation: To prove the model isn't just "regurgitating" known data, Boltz tested its designs on 9 targets with zero known interactions in the PDB, achieving nanomolar binders for two-thirds of them.
* Boltz Lab Infrastructure: The newly launched Boltz Lab platform provides "agents" for protein and small molecule design, optimized to run 10x faster than open-source versions through proprietary GPU kernels.
* Human-in-the-Loop: The platform is designed to convert skeptical medicinal chemists by allowing them to run parallel screens and use their intuition to filter model outputs.
Transcript
RJ [00:05:35]: But the goal remains to, like, you know, really challenge the models, like, how well do these models generalize? And, you know, we've seen in some of the latest CASP competitions, like, while we've become really, really good at proteins, especially monomeric proteins, you know, other modalities still remain pretty difficult. So it's really essential, you know, in the field that there are, like, these efforts to gather, you know, benchmarks that are challenging. So it keeps us in line, you know, about what the models can do or not.
Gabriel [00:06:26]: Yeah, it's interesting you say that, like, in some sense, CASP, you know, at CASP 14, a problem was solved and, like, pretty comprehensively, right? But at the same time, it was really only the beginning. So you can say, like, what was the specific problem you would argue was solved? And then, like, you know, what is remaining, which is probably quite open.
RJ [00:06:48]: I think we'll steer away from the term solved, because we have many friends in the community who get pretty upset at that word. And I think, you know, fairly so. But the problem that was, you know, that a lot of progress was made on was the ability to predict the structure of single chain proteins. So proteins can, like, be composed of many chains. And single chain proteins are, you know, just a single sequence of amino acids.
And one of the reasons that we've been able to make such progress is also because we take a lot of hints from evolution. So the way the models work is that, you know, they sort of decode a lot of hints. That comes from evolutionary landscapes. So if you have, like, you know, some protein in an animal, and you go find the similar protein across, like, you know, different organisms, you might find different mutations in them. And as it turns out, if you take a lot of the sequences together, and you analyze them, you see that some positions in the sequence tend to evolve at the same time as other positions in the sequence, sort of this, like, correlation between different positions. And it turns out that that is typically a hint that these two positions are close in three dimension. So part of the, you know, part of the breakthrough has been, like, our ability to also decode that very, very effectively. But what it implies also is that in absence of that co-evolutionary landscape, the models don't quite perform as well. And so, you know, I think when that information is available, maybe one could say, you know, the problem is, like, somewhat solved. From the perspective of structure prediction, when it isn't, it's much more challenging. And I think it's also worth also differentiating the, sometimes we confound a little bit, structure prediction and folding. Folding is the more complex process of actually understanding, like, how it goes from, like, this disordered state into, like, a structured, like, state. And that I don't think we've made that much progress on. But the idea of, like, yeah, going straight to the answer, we've become pretty good at.Brandon [00:08:49]: So there's this protein that is, like, just a long chain and it folds up. Yeah. And so we're good at getting from that long chain in whatever form it was originally to the thing. But we don't know how it necessarily gets to that state. And there might be intermediate states that it's in sometimes that we're not aware of.RJ [00:09:10]: That's right. And that relates also to, like, you know, our general ability to model, like, the different, you know, proteins are not static. They move, they take different shapes based on their energy states. And I think we are, also not that good at understanding the different states that the protein can be in and at what frequency, what probability. So I think the two problems are quite related in some ways. Still a lot to solve. But I think it was very surprising at the time, you know, that even with these evolutionary hints that we were able to, you know, to make such dramatic progress.Brandon [00:09:45]: So I want to ask, why does the intermediate states matter? But first, I kind of want to understand, why do we care? What proteins are shaped like?Gabriel [00:09:54]: Yeah, I mean, the proteins are kind of the machines of our body. You know, the way that all the processes that we have in our cells, you know, work is typically through proteins, sometimes other molecules, sort of intermediate interactions. And through that interactions, we have all sorts of cell functions. And so when we try to understand, you know, a lot of biology, how our body works, how disease work. So we often try to boil it down to, okay, what is going right in case of, you know, our normal biological function and what is going wrong in case of the disease state. And we boil it down to kind of, you know, proteins and kind of other molecules and their interaction. 
And so when we try predicting the structure of proteins, it's critical to, you know, have an understanding of kind of those interactions. It's a bit like seeing the difference between... Having kind of a list of parts that you would put it in a car and seeing kind of the car in its final form, you know, seeing the car really helps you understand what it does. On the other hand, kind of going to your question of, you know, why do we care about, you know, how the protein falls or, you know, how the car is made to some extent is that, you know, sometimes when something goes wrong, you know, there are, you know, cases of, you know, proteins misfolding. In some diseases and so on, if we don't understand this folding process, we don't really know how to intervene.RJ [00:11:30]: There's this nice line in the, I think it's in the Alpha Fold 2 manuscript, where they sort of discuss also like why we even hopeful that we can target the problem in the first place. And then there's this notion that like, well, four proteins that fold. The folding process is almost instantaneous, which is a strong, like, you know, signal that like, yeah, like we should, we might be... able to predict that this very like constrained thing that, that the protein does so quickly. And of course that's not the case for, you know, for, for all proteins. And there's a lot of like really interesting mechanisms in the cells, but yeah, I remember reading that and thought, yeah, that's somewhat of an insightful point.Gabriel [00:12:10]: I think one of the interesting things about the protein folding problem is that it used to be actually studied. And part of the reason why people thought it was impossible, it used to be studied as kind of like a classical example. Of like an MP problem. Uh, like there are so many different, you know, type of, you know, shapes that, you know, this amino acid could take. And so, this grows combinatorially with the size of the sequence. And so there used to be kind of a lot of actually kind of more theoretical computer science thinking about and studying protein folding as an MP problem. And so it was very surprising also from that perspective, kind of seeing. Machine learning so clear, there is some, you know, signal in those sequences, through evolution, but also through kind of other things that, you know, us as humans, we're probably not really able to, uh, to understand, but that is, models I've, I've learned.Brandon [00:13:07]: And so Andrew White, we were talking to him a few weeks ago and he said that he was following the development of this and that there were actually ASICs that were developed just to solve this problem. So, again, that there were. There were many, many, many millions of computational hours spent trying to solve this problem before AlphaFold. And just to be clear, one thing that you mentioned was that there's this kind of co-evolution of mutations and that you see this again and again in different species. So explain why does that give us a good hint that they're close by to each other? Yeah.RJ [00:13:41]: Um, like think of it this way that, you know, if I have, you know, some amino acid that mutates, it's going to impact everything around it. Right. In three dimensions. And so it's almost like the protein through several, probably random mutations and evolution, like, you know, ends up sort of figuring out that this other amino acid needs to change as well for the structure to be conserved. 
Uh, so this whole principle is that the structure is probably largely conserved, you know, because there's this function associated with it. And so it's really sort of like different positions compensating for, for each other. I see.Brandon [00:14:17]: Those hints in aggregate give us a lot. Yeah. So you can start to look at what kinds of information about what is close to each other, and then you can start to look at what kinds of folds are possible given the structure and then what is the end state.RJ [00:14:30]: And therefore you can make a lot of inferences about what the actual total shape is. Yeah, that's right. It's almost like, you know, you have this big, like three dimensional Valley, you know, where you're sort of trying to find like these like low energy states and there's so much to search through. That's almost overwhelming. But these hints, they sort of maybe put you in. An area of the space that's already like, kind of close to the solution, maybe not quite there yet. And, and there's always this question of like, how much physics are these models learning, you know, versus like, just pure like statistics. And like, I think one of the thing, at least I believe is that once you're in that sort of approximate area of the solution space, then the models have like some understanding, you know, of how to get you to like, you know, the lower energy, uh, low energy state. And so maybe you have some, some light understanding. Of physics, but maybe not quite enough, you know, to know how to like navigate the whole space. Right. Okay.Brandon [00:15:25]: So we need to give it these hints to kind of get into the right Valley and then it finds the, the minimum or something. Yeah.Gabriel [00:15:31]: One interesting explanation about our awful free works that I think it's quite insightful, of course, doesn't cover kind of the entirety of, of what awful does that is, um, they're going to borrow from, uh, Sergio Chinico for MIT. So he sees kind of awful. Then the interesting thing about awful is God. This very peculiar architecture that we have seen, you know, used, and this architecture operates on this, you know, pairwise context between amino acids. And so the idea is that probably the MSA gives you this first hint about what potential amino acids are close to each other. MSA is most multiple sequence alignment. Exactly. Yeah. Exactly. This evolutionary information. Yeah. And, you know, from this evolutionary information about potential contacts, then is almost as if the model is. of running some kind of, you know, diastro algorithm where it's sort of decoding, okay, these have to be closed. Okay. Then if these are closed and this is connected to this, then this has to be somewhat closed. And so you decode this, that becomes basically a pairwise kind of distance matrix. And then from this rough pairwise distance matrix, you decode kind of theBrandon [00:16:42]: actual potential structure. Interesting. So there's kind of two different things going on in the kind of coarse grain and then the fine grain optimizations. Interesting. Yeah. Very cool.Gabriel [00:16:53]: Yeah. You mentioned AlphaFold3. So maybe we have a good time to move on to that. So yeah, AlphaFold2 came out and it was like, I think fairly groundbreaking for this field. Everyone got very excited. A few years later, AlphaFold3 came out and maybe for some more history, like what were the advancements in AlphaFold3? And then I think maybe we'll, after that, we'll talk a bit about the sort of how it connects to Bolt. But anyway. 
Yeah. So after AlphaFold2 came out, you know, Jeremy and I got into the field and with many others, you know, the clear problem that, you know, was, you know, obvious after that was, okay, now we can do individual chains. Can we do interactions, interaction, different proteins, proteins with small molecules, proteins with other molecules. And so. So why are interactions important? Interactions are important because to some extent that's kind of the way that, you know, these machines, you know, these proteins have a function, you know, the function comes by the way that they interact with other proteins and other molecules. Actually, in the first place, you know, the individual machines are often, as Jeremy was mentioning, not made of a single chain, but they're made of the multiple chains. And then these multiple chains interact with other molecules to give the function to those. And on the other hand, you know, when we try to intervene of these interactions, think about like a disease, think about like a, a biosensor or many other ways we are trying to design the molecules or proteins that interact in a particular way with what we would call a target protein or target. You know, this problem after AlphaVol2, you know, became clear, kind of one of the biggest problems in the field to, to solve many groups, including kind of ours and others, you know, started making some kind of contributions to this problem of trying to model these interactions. And AlphaVol3 was, you know, was a significant advancement on the problem of modeling interactions. And one of the interesting thing that they were able to do while, you know, some of the rest of the field that really tried to try to model different interactions separately, you know, how protein interacts with small molecules, how protein interacts with other proteins, how RNA or DNA have their structure, they put everything together and, you know, train very large models with a lot of advances, including kind of changing kind of systems. Some of the key architectural choices and managed to get a single model that was able to set this new state-of-the-art performance across all of these different kind of modalities, whether that was protein, small molecules is critical to developing kind of new drugs, protein, protein, understanding, you know, interactions of, you know, proteins with RNA and DNAs and so on.Brandon [00:19:39]: Just to satisfy the AI engineers in the audience, what were some of the key architectural and data, data changes that made that possible?Gabriel [00:19:48]: Yeah, so one critical one that was not necessarily just unique to AlphaFold3, but there were actually a few other teams, including ours in the field that proposed this, was moving from, you know, modeling structure prediction as a regression problem. So where there is a single answer and you're trying to shoot for that answer to a generative modeling problem where you have a posterior distribution of possible structures and you're trying to sample this distribution. And this achieves two things. One is it starts to allow us to try to model more dynamic systems. As we said, you know, some of these structures can actually take multiple structures. And so, you know, you can now model that, you know, through kind of modeling the entire distribution. 
But on the second hand, from more kind of core modeling questions, when you move from a regression problem to a generative modeling problem, you are really tackling the way that you think about uncertainty in the model in a different way. So if you think about, you know, I'm undecided between different answers, what's going to happen in a regression model is that, you know, I'm going to try to make an average of those different kind of answers that I had in mind. When you have a generative model, what you're going to do is, you know, sample all these different answers and then maybe use separate models to analyze those different answers and pick out the best. So that was kind of one of the critical improvement. The other improvement is that they significantly simplified, to some extent, the architecture, especially of the final model that takes kind of those pairwise representations and turns them into an actual structure. And that now looks a lot more like a more traditional transformer than, you know, like a very specialized equivariant architecture that it was in AlphaFold3.Brandon [00:21:41]: So this is a bitter lesson, a little bit.Gabriel [00:21:45]: There is some aspect of a bitter lesson, but the interesting thing is that it's very far from, you know, being like a simple transformer. This field is one of the, I argue, very few fields in applied machine learning where we still have kind of architecture that are very specialized. And, you know, there are many people that have tried to replace these architectures with, you know, simple transformers. And, you know, there is a lot of debate in the field, but I think kind of that most of the consensus is that, you know, the performance... that we get from the specialized architecture is vastly superior than what we get through a single transformer. Another interesting thing that I think on the staying on the modeling machine learning side, which I think it's somewhat counterintuitive seeing some of the other kind of fields and applications is that scaling hasn't really worked kind of the same in this field. Now, you know, models like AlphaFold2 and AlphaFold3 are, you know, still very large models.RJ [00:29:14]: in a place, I think, where we had, you know, some experience working in, you know, with the data and working with this type of models. And I think that put us already in like a good place to, you know, to produce it quickly. And, you know, and I would even say, like, I think we could have done it quicker. The problem was like, for a while, we didn't really have the compute. And so we couldn't really train the model. And actually, we only trained the big model once. That's how much compute we had. We could only train it once. And so like, while the model was training, we were like, finding bugs left and right. A lot of them that I wrote. And like, I remember like, I was like, sort of like, you know, doing like, surgery in the middle, like stopping the run, making the fix, like relaunching. And yeah, we never actually went back to the start. We just like kept training it with like the bug fixes along the way, which was impossible to reproduce now. Yeah, yeah, no, that model is like, has gone through such a curriculum that, you know, learned some weird stuff. But yeah, somehow by miracle, it worked out.Gabriel [00:30:13]: The other funny thing is that the way that we were training, most of that model was through a cluster from the Department of Energy. But that's sort of like a shared cluster that many groups use. 
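To make the "averaging" point above concrete, here is a toy sketch, assuming only NumPy and a made-up one-dimensional coordinate for a flexible residue; it is not Boltz or AlphaFold code. A mean-squared-error regressor trained on a bimodal target predicts the average of the two conformations, a coordinate the molecule never actually occupies, whereas sampling from even a crude generative model returns one realistic state per draw.

```python
# Toy illustration (not Boltz/AlphaFold code): why regression "averages"
# ambiguous structures while a generative model samples distinct answers.
import numpy as np

rng = np.random.default_rng(0)

# Pretend a flexible residue sits near x = -1.0 or x = +1.0 with equal
# probability (two conformational states), plus a little noise.
observations = np.concatenate([
    rng.normal(-1.0, 0.05, 500),
    rng.normal(+1.0, 0.05, 500),
])

# Regression view: the MSE-optimal single prediction is the mean,
# which lands near 0.0, a coordinate the residue never actually occupies.
regression_prediction = observations.mean()

# Generative view: sample from a crude two-component model, so each draw
# lands near one of the real states instead of their average.
def sample_generative(n):
    modes = rng.choice([-1.0, 1.0], size=n)        # pick a conformational state
    return modes + rng.normal(0.0, 0.05, size=n)   # add within-state noise

draws = sample_generative(5)
print(f"regression answer: {regression_prediction:+.3f}  (never observed)")
print("generative draws :", np.round(draws, 3))    # each one near -1 or +1
```

The same logic motivates the sample-then-rank pattern discussed in this conversation: draw several candidate structures and let a separate confidence or ranking model choose among them, rather than regressing to a blurred average.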
And so we were basically training the model for two days, and then it would go back to the queue and stay a week in the queue. Oh, yeah. And so it was pretty painful. And so we actually kind of towards the end with Evan, the CEO of Genesis, and basically, you know, I was telling him a bit about the project and, you know, kind of telling him about this frustration with the compute. And so luckily, you know, he offered to kind of help. And so we, we got the help from Genesis to, you know, finish up the model. Otherwise, it probably would have taken a couple of extra weeks.Brandon [00:30:57]: Yeah, yeah.Brandon [00:31:02]: And then, and then there's some progression from there.Gabriel [00:31:06]: Yeah, so I would say kind of that, both one, but also kind of these other kind of set of models that came around the same time, were kind of approaching were a big leap from, you know, kind of the previous kind of open source models, and, you know, kind of really kind of approaching the level of AlphaVault 3. But I would still say that, you know, even to this day, there are, you know, some... specific instances where AlphaVault 3 works better. I think one common example is antibody antigen prediction, where, you know, AlphaVault 3 still seems to have an edge in many situations. Obviously, these are somewhat different models. They are, you know, you run them, you obtain different results. So it's, it's not always the case that one model is better than the other, but kind of in aggregate, we still, especially at the time.Brandon [00:32:00]: So AlphaVault 3 is, you know, still having a bit of an edge. We should talk about this more when we talk about Boltzgen, but like, how do you know one is, one model is better than the other? Like you, so you, I make a prediction, you make a prediction, like, how do you know?Gabriel [00:32:11]: Yeah, so easily, you know, the, the great thing about kind of structural prediction and, you know, once we're going to go into the design space of designing new small molecule, new proteins, this becomes a lot more complex. But a great thing about structural prediction is that a bit like, you know, CASP was doing, basically the way that you can evaluate them is that, you know, you train... You know, you train a model on a structure that was, you know, released across the field up until a certain time. And, you know, one of the things that we didn't talk about that was really critical in all this development is the PDB, which is the Protein Data Bank. It's this common resources, basically common database where every biologist publishes their structures. And so we can, you know, train on, you know, all the structures that were put in the PDB until a certain date. And then... And then we basically look for recent structures, okay, which structures look pretty different from anything that was published before, because we really want to try to understand generalization.Brandon [00:33:13]: And then on this new structure, we evaluate all these different models. And so you just know when AlphaFold3 was trained, you know, when you're, you intentionally trained to the same date or something like that. Exactly. Right. Yeah.Gabriel [00:33:24]: And so this is kind of the way that you can somewhat easily kind of compare these models, obviously, that assumes that, you know, the training. You've always been very passionate about validation. I remember like DiffDoc, and then there was like DiffDocL and DocGen. You've thought very carefully about this in the past. 
Like, actually, I think DocGen is like a really funny story that I think, I don't know if you want to talk about that. It's an interesting like... Yeah, I think one of the amazing things about putting things open source is that we get a ton of feedback from the field. And, you know, sometimes we get kind of great feedback of people. Really like... But honestly, most of the times, you know, to be honest, that's also maybe the most useful feedback is, you know, people sharing about where it doesn't work. And so, you know, at the end of the day, it's critical. And this is also something, you know, across other fields of machine learning. It's always critical to set, to do progress in machine learning, set clear benchmarks. And as, you know, you start doing progress of certain benchmarks, then, you know, you need to improve the benchmarks and make them harder and harder. And this is kind of the progression of, you know, how the field operates. And so, you know, the example of DocGen was, you know, we published this initial model called DiffDoc in my first year of PhD, which was sort of like, you know, one of the early models to try to predict kind of interactions between proteins, small molecules, that we bought a year after AlphaFold2 was published. And now, on the one hand, you know, on these benchmarks that we were using at the time, DiffDoc was doing really well, kind of, you know, outperforming kind of some of the traditional physics-based methods. But on the other hand, you know, when we started, you know, kind of giving these tools to kind of many biologists, and one example was that we collaborated with was the group of Nick Polizzi at Harvard. We noticed, started noticing that there was this clear, pattern where four proteins that were very different from the ones that we're trained on, the models was, was struggling. And so, you know, that seemed clear that, you know, this is probably kind of where we should, you know, put our focus on. And so we first developed, you know, with Nick and his group, a new benchmark, and then, you know, went after and said, okay, what can we change? And kind of about the current architecture to improve this pattern and generalization. And this is the same that, you know, we're still doing today, you know, kind of, where does the model not work, you know, and then, you know, once we have that benchmark, you know, let's try to, through everything we, any ideas that we have of the problem.RJ [00:36:15]: And there's a lot of like healthy skepticism in the field, which I think, you know, is, is, is great. And I think, you know, it's very clear that there's a ton of things, the models don't really work well on, but I think one thing that's probably, you know, undeniable is just like the pace of, pace of progress, you know, and how, how much better we're getting, you know, every year. And so I think if you, you know, if you assume, you know, any constant, you know, rate of progress moving forward, I think things are going to look pretty cool at some point in the future.Gabriel [00:36:42]: ChatGPT was only three years ago. Yeah, I mean, it's wild, right?RJ [00:36:45]: Like, yeah, yeah, yeah, it's one of those things. Like, you've been doing this. Being in the field, you don't see it coming, you know? And like, I think, yeah, hopefully we'll, you know, we'll, we'll continue to have as much progress we've had the past few years.Brandon [00:36:55]: So this is maybe an aside, but I'm really curious, you get this great feedback from the, from the community, right? 
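As a rough sketch of the date-cutoff evaluation described above (train on PDB structures released before a cutoff, then benchmark only on newer entries that look unlike anything in training), here is a minimal Python example; the field names and the 30% sequence-identity threshold are assumptions made for illustration, not the actual PDB, CASP, or Boltz tooling.

```python
# Sketch of a temporal-split benchmark in the spirit of the PDB-based
# evaluation described above. Entry fields and the identity threshold are
# illustrative assumptions, not a real pipeline.
from datetime import date

def split_by_date(entries, cutoff):
    """Train on structures released before `cutoff`; hold out the rest."""
    train = [e for e in entries if e["release_date"] < cutoff]
    held_out = [e for e in entries if e["release_date"] >= cutoff]
    return train, held_out

def generalization_set(held_out, max_identity=0.30):
    """Keep only held-out structures dissimilar to everything in training,
    so the benchmark measures generalization rather than memorization."""
    return [e for e in held_out
            if e["max_seq_identity_to_train"] <= max_identity]

entries = [
    {"id": "7ABC", "release_date": date(2020, 5, 1), "max_seq_identity_to_train": 0.95},
    {"id": "8XYZ", "release_date": date(2023, 9, 1), "max_seq_identity_to_train": 0.22},
    {"id": "8QRS", "release_date": date(2024, 2, 1), "max_seq_identity_to_train": 0.88},
]

train, held_out = split_by_date(entries, cutoff=date(2021, 9, 30))
benchmark = generalization_set(held_out)
print("train:", [e["id"] for e in train], "benchmark:", [e["id"] for e in benchmark])
```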
By being open source. My question is partly like, okay, yeah, if you open source and everyone can copy what you did, but it's also maybe balancing priorities, right? Where you, like all my customers are saying. I want this, there's all these problems with the model. Yeah, yeah. But my customers don't care, right? So like, how do you, how do you think about that? Yeah.Gabriel [00:37:26]: So I would say a couple of things. One is, you know, part of our goal with Bolts and, you know, this is also kind of established as kind of the mission of the public benefit company that we started is to democratize the access to these tools. But one of the reasons why we realized that Bolts needed to be a company, it couldn't just be an academic project is that putting a model on GitHub is definitely not enough to get, you know, chemists and biologists, you know, across, you know, both academia, biotech and pharma to use your model to, in their therapeutic programs. And so a lot of what we think about, you know, at Bolts beyond kind of the, just the models is thinking about all the layers. The layers that come on top of the models to get, you know, from, you know, those models to something that can really enable scientists in the industry. And so that goes, you know, into building kind of the right kind of workflows that take in kind of, for example, the data and try to answer kind of directly that those problems that, you know, the chemists and the biologists are asking, and then also kind of building the infrastructure. And so this to say that, you know, even with models fully open. You know, we see a ton of potential for, you know, products in the space and the critical part about a product is that even, you know, for example, with an open source model, you know, running the model is not free, you know, as we were saying, these are pretty expensive model and especially, and maybe we'll get into this, you know, these days we're seeing kind of pretty dramatic inference time scaling of these models where, you know, the more you run them, the better the results are. But there, you know, you see. You start getting into a point that compute and compute costs becomes a critical factor. And so putting a lot of work into building the right kind of infrastructure, building the optimizations and so on really allows us to provide, you know, a much better service potentially to the open source models. That to say, you know, even though, you know, with a product, we can provide a much better service. I do still think, and we will continue to put a lot of our models open source because the critical kind of role. I think of open source. Models is, you know, helping kind of the community progress on the research and, you know, from which we, we all benefit. And so, you know, we'll continue to on the one hand, you know, put some of our kind of base models open source so that the field can, can be on top of it. And, you know, as we discussed earlier, we learn a ton from, you know, the way that the field uses and builds on top of our models, but then, you know, try to build a product that gives the best experience possible to scientists. So that, you know, like a chemist or a biologist doesn't need to, you know, spin off a GPU and, you know, set up, you know, our open source model in a particular way, but can just, you know, a bit like, you know, I, even though I am a computer scientist, machine learning scientist, I don't necessarily, you know, take a open source LLM and try to kind of spin it off. 
But, you know, I just maybe open a GPT app or a cloud code and just use it as an amazing product. We kind of want to give the same experience. So this front world.Brandon [00:40:40]: I heard a good analogy yesterday that a surgeon doesn't want the hospital to design a scalpel, right?Brandon [00:40:48]: So just buy the scalpel.RJ [00:40:50]: You wouldn't believe like the number of people, even like in my short time, you know, between AlphaFold3 coming out and the end of the PhD, like the number of people that would like reach out just for like us to like run AlphaFold3 for them, you know, or things like that. Just because like, you know, bolts in our case, you know, just because it's like. It's like not that easy, you know, to do that, you know, if you're not a computational person. And I think like part of the goal here is also that, you know, we continue to obviously build the interface with computational folks, but that, you know, the models are also accessible to like a larger, broader audience. And then that comes from like, you know, good interfaces and stuff like that.Gabriel [00:41:27]: I think one like really interesting thing about bolts is that with the release of it, you didn't just release a model, but you created a community. Yeah. Did that community, it grew very quickly. Did that surprise you? And like, what is the evolution of that community and how is that fed into bolts?RJ [00:41:43]: If you look at its growth, it's like very much like when we release a new model, it's like, there's a big, big jump, but yeah, it's, I mean, it's been great. You know, we have a Slack community that has like thousands of people on it. And it's actually like self-sustaining now, which is like the really nice part because, you know, it's, it's almost overwhelming, I think, you know, to be able to like answer everyone's questions and help. It's really difficult, you know. The, the few people that we were, but it ended up that like, you know, people would answer each other's questions and like, sort of like, you know, help one another. And so the Slack, you know, has been like kind of, yeah, self, self-sustaining and that's been, it's been really cool to see.RJ [00:42:21]: And, you know, that's, that's for like the Slack part, but then also obviously on GitHub as well. We've had like a nice, nice community. You know, I think we also aspire to be even more active on it, you know, than we've been in the past six months, which has been like a bit challenging, you know, for us. But. Yeah, the community has been, has been really great and, you know, there's a lot of papers also that have come out with like new evolutions on top of bolts and it's surprised us to some degree because like there's a lot of models out there. And I think like, you know, sort of people converging on that was, was really cool. And, you know, I think it speaks also, I think, to the importance of like, you know, when, when you put code out, like to try to put a lot of emphasis and like making it like as easy to use as possible and something we thought a lot about when we released the code base. You know, it's far from perfect, but, you know.Brandon [00:43:07]: Do you think that that was one of the factors that caused your community to grow is just the focus on easy to use, make it accessible? I think so.RJ [00:43:14]: Yeah. And we've, we've heard it from a few people over the, over the, over the years now. And, you know, and some people still think it should be a lot nicer and they're, and they're right. And they're right. 
But yeah, I think it was, you know, at the time, maybe a little bit easier than, than other things.Gabriel [00:43:29]: The other thing part, I think led to, to the community and to some extent, I think, you know, like the somewhat the trust in the community. Kind of what we, what we put out is the fact that, you know, it's not really been kind of, you know, one model, but, and maybe we'll talk about it, you know, after Boltz 1, you know, there were maybe another couple of models kind of released, you know, or open source kind of soon after. We kind of continued kind of that open source journey or at least Boltz 2, where we are not only improving kind of structure prediction, but also starting to do affinity predictions, understanding kind of the strength of the interactions between these different models, which is this critical component. critical property that you often want to optimize in discovery programs. And then, you know, more recently also kind of protein design model. And so we've sort of been building this suite of, of models that come together, interact with one another, where, you know, kind of, there is almost an expectation that, you know, we, we take very at heart of, you know, always having kind of, you know, across kind of the entire suite of different tasks, the best or across the best. model out there so that it's sort of like our open source tool can be kind of the go-to model for everybody in the, in the industry. I really want to talk about Boltz 2, but before that, one last question in this direction, was there anything about the community which surprised you? Were there any, like, someone was doing something and you're like, why would you do that? That's crazy. Or that's actually genius. And I never would have thought about that.RJ [00:45:01]: I mean, we've had many contributions. I think like some of the. Interesting ones, like, I mean, we had, you know, this one individual who like wrote like a complex GPU kernel, you know, for part of the architecture on a piece of, the funny thing is like that piece of the architecture had been there since AlphaFold 2, and I don't know why it took Boltz for this, you know, for this person to, you know, to decide to do it, but that was like a really great contribution. We've had a bunch of others, like, you know, people figuring out like ways to, you know, hack the model to do something. They click peptides, like, you know, there's, I don't know if there's any other interesting ones come to mind.Gabriel [00:45:41]: One cool one, and this was, you know, something that initially was proposed as, you know, as a message in the Slack channel by Tim O'Donnell was basically, he was, you know, there are some cases, especially, for example, we discussed, you know, antibody-antigen interactions where the models don't necessarily kind of get the right answer. What he noticed is that, you know, the models were somewhat stuck into predicting kind of the antibodies. And so he basically ran the experiments in this model, you can condition, basically, you can give hints. And so he basically gave, you know, random hints to the model, basically, okay, you should bind to this residue, you should bind to the first residue, or you should bind to the 11th residue, or you should bind to the 21st residue, you know, basically every 10 residues scanning the entire antigen.Brandon [00:46:33]: Residues are the...Gabriel [00:46:34]: The amino acids. The amino acids, yeah. So the first amino acids. The 11 amino acids, and so on. 
So it's sort of like doing a scan, and then, you know, conditioning the model to predict all of them, and then looking at the confidence of the model in each of those cases and taking the top. And so it's a very, somewhat crude way of doing inference-time search. But surprisingly, you know, for antibody-antigen prediction, it actually helped quite a bit. And so there are some interesting ideas where, obviously, as you're developing the model, you say, wow, why would the model be so dumb? But, you know, it's very interesting. And that leads you to also start thinking about, okay, can I do this not with brute force, but in a smarter way?
RJ [00:47:22]: And so we've also done a lot of work in that direction. And that speaks to, like, the power of scoring. We're seeing that a lot. I'm sure we'll talk about it more when we talk about BoltzGen. But, you know, our ability to take a structure and determine that that structure is, like... good, you know, somewhat accurate, whether that's a single chain or an interaction, is a really powerful way of improving the models. Sort of like, you know, if you can sample a ton, and you assume that if you sample enough you're likely to have the good structure, then it really just becomes a ranking problem. And, you know, part of the inference-time scaling that Gabri was talking about is very much that. It's like, the more we sample, the more the ranking model ends up finding something it really likes. And so I think our ability to get better at ranking is also what's going to enable sort of the next big, big breakthroughs.
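The epitope-scanning trick and the sample-then-rank idea described above come down to a few lines of orchestration. Here is a minimal sketch, assuming a hypothetical `predict_complex` wrapper around a Boltz-style predictor that accepts a binding-site hint and reports a confidence score; the function name, its arguments, and the random placeholder score are illustrative assumptions, not the real API.

```python
# A minimal sketch of the residue-scanning and rank-by-confidence ideas described above.
# `predict_complex` is a hypothetical stand-in for a Boltz-style model call that accepts
# a binding-site hint and returns a structure plus a confidence score; its name, signature,
# and the random placeholder score are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional
import random


@dataclass
class Prediction:
    structure: object    # predicted coordinates, left opaque here
    confidence: float    # the model's self-reported confidence, higher is better


def predict_complex(antigen: str, binder: str, contact_hint: Optional[int]) -> Prediction:
    """Placeholder for the real predictor call; returns a random score so the sketch runs."""
    return Prediction(structure=None, confidence=random.random())


def scan_contact_hints(antigen: str, binder: str, stride: int = 10) -> Prediction:
    """Condition on a different residue hint every `stride` positions, then keep the
    prediction the model itself is most confident about (crude inference-time search)."""
    candidates = [
        predict_complex(antigen, binder, contact_hint=i)
        for i in range(0, len(antigen), stride)
    ]
    return max(candidates, key=lambda p: p.confidence)


best = scan_contact_hints(antigen="M" * 300, binder="Q" * 120)
```

Swapping the placeholder for a real model call turns this into the same sample-then-rank loop the speakers describe for inference-time scaling more generally.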
Brandon [00:48:17]: Interesting. But I guess, my understanding is there's a diffusion model and you generate some stuff, and then, I guess, it's just what you said, right? Then you rank it using a score and then you finally... And so, like, can you talk about those different parts? Yeah.
Gabriel [00:48:34]: So, first of all, one of the critical beliefs that we had, you know, also when we started working on Boltz 1, was that structure prediction models are somewhat, you know, our field's version of foundation models, learning about how proteins and other molecules interact. And then we can leverage that learning to do all sorts of other things. And so with Boltz 2, we leverage that learning to do affinity predictions: understanding, you know, if I give you this protein and this molecule, how tight is that interaction? For BoltzGen, what we did was take that foundation model and then fine-tune it to predict entirely new proteins. And the way that basically works is that, for the protein that you're designing, instead of feeding in an actual sequence, you feed in a set of blank tokens. And you train the model to predict both the structure of that protein and also what the different amino acids of that protein are. And so basically the way that BoltzGen operates is that you feed a target protein that you may want to bind to, or, you know, another DNA, RNA. And then you feed the high-level design specification of, you know, what you want your new protein to be. For example, it could be an antibody with a particular framework. It could be a peptide. It could be many other things. And that's with natural language, or? And that's, you know, basically prompting. We have this sort of spec that you specify, and you feed this spec to the model. And then the model translates this into a set of tokens, a set of conditioning to the model, a set of blank tokens. And then, basically, it decodes, as part of the diffusion model, a new structure and a new sequence for your protein. And then we take that and, as Jeremy was saying, we try to score it: you know, how good of a binder is it to that original target?
Brandon [00:50:51]: You're using basically Boltz to predict the folding and the affinity to that molecule. And then that kind of gives you a score? Exactly.
Gabriel [00:51:03]: So you use this model to predict the folding, and then you do two things. One is that you re-predict the structure with something like Boltz 2, and then you basically compare that structure with what the design model predicted. And this is what the field calls consistency. Basically, you want to make sure that the structure that you're predicting is actually what you're trying to design, and that gives you much better confidence that it's a good design. So that's the first filtering. And the second filtering that we did as part of the pipeline that was released is that we look at the confidence that the model has in the structure. Now, unfortunately, going to your question about predicting affinity, confidence is not a very good predictor of affinity. And so one of the things where we've actually made a ton of progress since we released Boltz 2, and we have some new results that we are going to announce soon, is the ability to get much better hit rates when, instead of trying to rely on the confidence of the model, we are actually directly trying to predict the affinity of that interaction.
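Put together, the blank-token design spec, the joint structure-and-sequence decode, and the two filters form a simple generate-then-filter loop. The sketch below is a schematic under stated assumptions: `generate`, `refold`, and `rmsd` are hypothetical stand-ins for a BoltzGen-style sampler, a Boltz-2-style re-prediction, and a structure comparison, and the cutoff values are illustrative rather than values from the paper.

```python
# A schematic of the design loop described above, under stated assumptions: `generate`
# stands in for a BoltzGen-style call that conditions on the target, fills the designed
# chain with blank tokens, and returns a decoded sequence plus a structure; `refold`
# stands in for re-predicting that sequence with a Boltz-2-style model; `rmsd` compares
# the two structures. None of these names are the real APIs, and the thresholds are
# illustrative, not values from the paper.
from typing import Callable, List, NamedTuple, Tuple


class Design(NamedTuple):
    sequence: str       # decoded amino acids for the designed chain
    structure: object   # predicted structure of the designed complex


def design_and_filter(
    target_sequence: str,
    design_length: int,
    n_samples: int,
    generate: Callable[[str, int], Design],
    refold: Callable[[str, str], Tuple[object, float]],   # -> (structure, confidence)
    rmsd: Callable[[object, object], float],
    max_rmsd: float = 2.0,                                 # illustrative consistency cutoff (angstroms)
    min_confidence: float = 0.8,                           # illustrative confidence cutoff
) -> List[Design]:
    """Sample many designs, keep the self-consistent and confidently predicted ones,
    and return them ranked by confidence. An affinity head, which the speakers describe
    as the better ranking signal, would slot in as an extra scoring step at the end."""
    survivors = []
    for _ in range(n_samples):
        design = generate(target_sequence, design_length)            # blank tokens -> sequence + structure
        refolded, confidence = refold(target_sequence, design.sequence)
        consistent = rmsd(refolded, design.structure) <= max_rmsd    # consistency filter
        if consistent and confidence >= min_confidence:              # confidence filter
            survivors.append((confidence, design))
    survivors.sort(key=lambda pair: pair[0], reverse=True)
    return [design for _, design in survivors]
```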
Brandon [00:52:03]: Okay. Just backing up a minute. So your diffusion model actually predicts not only the protein sequence, but also the folding of it? Exactly.
Gabriel [00:52:32]: And actually, one of the big different things that we did compared to other models in the space, and there were some papers that had already done this before, but we really scaled it up, was basically merging the structure prediction and the sequence prediction into almost the same task. And so the way that it works is that basically the only thing that you're doing is predicting the structure. So the only supervision we give you is supervision on the structure, but because the structure is atomic, and the different amino acids have a different atomic composition, from the way that you place the atoms we also understand not only the structure that you wanted, but also the identity of the amino acid that the model believed was there. And so, instead of having these two supervision signals, one discrete, one continuous, that somewhat don't interact well together, we built kind of an encoding of sequences in structures that allows us to use exactly the same supervision signal that we were using for Boltz 2, which is, you know, largely similar to what AlphaFold3 proposed, and which is very scalable. And we can use that to design new proteins. Oh, interesting.
RJ [00:53:58]: Maybe a quick shout out to Hannes Stark on our team, who did all this work. Yeah.
Gabriel [00:54:04]: Yeah, that was a really cool idea. I mean, looking at the paper, there's this encoding where you just add a bunch of, I guess, atoms, which can be anything, and then they get sort of rearranged and basically plopped on top of each other, and that encodes what the amino acid is. And there's sort of a unique way of doing this. That was such a cool, fun idea.
RJ [00:54:29]: I think that idea had existed before. Yeah, there were a couple of papers.
Gabriel [00:54:33]: Yeah, I had proposed this, and Hannes really took it to the large scale.
Brandon [00:54:39]: In the paper, a lot of the paper for BoltzGen is dedicated to actually the validation of the model. In my opinion, all the people we talk to basically feel that this sort of wet lab, or whatever the appropriate real-world validation is, is the whole problem, or not the whole problem, but a big giant part of the problem. So can you talk a little bit about the highlights from there? Because to me, the results are impressive, both from the perspective of the model and also just the effort that went into the validation by a large team.
Gabriel [00:55:18]: First of all, I think I should start by saying that both when we were at MIT, in Tommi Jaakkola and Regina Barzilay's lab, as well as at Boltz, you know, we are not a bio lab, and we are not a therapeutics company. And so to some extent, we were forced to look outside of our group, our team, to do the experimental validation. One of the things that Hannes on the team really pioneered was the idea: okay, can we go not only to, you know, maybe a specific group, and try to find a specific system, and maybe overfit a bit to that system and try to validate, but how can we test this model across a very wide variety of different settings? Because, you know, protein design is such a wide task, with all sorts of different applications, from therapeutics to biosensors and many others, so can we get a validation that goes across many different tasks? And so he basically put together, I think it was something like 25 different academic and industry labs that committed to testing some of the designs from the model, and some of this testing is still ongoing, giving results back to us in exchange for hopefully getting some great new sequences for their task. And he was able to coordinate this very wide set of scientists.
And already in the paper, I think we shared results from eight to 10 different labs, showing results from designing peptides targeting ordered proteins, peptides targeting disordered proteins, results of designing proteins that bind to small molecules, results of designing nanobodies, and across a wide variety of different targets. And so that gave the paper a lot of validation of the model, a lot of validation that was kind of wide.
Brandon [00:57:39]: And so would those be therapeutics for those animals, or are they relevant to humans as well? They're relevant to humans as well.
Gabriel [00:57:45]: Obviously, you need to do some work into, quote unquote, humanizing them, making sure that they have the right characteristics so they're not toxic to humans and so on.
RJ [00:57:57]: There are some approved medicines on the market that are nanobodies. There's a general pattern, I think, in trying to design things that are smaller, you know, because they're easier to manufacture. At the same time, that comes with potentially other challenges, like maybe a little bit less selectivity than something that has more hands, you know. But yeah, there's this big desire to try to design mini proteins, nanobodies, small peptides, you know, that are just great drug modalities.
Brandon [00:58:27]: Okay. I think where we left off, we were talking about validation. Validation in the lab. And I was very excited about seeing all the diverse validations that you've done. Can you go into some more detail about them? Yeah. Specific ones. Yeah.
RJ [00:58:43]: The nanobody one, I think we did, what was it, 15 targets? Is that correct? 14. 14 targets. So typically the way this works is we make a lot of designs, on the order of tens of thousands, and then we rank them and we pick the top. In this case it was 15, right, for each target, and then we measure sort of the success rates: both how many targets we were able to get a binder for, and then also, more generally, out of all of the binders that we designed, how many actually proved to be good binders. Some of the other ones, I think, involved, yeah, a cool one where there was a small molecule and we design a protein that binds to it. That has a lot of interesting applications, you know, for example, like Gabri mentioned, biosensing and things like that, which is pretty cool. We had a disordered protein, I think you mentioned, also. And yeah, I think some of those were some of the highlights. Yeah.
Gabriel [00:59:44]: So I would say that the way that we structured some of those validations was, on the one end, we have validations across a whole set of different problems that the biologists that we were working with came to us with. So we were trying to, for example, in some of the experiments, design peptides that would target the RACC, which is a target that is involved in metabolism. And we had a number of other applications where we were trying to design peptides or other modalities against some other therapeutically relevant targets. We designed some proteins to bind small molecules. And then some of the other testing that we did was really trying to get a broader sense: how does the model work, especially when tested, you know, in a generalization setting?
So one of the things that we found with the field was that a lot of the validation, especially outside of the validation that was on specific problems, was done on targets that have a lot of known interactions in the training data. And so it's always a bit hard to understand, you know, how much are these models really just regurgitating what they've seen, or trying to imitate what they've seen in the training data, versus really being able to design new proteins. And so one of the experiments that we did was to take nine targets from the PDB, filtering to things where there is no known interaction in the PDB. So basically the model has never seen this particular protein, or a similar protein, bound to another protein. So there is no way that the model, from its training set, can sort of say, okay, I'm just going to tweak something and just imitate this particular interaction. And so we took those nine proteins, we worked with an external CRO, and basically tested 15 mini proteins and 15 nanobodies against each one of them. And the very cool thing that we saw was that on two thirds of those targets, we were able, from those 15 designs, to get nanomolar binders. Nanomolar is, roughly speaking, just a measure of how strong the interaction is; roughly speaking, a nanomolar binder is approximately the kind of binding strength that you need for a therapeutic.
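The success metrics described here reduce to simple counting over measured affinities. A small sketch, with a made-up data layout and an illustrative cutoff for what counts as a nanomolar hit:

```python
# A small sketch of the hit-rate bookkeeping described above: a fixed number of top-ranked
# designs is tested per target, and success is reported both per target and per design.
# The dictionary layout and the 100 nM cutoff for "nanomolar binder" are illustrative
# assumptions, not values taken from the paper.
from typing import Dict, List

NANOMOLAR_CUTOFF = 100e-9  # Kd in molar units; treat anything this tight or tighter as a hit


def hit_rates(measured_kd: Dict[str, List[float]], cutoff: float = NANOMOLAR_CUTOFF) -> Dict[str, float]:
    """`measured_kd` maps each target name to the measured Kd values of its tested designs."""
    targets_with_hit = sum(
        1 for kds in measured_kd.values() if any(kd <= cutoff for kd in kds)
    )
    all_kds = [kd for kds in measured_kd.values() for kd in kds]
    return {
        "per_target_hit_rate": targets_with_hit / len(measured_kd),   # e.g. "two thirds of targets"
        "per_design_hit_rate": sum(kd <= cutoff for kd in all_kds) / len(all_kds),
    }
```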
Yeah. So maybe switching directions a bit: Boltz Lab was just announced this week, or was it last week? Yeah. This is, like, your first, I guess, product, if that's what you want to call it. Can you talk about what Boltz Lab is, and, yeah, what you hope that people take away from this? Yeah.
RJ [01:02:44]: You know, as we mentioned, I think, at the very beginning, the goal with the product has been to address what the models don't do on their own. And there are largely sort of two categories there. Actually, I'll split it in three. The first one: it's one thing to predict, you know, a single interaction, for example, a single structure. It's another to very effectively search a space, a design space, to produce something of value. What we found, sort of building this product, is that there are a lot of steps involved in that, and there's certainly a need to accompany the user through them. You know, one of those steps, for example, is the creation of the target itself. How do we make sure that the model has a good enough understanding of the target so we can design something? And there are all sorts of tricks, you know, that you can do to improve a particular structure prediction. So that's sort of the first stage. And then there's this stage of designing and searching the space efficiently. You know, for something like BoltzGen, for example, you design many things and then you rank them. For small molecules the process is a little bit more complicated: we actually need to also make sure that the molecules are synthesizable. And the way we do that is that we have a generative model that learns to use appropriate building blocks such that it can design within a space that we know is synthesizable. And so there's this whole pipeline, really, of different models involved in being able to design a molecule. And so that's been sort of the first thing; we call them agents. We have a protein agent and we have a small molecule design agent. And that's really at the core of what powers, you know, the Boltz Lab platform.
Brandon [01:04:22]: So these agents, are they like a language model wrapper, or are they just your models and you're just calling them agents? A lot. Yeah. Because they sort of perform a function on behalf of...
RJ [01:04:33]: They're more of like a recipe, if you wish. And I think we use that term because of the complex pipelining and automation, you know, that goes into all this plumbing. So that's the first part of the product. The second part is the infrastructure. You know, we need to be able to do this at very large scale for any one group that's doing a design campaign. Let's say you're designing, I'd say, a hundred thousand possible candidates to find the good one. That is a very large amount of compute. You know, for small molecules it's on the order of a few seconds per design; for proteins it can be a bit longer. And so, ideally, you want to do that in parallel; otherwise it's going to take you weeks. And so we've put a lot of effort into our ability to have a GPU fleet that allows any one user to do this kind of large parallel search.
Brandon [01:05:23]: So you're amortizing the cost over your users. Exactly. Exactly.
RJ [01:05:27]: And, you know, to some degree, whether you use 10,000 GPUs for, like, a minute, it's the same cost as using one GPU for God knows how long, right? So you might as well try to parallelize if you can. So a lot of work has gone into that, making it very robust, so that we can have a lot of people on the platform doing that at the same time. And the third one is the interface, and the interface comes in two shapes. One is in the form of an API, and that's really suited for companies that want to integrate these pipelines, these agents.
RJ [01:06:01]: So we're already partnering with a few distributors, you know, that are going to integrate our API. And then the second part is the user interface. And we've put a lot of thought into that as well. This is where, as I mentioned earlier, this idea of broadening the audience comes in; that's kind of what the user interface is about. And we've built a lot of interesting features into it, for example, for collaboration. You know, when you have potentially multiple medicinal chemists going through the results and trying to pick out, okay, what are the molecules that we're going to go and test in the lab, it's powerful for them to be able to, for example, each provide their own ranking and then do consensus building. And so there are a lot of features around launching these large jobs, but also around collaborating on analyzing the results, that we try to solve with that part of the platform. So Boltz Lab is sort of a combination of these three objectives into one, you know, sort of cohesive platform.
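To put rough numbers on the parallelism point above: the "few seconds per design" figure comes from the episode, while the exact per-design time, campaign size, and fleet size below are illustrative assumptions, not Boltz Lab numbers.

```python
# Rough arithmetic behind the "parallelize the screen" point above. The few-seconds-per-
# design figure comes from the episode; the exact 3 s per design, the 100,000-candidate
# campaign, and the 1,000-GPU fleet are illustrative assumptions, not Boltz Lab numbers.
candidates = 100_000
seconds_per_design = 3          # assumed; "a few seconds" per small-molecule design
gpus_in_fleet = 1_000           # assumed size of the shared fleet

total_gpu_seconds = candidates * seconds_per_design         # 300,000 GPU-seconds of work
days_on_one_gpu = total_gpu_seconds / 86_400                # ~3.5 days if run serially
minutes_on_fleet = total_gpu_seconds / gpus_in_fleet / 60   # ~5 minutes wall-clock in parallel

print(f"{days_on_one_gpu:.1f} days serially vs {minutes_on_fleet:.1f} minutes on the fleet")
```

The total GPU-time, and hence the cost, is the same either way, which is the amortization point made above: a shared fleet lets one user briefly fan out across many GPUs without paying more than they would for a long serial run.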
Who is this accessible to? Everyone. You do need to request access today. We're still, you know, sort of ramping up the usage, but anyone can request access. If you are an academic in particular, we provide a fair amount of free credit so you can play with the platform. If you are a startup or biotech, you may also reach out, and we'll typically hop on a call just to understand what you're trying to do, and also provide a lot of free credit to get started. And of course, with larger companies as well, we can deploy this platform in a more secure environment, and so those are more like customized deals that we make with the partners. And that's sort of the ethos of Boltz, I think, this idea of servicing everyone and not necessarily going after just, you know, the really large enterprises. That starts from the open source, but it's also a key design principle of the product itself.
Gabriel [01:07:48]: One thing I was thinking about with regard to infrastructure: in the LLM space, you know, the cost of a token has gone down by, I think, a factor of a thousand or so over the last three years, right? Yeah. And is it possible that, essentially, you can exploit economies of scale in infrastructure, that you can make it cheaper to run these things than for any person to roll their own system? A hundred percent. Yeah.
RJ [01:08:08]: I mean, we're already there, you know. Running Boltz on our platform, especially at large scale, is considerably cheaper than it would probably cost anyone to take the open source model and run it themselves. And on top of the infrastructure, one of the things that we've been working on is accelerating the models. So our small molecule screening pipeline is 10x faster on Boltz Lab than it is in the open source, you know, and that's also part of building a product, you know, something that scales really well. And we really wanted to get to a point where we could keep prices very low, in a way that it would be a no-brainer, you know, to use Boltz through our platform.
Gabriel [01:08:52]: How do you think about validation of your agentic systems? Because, as you were saying earlier, AlphaFold-style models are really good at, let's say, monomeric proteins where you have co-evolution data. But now suddenly the whole point of this is to design something which doesn't have co-evolution data, something which is really novel. So now you're basically leaving the domain that you know you are good at. So how do you validate that?
RJ [01:09:22]: Yeah, there's obviously, you know, a ton of computational metrics that we rely on, but those only take you so far. You really have to go to the lab, you know, and test: okay, with method A and method B, how much better are we? You know, how much better is my hit rate? How much stronger are my binders? Also, it's not just about hit rate, it's also about how good the binders are. And there's really no way around that. I think we've really ramped up the amount of experimental validation that we do so that we really track progress in as scientifically sound a way
as possible, I think.
Gabriel [01:10:00]: Yeah, no, I think, you know, one thing that is unique about us, and maybe companies like us, is that because we're not working on, like, maybe a couple of therapeutic pipelines where our validation would be focused on those, when we do an experimental validation we try to test it across tens of targets. And so that, on the one end, we can get a much more statistically significant result, and it really allows us to make progress from the methodological side without being, you know, steered by overfitting on any one particular system. And of course we choose, you know, w

B2B Marketers on a Mission
Ep. 207: How to Scale Faster with B2B Brand Strategy

B2B Marketers on a Mission

Play Episode Listen Later Feb 12, 2026 35:33 Transcription Available


How to Scale Faster with B2B Brand Strategy
Here's a common scenario in B2B marketing: you launch campaigns, hit the deadlines, and fill the pipeline, but the results feel disconnected from your long-term goals. Internal messaging discussions resurface, campaigns feel shallow and reactive, and when you ask people what your brand stands for, you get 50 different answers. This inconsistent approach creates friction and impedes scalable growth. So what can B2B marketers do when their tactical execution is outpacing their brand strategy, and how do you realign for lasting impact? That's why we're talking to JoAnne Gritter (COO, ddm marketing + communications), who shares her expertise and actionable insights on how to scale faster with B2B brand strategy. During our conversation, JoAnne underscored why a foundational strategy is crucial for building credibility and trust in competitive markets. She also discussed the role of AI in marketing, commenting that while it can support idea generation and research, it shouldn't replace direct communication with customers and employees. JoAnne shared some common pitfalls, such as messaging misalignment and inconsistent branding, which can lead to distrust and reduced credibility. She explained the importance of having a cohesive brand strategy that aligns values, messaging, and customer experiences across all company touchpoints through proactive brand management. https://youtu.be/_Alwkinhw-g Topics discussed in episode: [02:36] The “Soul vs. Body” framework: Why marketing is just the body in action, while brand strategy is the soul that provides direction and values.  [06:51] Red flags that your marketing has outpaced your strategy: When content feels fragmented and sales teams are telling completely different stories.  [08:52] Defining true brand strategy: Moving beyond logos and colors to include deep research, stakeholder analysis, and internal alignment.  [14:41] The critical differences between a brand refresh (auditing existing assets), a complete revamp (starting from scratch), and branding during a merger.  [24:10] Actionable steps you can take to realign your brand: – Audit your customer journey – Define messaging pillars – Ensure HR and onboarding match the brand promise  [29:37] Why “data-only” marketing fails: The importance of human emotion and psychology that performance data often misses.  Companies and links mentioned: JoAnne Gritter on LinkedIn  ddm marketing + communications  Transcript JoAnne Gritter, Christian Klepp JoAnne Gritter  00:00 AI can be used as a tool. It should not replace thinking and actually talking to your customers and your employees and your sales team. So you can use AI as a crutch to, like, ask it for ideas, idea generation. You can use it for deep research on your audience and stuff like that. But nothing replaces the gold standard of talking to people. I see this in messaging misalignment or content misalignment. If content feels like it’s been written by four different people or completely different companies, that’s a red flag. Christian Klepp  00:37 This is a common scenario for B2B Marketers. You launch campaigns, hit the deadlines and fill the pipeline. It all looks great on paper, but something is still off: internal messaging discussions resurface, campaigns feel shallow and reactive, and when you ask people what the brand stands for, you get 50 different answers. So what can B2B Marketers do when their marketing is outpacing their brand strategy?
Welcome to this episode of the B2B Marketers in the Mission podcast, and I’m your host, Christian Klepp, today, I’ll be talking to JoAnne Gritter, who will be answering this question. She’s a member of the leadership team at DDM Marketing Communications that provides integrated marketing solutions to drive business success. Tune in to find out more about what this B2B Marketers Mission is and here we go. JoAnne Gritter, welcome to the show. JoAnne Gritter  01:25 Hi Christian. Happy to be here. Christian Klepp  01:27 We you know, we had such a wonderful, like, pre-interview conversation. I almost feel like we’re neighbors or something, and something to that extent. But I’m, I’m really, like, happy to have you on the show, and I’m really looking forward to this conversation, because this topic is, I’m a little bit biased because I am in the branding space, so it’s a bit near and dear to my heart, but it’s also something that’s extremely important, because you’ll agree. I mean, you, I know you’ll agree because you wrote an article about it. JoAnne Gritter  01:54 Yeah Christian Klepp  01:55 It’s something that marketing teams tend to overlook. And good, goodness gracious me, I’m gonna, like, stop keeping people in suspense. We’ll just jump right in all right. JoAnne Gritter  02:04 Okay Christian Klepp  02:04 So JoAnne, you’re on a mission to provide integrated marketing solutions that drive B2B business success. So for this conversation, let’s focus on this topic, how brand strategy helps B2B organizations to realign for long term growth. So I’m going to kick off this conversation with the following question. In our previous conversation, our previous discussion, you talked about how marketing without a brand is a strategy without a soul. Could you please explain what you meant by that? JoAnne Gritter  02:36 So I just made the comparison kind of to the whole human, as in, like the brand is your soul, meaning like your values, what drives you, why you’re here, what differentiates you, what makes you different than the person standing next to you, whereas, like marketing is your body in action, or action in general, where you hopefully, if you if you’re a trustworthy person, what is, what are your values internally are matching your actions externally? And that is often where we see a divergent in companies, because they don’t think about those as like two sides of the same coin. It is really important that you make sure that you know the direction that you’re going as a company and what you stand for and who you’re there to support or serve, and what markets you’re there to do, and like your whole company, everybody that’s part of interfacing with customers understands that and is and is speaking the same language. Christian Klepp  03:37 Yeah, no, absolutely. And I suppose the the follow up question to that is like, where do you see a lot of, like, marketing teams go wrong. Because, like, you know, more often than not, a lot of teams are like, Okay, we’ve we’ve implemented the campaigns check. We’re generating results and driving pipeline or filling the pipeline, rather check. So where does it all go wrong? JoAnne Gritter  04:00 If you are not paying attention to your branding, you can have a lot of activity without a lot of traction. So or you can have a lot of different messages going out that seem not cohesive or fragmented. 
And so you can or more examples you can have, like your sales folks going out and telling different stories about about what your company stands for and what you do and how you’re different, that creates a lot of waste, because then you’re continuously trying to get more activity and more campaigns going more sales people out there, because you’re not getting the quality leads that you need, because nobody really knows what you stand for. Everybody says it a little bit differently, and that goes for customer service too. Branding. People think about branding as a marketing problem, or a marketing, you know, teams problem. But if, let’s say part of your brand is your brand identity or values is to put the customer. First, if you don’t really solidify that from your sales team and your customer support team, then there would be a mismatch there, right then you’re just putting out into the world that customers first, but that doesn’t match up with what the customer is experiencing. Christian Klepp  05:16 Yeah, there’s certainly some kind of misalignment there, and you touched on it, like, briefly. It’s interesting to me, like, even in my own experience, one of the telltale signs of that is when you ask people within the organization, well, what makes you different? And you get 50 different answers, and some of them are similar, and some of them are completely, like, different. And it’s like, okay, yep, okay, I see where this is going, or to your to your other point, when sales teams are having those discovery calls, and you listen back to some of those recordings, which I hope you marketing people out there are doing, and you listen to the way that the sales deal with objections, and maybe the procurement team or people like, you know, on the prospect side, they’re probably not phrasing it exactly the way I’m going to say it right now, but like, but they probably are asking something to the effect of, okay, what makes you different from vendor B, C and D, right? What is different about your solution? Like, why are you charging this guy? Why are your rates like, this high. JoAnne Gritter  05:16 Right. Absolutely. And if they have different answers, or if you go and you listen in on four different sales calls and they’re all a little bit different, then that tells you have a branding issue that people don’t fully understand your brand and how you’re different and who you support and serve. Christian Klepp  05:16 Yep, absolutely, absolutely. So you’ve touched on it a little bit, but like, tell us about some more of these. I’m going to call them red flags, right? That signal when marketing has outrun brand strategy. JoAnne Gritter  05:16 Sure, I see this in messaging misalignment or content misalignment. If content feels like it’s been written by four different people or completely different companies, that’s a red flag. If, like we mentioned, your sales team talks about your company completely differently, it’s okay that they put their own little spin on it, as long as you’re still hitting like the purpose of your company, why you’re here, how you serve whatever your target audience or audiences are what your values are. If that’s not coming through in in those different places, then you may have a brand issue, or your training issue, or your brand is not being carried out through the company. 
So when you have a solid brand, it should be, should be repeated in in like your onboarding process, in HR kind of things, in performance conversations, in obviously, your sales and marketing and your customer service, so that everybody is aligned to that brand, and so that there’s a common message, common theme, because repeatability is is super important. Consistency is super important in marketing. I’m sure a lot of people have heard that it takes multiple multi multiple times of hearing the same message for it to actually resonate, and if they’re hearing multiple different messages, it’s causes confusion and a lack of trust in whatever the company is offering. Christian Klepp  05:16 Yeah, that’s absolutely right. JoAnne, I’ve got a I just thought of another fall off question, and you’ll indulge me here. Um, you know it, I know it. But let’s, let’s clear the air here for a second. Because I’ve been hearing this like, and I’m sure you have as well, in the B2B world, it’s just been thrown around, like, very loosely. Let’s clear the air here. Like, what do you mean by brand strategy, because I’ve heard people, especially at senior level, say, like, Yeah, we don’t need branding. We’ve got a logo and we’ve got a website. We’re good, so maybe just clear the air on that one, please. JoAnne Gritter  05:16 Well, brand strategy is, let’s see, like, I think of strategy in like, four or three different tiers. Like, we have your business strategy, it’s how you win in the marketplace. Then you have your brand strategy, which is positions you in the market and in the minds of your consumers or your customers. And then your marketing strategy is how you take that and communicate it out and you deliver that message in multiple different channels. So if you have marketing running without, without laddering up to that business strategy and and brand strategy, then it’s just, it’s just running and putting stuff out there. So it’s just activity without, without purpose and strategy. So like a brand strategy is so much more than just a lot of people think about it as their logo, their identity suite, whatever, but there should be research that goes into it. They should be stakeholder analysis. They should talk to your customers and kind of understand what they value about about your company compared to another company. So then, using. Their language in some of your brand messaging is super helpful. So if you have like, customers that say, you know, like, I just love working with, you know, Company X, Y and Z, because the people are great. They’re super responsive. They they get me what I need, etc. Like, using some of that as part of your brand is going to be really important. So like, a strategy may may include, like, the focus, the brand, promise your your core values can be part of that. The naming can be part of that. Obviously, the the design part that a lot of folks actually think about and listen or think about and recall would be, like the visual identity that also needs to be consistent, from your logo to your fonts to your colors, and then like, multiple touch points on that, like, again, like repeating that consistency from like the stationary, the collateral, the assets, all that stuff, but then also making sure that the messaging and the voice carries throughout your company, past past your your marketing team, past your sales team. Christian Klepp  05:16 Yeah, that’s absolutely right. 
I mean, I like to tell people that all of these things that you mentioned, especially the visual aspect, the the sexy part of it, right, like the the visual identity, the logo, the web design and all that. It’s the end result. It’s one of the outcomes of right branding, right? JoAnne Gritter  05:16 That doesn’t come out of a vacuum, right? You don’t show a designer that’s like, I’m super excited about the color red, so we’re gonna do it’s what do our customers, current customers, feel about us, and what do we want our prospective customers to feel about us? And then there’s a lot of strategy behind that. Christian Klepp  05:16 That’s right, that’s right. I’m gonna move on to the topic of key pitfalls to avoid. So what are some of these key pitfalls that B2B Marketing Teams should avoid, and what should they do instead? JoAnne Gritter  05:16 So pitfalls that I see is companies teams that get really excited about certain trends. I’m just going to pick on Tiktok. There’s time and a place for Tiktok, but like, for B2B, they’re like, oh, man, everybody’s on Tiktok, or this latest, you know, social media platform, channel, we really got to get on there. It’s or we got to use AI in some specific way without, like, thinking about the strategy behind that and just like going forward, because you know that that’s the hottest trend right now. So always make sure it ladders up to where your customers are and what you want them to think about you. If you’re a B2B company, it’s likely that your customers are more on LinkedIn than they are on Tiktok. That’s just an example. I can’t say that across the board, but like picking picking things that are always centered on on your customer and your brand are super important. So that’s a pitfall, and then what to do about it? Also treating the brand as a one time exercise, like set it and forget it, kind of thing. A lot of people are just like, Okay, we did the brand. We got a great logo, we got stationery, we even got PowerPoints that are branded and then never think about it again, except for, like, just the, you know, the colors and the logo on all of your media assets, right? So, but the brand is so much more than that. The brand is so much about, like, how you want them to feel, what the differentiators are, what makes you different, what you deliver and like, how you talk about it, how you position yourself. So like, every bit, every asset that goes out the door, should be aligned to that there should be almost a hierarchy. Christian Klepp  05:16 Yeah, no, exactly, exactly. And I’m gonna throw another follow up question at you, only because I know you can handle you can handle it. You probably hear this a lot, and you hear this a lot, most likely also from marketing teams that perhaps don’t have as much experience in the branding space as you do, and they say things like, JoAnne, you know, we’re looking at our company, and we feel that, you know, the overall look and feel and the direction, it’s not really in line with what we aspire to be. So we’re looking for a revamp. And then, and then, as the conversation progresses, they say, Oh, actually, we want maybe, maybe just a refresh, right? And then you hear another prospect say, Well, you know, we just merged the two companies. So like, what do we do there? 
So maybe just, just to, again, clear the air, so people don’t throw around these terms so recklessly, what actually is the difference between a brand refresh, a brand revamp, and branding as a result of a merger, Speaker 1  06:02 like a brand like from scratch, is going to take a lot of different kind of research efforts than like a brand refresh. Like, if you’re doing a brand refresh, then you’re looking at assets that already exist, you know, and and you’re looking at reasons why they might change or are no longer working. So you’re doing more. Of an audit kind of thing, like, what’s different now than it was 20 years ago when we created this brand, and where are we going? Their new leadership? Are they focused on different parts of this like even even DDM, the marketing agency that I work with or that I work for. We, every once in a while, look at our brand, and not just the visuals, but like the things that make us unique. And we say, hey, those are still unique, but we’re talking about them slightly differently now. So we need to take a look at that and change the messaging a little bit. We’re heading in a slightly different direction lately with our creative so let’s, let’s make sure that we’re still in line, so that everything, everything matches. And if they see us on Instagram versus if they see us on LinkedIn or on our website, that it still looks like ABM, you know, and then a merger is slightly different, because you’re putting together two brands, and a lot of times they’re creating a new brand from that, or they might keep one of the brands and then just bring another like, you know, Company X is now a, you know, Company Y brand. And there might be, like a sub. There’s all kinds of different ways hierarchies of brands in that kind of scenario. But more recent one that we did, they created a new brand, which was a combination of the two names, and they completely they went through the whole exercise with the new leadership team. So it’s more similar to like starting from scratch, but also taking bits and pieces that they want to keep from both brands and what’s working. So you kind of look at what clients from both brands like about those brands, and make sure that you keep those and you preserve those, and make sure that it’s it’s heading in the direction that the company wants to go a lot of discovery and research and questions, Christian Klepp  06:16 Absolutely, absolutely. And I love that you keep bringing that up, though, because that is, again, one of these components that people tend to overlook, that this comes with a lot of research. It’s not, as you said, it’s not okay. Here’s the brief. Graphic designers or design team have at it. JoAnne Gritter  17:07 Right? Christian Klepp  17:07 Come up with something, something else, great, right? Yeah, my favorite briefs are always the ones that said we want something modern, clean, yet traditional and exciting. It’s like, JoAnne Gritter  17:17 Oh yes, creative. Make it creative, splashy mean to you? Christian Klepp  17:25 Yeah, yeah, open to interpretation, I suppose. Why do you believe that inconsistent messaging and internal misalignment cost organizations credibility and dollars? And you did touch on it earlier on the conversation. JoAnne Gritter  17:41 It’s a misalignment of what you say versus what you do. 
If you have on your website that you are there to serve X population and that you are like your mission and purpose in in this world is to support that population in in achieving whatever goal, whatever needs that that population needs, but then that customer or population that comes and interacts with your brand does not get that from the people or get that from their experience with your product. Then then that’s a misalignment, and that creates, you know, instant distrust, like you are not following through on, on what your brand promise was, or if you have multiple people saying they’re promising different things and they don’t get that, that’s a lack of trust. Christian Klepp  18:27 I’m kind of slightly grinning here, although I know that anyone who’s been in this situation probably will not see any humor in it, but like, I’m just thinking about anyone that’s experienced a flight delay, JoAnne Gritter  18:37 right, Christian Klepp  18:39 or been trapped at the airport, and whichever airline it is you’re flying with, and you have to deal with ground staff that are either unprofessional and rude or you just have zero transparency. And I’m sure, like, I’ve certainly gone through it like I’ve experienced a 10, 12 hour flight delay, right where I was at the airport until like, one or two in the morning, and then they finally come and say, well, the plane’s not coming. JoAnne Gritter  19:04 Yeah, that really rocks the brand reputation. I also see that in health care a lot, which, God bless everybody in health care, it’s hard, but like, if all those services are disjointed and the scheduling gives you a different feeling than the doctor gives and trying to do things online, it doesn’t match what your experience is in person. People don’t want to go to that provider anymore. You know, they’re like, this is confusing. I just want help. Just want to get what you’re promising. Christian Klepp  19:35 It’s a very for lack of a description of fragmented ecosystem. JoAnne Gritter  19:39 Yeah, absolutely. And that’s a bigger issue than we can solve here, but Christian Klepp  19:43 Yeah, no amount of branding is going to fix that. JoAnne Gritter  19:47 You got to follow through on it. Christian Klepp  19:49 That’s absolutely right. That’s absolutely right. Talk to us about how aligning, and you’ve touched on it briefly, how aligning soul and action will help to build. Trust, loyalty and resilience and please provide examples where relevant. JoAnne Gritter  20:04 Let me think of an example. We work with a very large medical device manufacturer, and we’ve worked with them for 15, probably close to 20 years now. And so 15 years ago, they were very product centric. They also grow by acquisition. So they have, like several different companies that came in under this master global brand. And even though they have the same logo, they still had their own kind of visual identity. They all talked about their stuff differently. And as a result of that, in those different teams, the customers were getting wildly different experiences from this company, even though they were all under the same master company. So they rebranded. We helped them rebrand seven years ago, maybe, and this is a global organization where they brought all their business units under the same brand. They have a very strict, robust brand now. And I’m not saying that everybody needs 100 page brand guidelines. They don’t, but, like they they went all in on branding, and they make all their new employees do their brand training. 
It’s worked in through their onboarding. It’s worked in through their like, performance conversations, and they have just really exploded and created this, this amazing reputation as a leader. Christian Klepp  21:25 I’m sorry you’re talking about, you’re talking about real branding, then JoAnne Gritter  21:27 Real branding. Yes, they are now a leader in their industry. I mean, they were big before, but they have just really exploded in the last seven years since rebranding, and it’s been really helpful for them, because now they still grow by acquisition, but they bring in a new company, and they know what the process is to get them on board, not just from a visual identity, like rebranding all the collateral, like the sales enablement and stuff like that, but bringing the internal teams up to speed about like, what what we stand for, what we hire, like, what kind of values we Look for, so that every customer gets the same experience Christian Klepp  22:04 from your experience. How did that exercise of helping them to re brand and take all of this because, you know, there’s that situation of taking all the business units and putting them under one roof, so to speak. How did that exercise help to improve them as an organization. JoAnne Gritter  22:22 It’s been a long time, like in multiple phases. So it improves their organization. It creates a lot of clarity for them. So they’re not like redoing each other’s work, and they’re not all creating the same or they’re they’re not all creating from scratch anymore. They have a they have a similar starting point on, like, the different messaging pillars that they need to hit, even for just their products, you know. So this goes into product messaging and product launch. So like, if they are medical device, they are they want to sell, you know, knee replacements or or stuff along those lines, they know that they need to hit on a couple core values, and they need to make sure that they are targeting the same audience, and that they need to make sure that they that what they’re saying out there aligns with the master brand. Of course, there’s they still need to do the differentiators on the product level, but they also have the full brand that that supports it. So it’s just a higher level like reputation. I like to, I like to compare like branding to your reputation. So that goes along with every product that they bring in. Christian Klepp  23:32 Yeah, no, absolutely, absolutely. Okay, we get to the part in the conversation. We’re talking about actionable tips. And you’ve, you’ve actually given us quite a bit already, but if we were to summarize it, okay, JoAnne, like, if there was somebody out if there was somebody out there that was listening to this conversation, and they were listening to what you were saying, and they were like, oh my goodness, this is exactly what we’re going through right now, right? I mean, besides contacting you, right, what are like three to five things that you would recommend they do right now to realign for long term growth using brand strategy, JoAnne Gritter  24:10 I would take a look at what brand strategy you already have, if you have one otherwise kind of creating at least the bones of that. Like, what are our values? What are we focused on? What is our purpose here and mission? And then, like, what are messaging pillars or groups that align with those values? And then once you have those making sure that you have a succinct narrative or story, or even, like an elevator pitch, that everybody is aligned on. 
Having that is kind of a simple, hopefully a simple thing for you to figure out and align on, and then auditing the customer journey for those promises and values. So like, if you have a customer journey, they’re going from, you know, awareness of you. Or a problem to consideration between you and your company, and, you know, multiple other companies, and then you’re they’re making a decision, then they’re purchasing, then they’re hopefully your customer experience, and your delivery teams are delivering on those promises, and then you’re creating loyalty. So that’s the customer journey. So of these phases are, they are the customers still experiencing the brand that you want them to experience. So that’s like a little audit that you can do. And then from there, also making sure that all of your content that’s out there, from your like your brochures, your website, your sales enablement kind of stuff, making sure that that’s still aligned to the brand and the message that that you want it to and then making sure that, of course, throughout the company, in your like, HR documentation, you’re, I’ve said onboarding a million times, but like, making sure that everybody that’s coming into your organization understands who you are and who you who you serve, and why? Christian Klepp  26:01 Absolutely, absolutely. And that’s a really good list. And I have to ask you this question, because you know, at the time of the recording, we’re at the end of 2025, and you did bring up AI, so I’m going to bring it up again. How, how has in your experience, from what you’re seeing out there, how has AI impacted brand strategy and all the work that comes along with that. JoAnne Gritter  26:24 Well, that’s a loaded question, right? So as far as brand strategy, I kind of see it. AI can be used as a tool. It should not replace thinking and actually talking to your customers and your employees and your sales team. So you can use AI as a crutch to to, like, ask it for ideas, idea generation. You can use it for deep research on your on your audience, and stuff like that. But nothing replaces the gold standard of talking to people. So like, the the best resources from that research perspective are your customers, or your prospective customers and your sales team, if you can’t get to those customers, will often hear those like, you know, positive and negatives about your products and services. So getting to those and aligning on stakeholders, AI can be used as you know, you can use it to help think of ideas for like, let me think if you were thinking of like values, like core values, like in and messaging pillars, you can say, hey, you know, I really want it to be something along these lines. We’re circling around on like, exactly right the what the right way to phrase this is. And it can give you 50 different ideas, and you can cross out 45 of them and then land on like the top five that you communicate with your team. Don’t ever take it for rate for like per vatum, sorry, exactly as chat GPT gives you, Christian Klepp  27:55 at face value. JoAnne Gritter  27:57 Thank you. I see that that is a lot harder for early career individuals because they don’t have that discernment yet. So they, they will, they will use it as a crutch, and then, like, oftentimes not have that same kind of editing expertise to see what actually works and what doesn’t. So like pairing AI as a tool with with human intelligence and empathy, for sure, Christian Klepp  28:23 Absolutely, absolutely. 
I mean, at least in from my observation, and this is where I think AI really falls flat, especially when you’re coming up with the verbal expression component of brand strategy. AI doesn’t really have any soul or character, like everything, it turns out, is very, for lack of a better description, lifeless, so, and that’s where the human element, or to your point, the human intervention, can then come into play, because then you can inject that story, you can inject that human emotion, which also is a very crucial component in B2B, right? As much as people like to say, oh, B2B is all factual, right? And I would, I would disagree with that, JoAnne Gritter  29:06 yeah, it’s, it’s quality over quantity. Now, you know people, people can spot, can spot the AI generated content, and there can be a whole bunch of it, and that can help you in a variety of ways. But if it’s not actually, if it doesn’t sound human speaking or human human sounding, then, then people reject it and they don’t trust it as much. Christian Klepp  29:28 Okay, get up on your soapbox a status quo that you passionately disagree with, and why? JoAnne Gritter  29:37 I passionately disagree with data only marketing. So the big push for data driven marketing, I am, I am on board with that at face value, but it still doesn’t tell the whole story, because you can still look at data from, let’s say you did like a. Um, a focus group about about what customers want from a like a beverage or something. I’m thinking of Coca Cola, and they and they say that they they want it to be healthy. They want it to be low sugar. They want it to taste amazing. They want it to make them, you know, feel great, and stuff like that that does not you’re gonna try to create like this Frankenstein kind of soda instead, instead of recognizing that, like, there’s more psychology to this. Like a Coca Cola has, like, a whole traditional, like branding kind of way that, or traditional and emotional way that they make people feel, and that doesn’t show up in the data, necessarily. That doesn’t show up in the performance data. You know that that is a totally different kind of research too. Christian Klepp  30:51 Yeah, yeah, JoAnne Gritter  30:55 You know, that’s performance, marketing and branding. Christian Klepp  30:58 I totally agree. I totally agree that, as much as there is a big camp out there that says the future is data driven now when it comes to B2B Marketing, and I’m like, Yeah, JoAnne Gritter  31:11 humans are tricky. Christian Klepp  31:13 We’re not robots. Absolutely, absolutely, okay, here comes the bonus question. So Rumor has it that you like to draw. JoAnne Gritter  31:23  I do. Christian Klepp  31:24 Yes, and from one enthusiastic sketcher to another, I thought, I thought deep and hard about this question. Tell us about one of the most well exciting, yes, but more importantly, one of the most challenging works that you’ve created to date. So what was the theme and subject? What made it so challenging to draw, and what did you learn from that experience when you when you completed it? JoAnne Gritter  31:50 I really like to find, like, kind of micro moments I have. I have three children at home, and I like to take pictures, or, like, capture, like small moments of, like one of them snuggling the cat, or like holding hands or doing something unexpected. And in, like, not a macro view, but in a micro view of like, the different connections that people have. 
And then usually I'll take a picture, and then I will sketch those out after they go to sleep. And that's just my own personal way to, I don't know, it's therapeutic. It's a way to see the beauty in the world and to slow down in the moment.

Christian Klepp  32:37
100%. I like to call it balsam for the soul.

JoAnne Gritter  32:40
Yeah,

Christian Klepp  32:40
all right, I don't know about you, but I like to sketch in this very room where we're doing the recording, and I usually play classical music. So like Chopin, something with piano. No opera, because that can get a bit too dramatic.

JoAnne Gritter  32:59
I like classical too when I'm focused, and I also like binaural beats, or more meditation kind of music. To kind of zone into the moment, instead of all the crazy thoughts that go through your head and all the things you have to do.

Christian Klepp  33:17
Very nice, very nice. One of the things I learned about drawing is that it's pretty much like certain aspects of our professional work, like marketing and branding. It starts with a line, and then you just keep adding the layers, right? And it's almost the same as when you're implementing a campaign, especially nowadays. You try to start small first, and do a lot of testing to see if it works, and you scale from there. And I like to think of drawings that way too. You start not by adding the details; you start with a lighter pencil. And there's a certain way of holding the pencil, right, so you have less control and it's a bit more free-flowing. And for me personally, it took me a long time to start drawing like that, because I'm like, no, then I don't have control of the process. But that's kind of the point, right? Let go of the perfectionism, right?

JoAnne Gritter  34:18
You outline it first, and then you start filling in the shadows and the light marks, and then you slowly bring in the detail. You're totally right that that is like a marketing or branding strategy. You've got to outline it first before you go fully in on any specific detail. Otherwise, you may be way off target.

Christian Klepp  34:38
That's it. That's it. I mean, JoAnne, I think we just found our next podcast interview topic. But thank you so much for coming on and for sharing your expertise and experience with the listeners. So please, a quick introduction to yourself and how people out there can get in touch with you.

JoAnne Gritter  34:57
JoAnne Gritter, I'm at DDM Marketing and Communications, headquartered in Grand Rapids, Michigan, USA. I am COO and Vice President of our company. You can get a hold of me at joanneg@teamddm.com, or you can just check us out at Teamddm.com.

Christian Klepp  35:18
Fantastic, fantastic. And we will be sure to drop all those links in the show notes. So once again, JoAnne, thanks so much for your time. Take care, stay safe, and talk to you soon.

JoAnne Gritter  35:27
Thanks, Christian. Bye.

Christian Klepp  35:29
Bye for now.

AI Inside
How Smart Are Today's Coding Agents?

AI Inside

Play Episode Listen Later Feb 12, 2026 76:50


This episode is sponsored by Airia. Get started today at airia.com. Jason Howell and Jeff Jarvis break down Claude Opus 4.6's new role as a financial-research engine, discuss how GPT-5.3 Codex is reshaping full-stack coding workflows, and explore Matt Shumer's warning that AI agents will touch nearly every job in just a few years. We unpack how Super Bowl AI ads are reframing public perception, examine Waymo's use of DeepMind's Genie 3 world model to train autonomous vehicles on rare edge-case scenarios, and also cover OpenAI's ad-baked free ChatGPT tiers, HBR's findings on how AI expands workloads instead of lightening them, and new evidence that AI mislabels medical conditions in real-world settings. Note: Time codes subject to change depending on dynamic ad insertion by the distributor.
Chapters:
0:00 - Start
0:01:59 - Anthropic Releases New Model That's Adept at Financial Research / Anthropic releases Opus 4.6 with new ‘agent teams'
0:10:00 - Introducing GPT-5.3-Codex
0:14:42 - Something Big Is Happening
0:33:25 - Can these Super Bowl ads make Americans love AI?
0:36:52 - Dunkin' Donuts digitally de-aged ‘90s actors and I'm terrified
0:39:47 - AI.com bought by Crypto.com founder for $70mn in biggest-ever website name deal
0:42:11 - OpenAI begins testing ads in ChatGPT, draws early attention from advertisers and analysts
0:48:27 - Waymo Says Genie 3 Simulations Can Help Boost Robotaxi Rollout
0:53:30 - AI Doesn't Reduce Work—It Intensifies It
1:02:08 - As AI enters the operating room, reports arise of botched surgeries and misidentified body parts
1:04:48 - Meta is giving its AI slop feed an app of its own
1:06:53 - Google goes long with 100-year bond
1:09:18 - OpenAI Abandons ‘io' Branding for Its AI Hardware
Learn more about your ad choices. Visit megaphone.fm/adchoices

Effetto giorno le notizie in 60 minuti
EU: today's summit on competitiveness

Effetto giorno le notizie in 60 minuti

Play Episode Listen Later Feb 12, 2026


Today's informal European summit in Belgium, with competitiveness at the centre. We hear from Il Sole 24 ORE correspondent Beda Romano, reporting from Alden-Biesen. The Rigopiano tragedy: three convictions and five acquittals in the second appeal trial. Cristina Carpinelli spoke with Gianluca Tanda, president of the Rigopiano victims' committee. Tomorrow, the shutdown of GPT-4o, the "human" chatbot model. The Trump Phone is "finally" expected to arrive in March. In the United States, advertising has arrived on ChatGPT. We discuss it with our Enrico Pagliarini.

Du Bitai
190: Discord will require age verification

Du Bitai

Play Episode Listen Later Feb 12, 2026 60:56


The Super Bowl in the US was interesting not only for Bad Bunny's performance but also for the tech companies' ads. Perhaps it was no accident that OpenAI advertised Codex rather than ChatGPT? Jonas talks about what he gets done with it, and about the GPT-5.3 model that stole attention from Opus 4.6. We take a short detour into a topic that keeps growing in relevance in the AI world: why it matters to have a "source of truth". OpenAI also introduced Frontier, a platform for businesses to manage AI agents. Discord plans age verification for all users, much to the internet's annoyance. Waymo says its autonomous taxis drive more safely than humans. PaperBanana promises scientifically accurate illustrations that are easier to create, while the ElevenLabs Audiobooks tool aims to make generating audiobooks easy.

Security Now (MP3)
SN 1064: Least Privilege - Cybercrime Goes Pro

Security Now (MP3)

Play Episode Listen Later Feb 11, 2026 156:39 Transcription Available


From EU fines that never get paid to cyber warfare grounding missiles mid-battle, this week's episode uncovers the untold stories and real-world consequences shaping today's digital defenses. How is the EU's GDPR fine collection going. Western democracies are getting serious about offensive cybercrime. The powerful cyber component of the Midnight Hammer operation. Signs of psychological dependence upon OpenAI's GPT-4o chatbot. CISA orders government agencies to unplug end-of-support devices. How to keep Windows from annoying us after an upgrade. What is OpenClaw, how safe is it to use, what does it mean. Another listener uses AI to completely code an app. Coinbase suffers another insider breach. What can be done Show Notes - https://www.grc.com/sn/SN-1064-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written Spinrite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: zscaler.com/security hoxhunt.com/securitynow trustedtech.team/securitynowCSS guardsquare.com

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 711: Coding with OpenAI's New Codex App: How to Build a Simple App without coding experience

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Feb 11, 2026 41:13


Wealth Formula by Buck Joffrey
545: Should You Invest in Hotels?

Wealth Formula by Buck Joffrey

Play Episode Listen Later Feb 11, 2026 35:19


For most of my career, I've been focused on two things: Operating businesses and Multifamily real estate. The strategy has been pretty simple. Take money generated from higher-risk, active businesses… and move it into more stable, long-term assets like apartment buildings. That shift—from risk to stability—is how I've tried to build durability over time. Now, to be fair, the sharp rise in interest rates a few years ago put a dent in that model. But zooming out, it's still worked well for me overall. So I'm sticking with it. That said, there are other ways to think about real estate. In some cases, the real opportunity is when you combine real estate with an operating business. We've done that before in the Wealth Formula Investor Club with self-storage, and the results were excellent. Storage is operationally simple, relatively boring—and that's exactly why it works. But there's another category that sits at the opposite end of the spectrum. Hotels. They're sexier.They're more volatile.And yes—they're riskier. But the upside can be dramatically higher. One of my closest friends here in Montecito has quietly built a fortune doing boutique hotels over the past few years. He started with a no-frills hotel in Texas serving the oil drilling industry. Over time, he combined his operational experience with his talent as a designer—and eventually created some of the highest-rated boutique hotels in the world. He's absolutely crushing it. Of course, most of us aren't world-class designers or architects. I'm certainly not. Still, his success made me curious. Hotels have been on my radar for a while now—not because I understand the business, but because I don't. When I asked him how he learned the hotel industry, his answer was honest: “I figured it out on the fly—starting with my first acquisition and a great broker.” That's usually how real learning happens. So this week on the Wealth Formula Podcast, I brought on an expert in hospitality investing to educate both of us. We cover the basics: How hotel investing actually worksWhere the real risks are (and where they aren't)How returns differ from multifamilyAnd what someone should understand before ever touching their first hotel deal If you've ever thought about buying or investing in hotels—but didn't know where to start—welcome to the club. You don't have to jump in tomorrow. But you do have to start somewhere. This episode is a good starting point. Listen on Apple Podcasts: https://podcasts.apple.com/gb/podcast/545-should-you-invest-in-hotels/id718416620?i=1000748759003 Listen on Spotify: https://open.spotify.com/episode/5Lx5Rp4x704lWRazWLqDOK Watch on YouTube: https://youtu.be/GMFf6-g8w_0 Transcript Disclaimer: This transcript was generated by AI and may not be 100% accurate. If you notice any errors or corrections, please email us at phil@wealthformula.com. Welcome everybody. This is Buck Joffrey with the Wealth Formula Podcast coming to you from Montecito, California. Before we begin today, I wanna remind you, if you’ve not done so and you are an accredited investor, go to wealthformula.com, sign up for our investor club. Uh, the opportunity there is really to see private deal flow that you wouldn’t otherwise see because it can’t be advertised. And, uh, only available to those people who are deemed accredited. And then what does accredited mean as a reminder? 
Well, if you’re married, you make $300,000 per year combined for at least two years with a reasonable expectation, continue to do so, or you have a net worth of a million dollars outside of your personal residence. Or if you’re single like me, $200,000 per year or a million dollars net worth. Anyway, that’s probably, uh, most of you. So all you gotta do is go to wealth formula.com, sign up for investor club because hey, who doesn’t wanna be part of a club? And, uh, by the way, it’s a great price. It’s free. So join it. Just get onboarded and all you gotta do is just wait for deal flow. What a deal. Now let’s talk about different kinds of things to invest in. For most of my career, I, I have really focused on two things I’ve focused on. Either operating businesses, uh, in my case, those operating businesses largely have been medical and multifamily real estate. Uh, the strategy itself, theoretically the way I think about it, take money from sort of these active businesses, a higher risk, move them into more stable long-term assets like apartment buildings. Okay? The idea is that’s how you build some durability over time. Now, to be fair, okay, to be fair. Sharp rise in interest rates a few years ago. Put a little bit of a dent in that model. But here’s the thing is that you can’t throw out the, uh, baby with the bath water. ’cause when I zoom out, still worked well for me overall. So I’m sticking with it and, uh, that’s my story. I’m sticking with it. That said, there are always other ways to think about real estate, right? Real estate is not just multifamily. Um, in some cases, the real opportunity is when you combine real estate and operating businesses. So. We’ve actually done that before in our wealth formula investor club. Um, and we’ve done that through self-storage, for example, and the results were really good. Storage is operationally, generally pretty simple. Probably not that simple, but you know, but more so than other things, relatively boring. Boring is good, and that’s exactly why it works. There’s another category that sits at the opposite end of the spectrum of boring, and it’s sexier and it’s more volatile and it’s riskier. And uh, that is the area of hotels, right, like leisure, that kind of thing. But the upside in those things can be dramatically higher. You know, one of my closest friends here. Montecito, I talk about him all the time. He’s a, he is a little bit of an inspiration to me, although I wouldn’t tell that to in space. He’s built a fortune doing boutique hotels over the past few years and the way he started, you know, and I think it was only about a decade ago because he bought like this no frills hotel in Texas that was serving the oil industry. There was a bunch of guys, you know, drilling needed a place to say, and you know, he had this and he actually. I don’t know that I would recommend this, but he, he told me he bought it sight unseen just based on the numbers. Ah, man, I gotta tell you, I don’t think I’m that lucky. If I bought something sight unseen, it would not work great for me, but it did work great for him. But over time, what he did is he, he combined his operational experience with his talent as he’s like a designer, like designs, homes, an architect, uh, of sorts, although more than that. Um, and he, he used to build houses for like famous people in Hollywood. Anyway, he took that skill and so he combined it with hotels and he created some of the highest rated boutique hotels in the world. And he’s absolutely crushing it. Just crushing it. 
Of course, the reality is that most of us aren’t world-class designers or architects. I’m certainly not. I’m not artistic at all. Still, um, you know, the fact that he’s had so much success in this space and that he loves hotels. What got me curious? So, hotels have been on my radar for a while, not because I understand the business, but actually because I don’t. And when I asked him how he learned, uh, about the hotel industry, he just said, you know, I figured out on the fly and, uh, you know, started with my first acquisition, had a great broker who taught me everything I, you know, needed to know at the beginning and. That’s a great story. I mean, and ideally that’s how things happen. As you can tell, this guy is, uh, seems to just hit on everything. So good for him. So this week on Wealth Formula Podcast, I wanted to get a little bit of a hotel investing 1 0 1. So I brought on an expert in hospitality investing that could educate both you and me. So we’re gonna cover some of the basics, how hotel actually works, you know, what are the risks returns. Like, what should people do if they even consider, you know, buying their first hotel or investing in one? So if you’ve ever thought about investing, uh, in hotels, or maybe that’s the first time you’re hearing about it and you’re curious, uh, welcome to the club and uh, we will have a great interview for you right after these messages. Wealth formula banking is an ingenious concept powered by whole life insurance, but instead of acting just as a safety net, the strategy supercharges your investments. First, you create a personal financial reservoir that grows at a compounding interest rate much higher than any bank savings account. As your money accumulates, you borrow from your own. Bank to invest in other cash flowing investments. Here’s the key. Even though you’ve borrowed money at a simple interest rate, your insurance company keeps paying you compound interest on that money even though you’ve borrowed it. At result, you make money in two places at the same time. That’s why your investments get supercharged. This isn’t a new technique. It’s a refined strategy used by some of the wealthiest families in history, and it uses century old rock solid insurance companies as its backbone. Turbocharge your investments. Visit Wealth formula banking.com. Again, that’s wealth formula banking.com. Welcome back to the show, everyone. Today. My guest on Wealth Farm I podcast is, uh, John O’Neill. He’s a, a professor of hospitality management and director of the Hospitality Real Estate Strategy Group at Pennsylvania State University. Uh, he spent decades studying hotel valuation performance, Cabo flows and economic cycles in in the lodging industry. John, thanks for, uh, joining us. You’re welcome. So, you know, we’re talking offline. You’ve been in the hotel business for a long time. We’re trying to figure out how to frame this thing because you know, I mean there are, I know there are certainly people in. Uh, who in, in my group and my listeners, my community who are in the hotel space, but a lot of ’em aren’t. And you know, they’ve been thinking about, well, you know, we do a lot of apartment buildings, that kind of thing. Um, you know, what else should we be thinking about? And so, you know, when we hear, uh, hotel, um, they’re thinking of hospitality. But from an investor’s perspective, I guess the first question ask is what kind of real estate asset is a hotel? And, and may, may maybe just sort of fundamentally how different it is. 
From apartments office or retail? Yeah, that’s a great question because hotels are fundamentally different. But what I’ve seen over the past few years as well is hotels have increasingly been considered to be a component of commercial real estate. So we’ve always thought about office and retail and residential and industrial as being components of commercial real estate, but increasingly. Investors are thinking about hotels that way as well, because some of the high risk aspects of hotels have been moderated a little bit. So they are still considered to be a high risk and potentially high reward category, but they’re much more cyclical than those other types of businesses. So if we look at apartment leases, maybe being a year or two. Office leases may be being three to five years and retail leases could be five or 10 years. The leases in hotels are one or two nights, so there’s upside, but there’s risk involved in that as well. So when there’s pressure in a market to increase rates, like here where I am in University Park, Pennsylvania, when we have a home football game. We can see hotels with average daily rates of maybe a hundred to $200 a night charging seven, eight, $900 per night, and filling up on those rates. You can’t do that in an office building or in a retail center. And so there’s great opportunity when demand increases to push up rates and to greatly benefit from that. The flip side of courses on Sunday night when all those guests leave. You might be back to a hundred dollars a night and running 20 or 30% occupancy. Do hotels kind of follow the rest of real estate in terms of market cycles though? Yeah, it depends. I, I would say in many cases they’re actually leaders, which again, double-edged sword there. So for, yeah, when we plummeted in 2020 because of COVID hotels were probably the first category really to see it. Demand dried up overnight, and you go back to September 11th, 2001 on September 12th, 2001, a lot of hotels were empty and that wasn’t the case with office buildings and retail centers. The flip side, of course, is when the economy started improving, hotel operators could start pushing their rates very quickly. And so other categories of commercial real estate didn’t receive those benefits. Yeah, I mean, obviously there’s certainly gonna be. Real estate that’s often used that that’s often using debt and, you know, probably has the same sort of, uh, issues with regard to cap rate compression or decompression based on interest rates as well. Right, right. So, um, where are we? Right? What would you say right now, like, I mean, we know that. Our, we’ve been following very closely on the multifamily side. You know, prices are depressed. I mean, from 2022, we’re looking at probably 30% to 40%. Most, most, uh, large apartment complexes are not moving because people don’t wanna sell into a down market. But when they are, they’re being sold at 30, 40% discounts compared to 2022. Where is the, where is the hotel? Market at right now? It it, it’s challenged because right now we’re seeing discrepancies between where buyers wanna buy and sellers wanna sell. We’ve started to see some movement because some sellers have come down a bit in pricing because of what we’ve seen in 2025, the market really did soften as far as the hotel business is concerned. So in 2025. We really saw no increase in occupancy and in many markets we saw some decreases in occupancy. We are still seeing average daily rates going up a little bit, so yeah. 
Might be worth maybe a quick step backward that the two key indicators in terms of hotel lodging performance would be occupancy and average daily rate. With occupancy being the extent to which the guest rooms are occupied and average daily rate being the average price somebody is paying. We can talk about the mathematics of those, but, um, just I think conceptually, hopefully that makes sense. But, so, you know, at this point what we’re seeing is average daily rates are still going up a little bit, and the forecasts for 2026 are. Pretty much more of the same, where we’re not expected to see great occupancy increases, but we are anticipating that the average daily rates might go up a little bit. Uh, and, and in fact we might see occupancies decline slightly. And, uh, we might see, uh, average daily rates still possibly going up a little bit. That’s usually an indicator of being late in the cycle, you know, being somewhere near the peak and, and, you know, if the trough was 2020. Which was a pretty deep trough. 2021, we started seeing improvements and we saw great improvements in 22, 23, and 24, and so it’s looking like the end of a cycle. The thing we don’t really know for sure is, is there some reason that we’re going to really go into a substantial down period or are we actually in a situation where we’re going to have another upcycle? Yeah. You know, the other thing I was curious about too, like when you talk about these cycles for hotels, even within hotels, there are certainly, you know, different types of hotels. You know, there’s the boutiquey ones that are pe really pure tourism versus the ones that, okay, well maybe they are, you know, good for football games or. There’s others that are people use for, for, for work frequently, right? They’re, they’re just passing through for, for work trips. Do you, is there, um, is that difficult to extricate those types of different economies running at the same time? It’s not, I, I don’t know that it’s that difficult, you know, just to give you a little bit about my background, I’ve been a professor for some time, but prior to being a professor I worked for. Three of the four major hospitality organizations, namely Marriott, IHG, and Hyatt. Uh, and so going back into the 1980s when I was doing feasibility studies for proposed Marriott hotels, we, in most markets, analyzed three markets segments. And, and you essentially said what they are commercial business, which are your business travelers, leisure business, which are your pleasure travelers, and then groups, which includes conventions and, and those are still the three major market segments in most markets. In, in some markets. For example, if you’re approximate to a major international airport, there’s usually a fourth segment, which is that fourth segment is airline crew business, which is, is very different than the other three because. Whereas the other three go up and down throughout, not just the year, but throughout the week. Airline crew business tends to be stable throughout the year, so it, it, it’s in your hotel 365 nights outta the year. So it’s, it’s a very low risk, but also a very low rated market segment. So it, I don’t know if that’s that complicated, but it just needs to be broken out as you delineated it, which is that there’s. Three or four market segments in any market. 
And in terms of studying a hotel for development or for investment, it’s necessary to understand not just what’s going on on the supply side, in other words what’s going on in the hotels, but what’s going on in the demand side as well. So give you an example. I recently did a feasibility study in a market, which is a big pharmaceutical market. So I actually spent time with major pharmaceutical people talking about, where are you staying now? Why are you staying there? Are you a member of the Frequent traveler program? How does your business vary throughout the year? What rates are you paying? What facilities and amenities are you seeking? And things like that. So to really understand the demand because that demand segment. So important in that market. So it is ultimately a street corner business and what’s going on in a specific market in terms of the mix of commercial, leisure and group business and possibly other market segments. Really is something that we have to study in depth when we conduct a feasibility study or an appraisal for hotel. I, I don’t know if I mentioned, I’m a licensed real estate appraiser too, and although my licenses allow me to appraise any type of property, I only appraise hotels. Got it. Businesses fundamentally changed pre COVID and post COVID. I would assume that there’s probably less travel. Are you seeing impact? On those types of hotels from that kind of, you know, less travel, more zoom type activity. Yeah. And, and that’s a great, that’s a great follow up because with those market segments, although the segments are the same. The demand from each of those segments really has different, and, and as you said, it really changed substantially in COVID. It, it, it’s fascinating how once we were forced to use Zoom and, and other, you know, Microsoft teams and other technology like that, you know, we, we kind of did a kicking and screaming. But once we figured it out, we realized we didn’t get a lot done. Uh, now I spent last week in Los Angeles at America’s Lodging Investment Summit, and I go to this. Function every year, because I see many of the same people year after year, and the business cards might change, but it’s the same people involved in the hotel business, whether they’re brokers or investors or asset managers or consultants or appraisers. But in between. Each year I do a lot on Zoom with these people and you know, we can keep those relationships going. So it hasn’t eliminated, you know, in my personal case, my need to travel, but it has substantially reduced it. And I think a lot of other business people have seen the same thing. So if we look at the recovery since COVID, it was fascinating because the first market segment that recovered and recovered really strongly was leisure business and people, people see it as their right. To have a vacation and, and people were paying high rates, particularly in, in, in mountain locations and in beach locations. And so those rates came up really quickly. And then the group business followed. If people do wanna go to group functions like I did last week in la what has not recovered to the level of 2019 though is the business travel. Right. Interesting. So I, that’s probably a, uh, you know, and he, I can’t really see a particularly promising future for that Subsect either. Right. I think, in fact, bill Gates said it’s never going to be back to the, you know, he, he’s an investor in Four Seasons hotels, and he said it’ll never be back to the way it was in 2019. I don’t know if he’s right. 
I mean, because I, I still feel like we get a lot of things done. Face-to-face, person to person that we really can’t do in Zoom. I don’t think Zoom is great for establishing relationships. I, I still think that we need face-to-face, uh, personal contact. But, you know, that might be just my perspective because I’ve been working in hotels since I was a teenager and I’m really far from being a teenager now. And, you know, I, I’ve been indoctrinated in this philosophy of the importance of face-to-face contact. But yeah, you know, that might be generational. You with a younger generation. Yeah. Yeah, absolutely. Um, you know, just kind of going back to the difference differences, uh, with compared to other real estate hotels, ultimately the, one of the big differences, they’re operating businesses, right? I mean, they’re not that large. Apartment buildings aren’t, but they’re is I think, a specific sort of operational execution that matters a lot in hotels. So, you know, in invest, when investors are kinda looking at that, I mean, they, they should probably be not looking at it as nearly as passive as other real estate investments. Is that fair? I, I think that’s very fair because I think, you know, it, it shows what’s happened in terms of the market with real estate investment trust. Because I’ve sold my entire position in hotel real estate investment trust and, and as you probably know, if we look at real estate investment trust. Different categories in, in commercial real estate, hotels lag, which is fascinating because everything else we’ve been talking about explains why hotel returns tend to outperform other classes of commercial real estate. More volatility, but higher returns on average. If you can withstand the long period, uh, that you need to be an investor. On real estate investment trust, it’s the opposite. Hotels actually lag and, and I think it really is because of exactly what you’re talking about, which is that they really are like an operating business where there’s also real estate as opposed to a real estate play where it’s almost like there’s an annuity of rent that is very easily projected, uh, in hotels. You know, we, we. Project all the time how they’re going to perform. But you know, you know, I hope my projections are very good, but there’s always things that can COVID. For example, you know, now there’s a virus in, in India that you know might be coming and, you know, we don’t know, will this be substantial or will it be really minor in the Americas? We really don’t know. Uh, that won’t have a big effect on, on other classes of real estate investment trust, but. It could have a big effect in hotels, so, so the unknowns in hotels are very high. And then when you combine that with the fact that they are an operating business, which are very labor intensive and wage rates are going up. So the cost structure and the management of that cost structure becomes. Very important and the expertise of the hotel managers becomes very important. And so, yeah, like you say, other classes of commercial real estate or, or institutional real estate investments have an operational component. It’s much greater when it comes to hotels. So I actually have a friend who’s an, um, owns, uh, a few boutique hotels here in, in California, and he was telling me one of the things that he’s kind of worried about is, um, you know, they, they’re, they have some, um. 
Some mandates coming up with regard to, you know, minimum wage and, and all these things that, uh, hotel workers have to get, uh, give you just outta curiosity. I mean, most of my audience is not in California. I am, but have you heard about this? Can you tell us a little bit about those pressures? Yeah, I have heard about it. And there’s, there’s forces on the other side as well, namely the American Hotel and Lodging Association, which represents hotel owners, managers, and franchisers. And so they have a voice in these things as well. But the, the, the forest, particularly in places like California and, and in the west coast in general, we’ve seen it in Seattle as well. Um, you know, in, in terms of increasing minimum wages to rates that, that are shocking to me. Um, you know, that’s, that’s a big issue. You know, you don’t see it as much in the middle of the country, but you do see it on the coast and particularly in the, on the West Coast. So, you know, if we’re looking at projections, say into 2026 and, and perhaps beyond, we expect in many cases to be seeing higher growth in wage expenses than we expect to see growth in RevPAR, which is room revenue, preoccupied room, which is just occupancy times average daily rate. So the, the overall revenue is expected, at least in the short term, to grow more slowly. Than expenses and, and wages are really driving a lot of it. And then anything that’s affected by wages, so insurance, for example, property taxes, other expenses are really growing at this stage more than what we’ve seen in terms of revenue growth. So that’s, that’s a challenge right now. The, the question I think really then is how much will AI affect that and to what extent will guests become more comfortable with checking in? On an iPad type of a situation as opposed to seeing a person face to face, and there’s probably generational differences there. What it is forcing hotel operators to do is the same kinds of things that restaurant operators have been forced to do, which is find ways to use technology and actually have the guests face the technology and get the guests comfortable with that. In terms of things like check in and check out, you know, but still in hotels the rooms have to be cleaned and, and although there’s robots that. You know, they’re nowhere near what, where they need to be to actually clean Hotel guestroom jet, at least in any sort of economically viable way. But, you know, the long-term question is to what extent will the industry be adopting AI and other technology in order to address that issue? Because that’s what’s going to happen. It’s, it’s, you know, it’s not just going to be a situation where. The operators will accept paying higher wages and have the same number of employees in each hotel. Right. Um, branding, you know, sort of confusing to a lot of people. Not in the space, but you know, what role do hotel brands actually kind of play in, in protecting revenue and value? Um, and I guess when does a brand help an owner versus become a constraint? Yeah. You know, brands have been very important and, and I, I forget if I mentioned but of the, the big brand companies I’ve worked for three of them and, um. You know, they, they, they typically started as management companies. So originally companies like Hilton and Marriott primarily generated revenue through management fees. And so they own some of the real estate, although they’ve become asset light over the years and own very little, if any, anymore. Uh, but they do still manage hotels. 
So one thing that the brand companies do have is expertise in terms of management. That’s one of the fees that a branded hotel and a non-branded hotel would have as well, would be a management fee, which is usually expressed as a percentage of revenue. And sometimes there’s an incentive structure in there as well. But then there’s a franchise fee, which is just paying for the brand, and, and that’s usually as a percentage of total revenue, higher than the management fee. But what it does is it, it, it. Puts the property in a global distribution system, so the global distribution systems that brands like Marriott and Hilton and IHG and, and HIA have, uh, they. Generate heads and beds. You know, that’s, that’s the term we always, when I worked at Hyatt and Merritt, we always talked about heads and beds. Every night you’re trying to, trying to get people in the rooms. The brands do a lot to put heads and beds, you know, in a typical hotel with a good brand affiliation. Somewhere between probably a third and two thirds of the occupy rooms actually came in through the brand global distribution system, which historically was a toll free reservation system. And although the, you know, those still exist now, it’s really more of a focus on the online system and, and, and sometimes toll-free reservations and direct reservations. But, but that’s what the brand does. It, it, it ultimately is a generator of. So kind of just focusing on somebody who’s potentially thinking about hotels as an investment. So far, what I gleaned from you, and, and correct me if I’m wrong, is that timing probably isn’t perfect right now. We’re probably, you know, we’re probably in a, you know, a peak and you generally not a great idea to buy in peaks. Um. I personally, from what I understand, would stay outta California. You know, uh, you know, like my friend was saying that it was gonna make it very difficult for a lot of hotels to have their, you know, hotel restaurants even. And so he foresees like a lot of them having to close those down. Um, and then the, the next thing I think is, gosh, you really have to be cognizant of the, of the fact that, you know, work patterns are changing. And so maybe that’s not a good. Way to go, either. What other, what are some other big picture things that you think people ought to be thinking about as they evaluate the space? Yeah. Well, I think there’s a couple of things. One of which is. That is a street corner business. So it really depends on what street corner you’re in. Uh, I’ve done some research just on how hotels perform in university towns versus other locations because, for example, there are brands now called graduate hotels, which eventually was acquired by Hilton, uh, and, uh, scholar Hotels and, and these properties are university town hotels. They’re doing okay. You know, they’re, they’re doing okay. If you look at how universities operate, we’ve seen some Ivy League schools pay 60, $80 million or more just to make sure they keep that billion dollars a year coming in from the federal government that they, they get for research grants and, and we’ve seen, you know, look at what’s going on with NIL now in terms of, of university sports. Universities clearly are willing to. You gen willing to spend a lot of money to keep doing what they do, which is, you know, they, they generate a lot of research and I’m talking about. Big universities now, uh, you know, a lot of research and, and there’s a sporting business aspect to universities as well. 
So university towns are okay, and, and what I ultimately found in my research is they’re much less cyclical than the average. So, you know, we talk about the risk of hotels as things go up and things go down and things go up and down. That doesn’t happen as much in university towns. You know, big universities don’t close and, and don’t even substantially change their business model. So it really depends on, on where you’re located. And then there’s certain cities as well, you know, people, you know, I, I don’t have to go into detail about my last visit to San Francisco and how weird it was, and I was with students and, and told my female students don’t go out at night alone. I mean, it was, it was, it was really freaky, but. San Francisco now might be a place to invest. Now San Francisco probably has bottomed out. Uh, and the same might be true with New York. So, you know, it really depends on where you’re going. I, I think in general, yeah, you know, there’s, there’s concerns, but even so, you know, I think it’s still might be a good time to invest in. Good quality hotel companies, just, you know, in terms of the stock market and, and equity in, in businesses like Marriott and, and Hilton because their franchise fees and their management fees are a percentage of total revenue. So hotels that are not profitable, that are a member of those brand affiliations are still paying. Into those systems and you know, hopefully the goal is that these properties become profitable, but even while they’re not profitable, they owe franchise fees and in some cases management fees as well. So I think there are a lot of ways to still invest in the hotel business. It’s just what vehicles are being used and where. So, you know, it sounds a little overwhelming, um, for someone who, again, who’s new to the space. Any suggestions on how somebody might just learn more about this ecosystem and, you know, start to go down this path of potentially becoming, you know, a hotel investor? Yeah. Well, first thing is, you know, we talked about ai. AI is pretty good for helping people to learn. So if you wanna learn about the hotel business, you can go and have a really good conversation with chat GPT about what makes it click and where could the opportunities lie today. Uh, you know, I’ve gone over the past year from essentially not using AI at all to using it essentially every day. And so that’s a great way because that’ll access a lot of, there, there’s trade journals, for example, but it’ll access those things. Uh, the conference, like I went to last week, the America’s Lodging Investment Summit, which is in LA every year is a. Is a great place to learn as well. There’s, there’s wonderful sessions and that conference is attended by everybody from Anthony Capano, who’s the CEO of Marriott, down to people involved in real estate and investments in the hotels and, and who essentially make their living. Off of those as brokers, appraisers, consultants, asset managers and things like that. So, so there’s ways online to do it and there’s ways to do it actually by attending conferences as well. Yeah. A good broker as well. Right. I mean, you know, going back to my, my friend who, who’s become a very successful hotelier, the first one he bought, he threw a broker and he said he learned everything about hotels that he knows from that guy. Um. So that’s probably, it probably tells you something as well. Yeah. And, and there are some excellent hotel brokers. There’s some who are national in scope and some who are local in scope. 
So again, it depends on where you’re thinking you might wanna be investing. Uh, but, but there’s some great local brokers, but then there’s national firms like JLL and CBRE and Hunter, uh, that, you know, they have really good people who are very knowledgeable about the hotel business. Yeah. John, thanks so much for, uh, joining us here on Wealth Formula Podcast and giving us sort of an overview of the, uh, um, hotel, uh, real estate, uh, uh, asset class. You bet you make a lot of money, but are still worried about retirement. Maybe you didn’t start earning until your thirties. Now you’re trying to catch up. Meanwhile, you’ve got a mortgage, a private school to pay for, and you feel like you’re getting further and further behind. Now, good news, if you need to catch up on retirement, check out a program put out by some of the oldest and most prestigious life insurance companies in the world. It’s called Wealth Accelerator, and it can help you amplify your returns quickly, protect your money from creditors, and provide financial protection to your family if something happens to. The concepts here are used by some of the wealthiest families in the world, and there’s no reason why they can’t be used by you. Check it out for yourself by going to wealth formula banking.com. Welcome back to the show everyone. Hope you enjoyed and again, uh, hey hotels. Think about it. I guess. Uh, I continue. I will continue to do so, uh, especially given my buddy’s success in this space. Um. Although, I will tell you, I probably am not a boutique hotel guy. Um, you know, I don’t, I don’t know that I could make it super fancy, you know? And then on the other hand, you hear about these, uh, hotels that are. For the people traveling through and they’re not doing this so great. So maybe wait till that we hit that, um, that trough that he was talking about, he said we’re kind of at a peak right now. Anyway, that’s it for me. Uh, this week on Wealth Formula Podcast. This is Buck Joffrey signing off. If you wanna learn more, you can now get free access to our in-depth personal finance course featuring industry leaders like Tom Wheel Wright and Ken McElroy. Visit well formula roadmap.com.
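A quick illustration of the hotel metrics John O'Neill defines in the interview above: occupancy, average daily rate (ADR), and RevPAR, which he notes is simply occupancy times ADR (RevPAR is conventionally read as revenue per available room). The short Python sketch below is ours, not something from the episode; the 100-room hotel and the Saturday/Sunday figures are hypothetical stand-ins for the football-weekend swing he describes.

    # Minimal sketch of the hotel KPIs discussed in the episode: occupancy,
    # average daily rate (ADR), and RevPAR (revenue per available room).
    # All figures below are hypothetical illustrations, not data from the show.

    def occupancy(rooms_sold: int, rooms_available: int) -> float:
        """Share of available rooms sold on a given night."""
        return rooms_sold / rooms_available

    def adr(room_revenue: float, rooms_sold: int) -> float:
        """Average daily rate: room revenue divided by rooms sold."""
        return room_revenue / rooms_sold

    def revpar(room_revenue: float, rooms_available: int) -> float:
        """Revenue per available room; equivalently occupancy * ADR."""
        return room_revenue / rooms_available

    # A hypothetical 100-room hotel on a home-football Saturday vs. the
    # following Sunday, echoing the volatility described in the interview.
    rooms = 100
    saturday = {"rooms_sold": 98, "room_revenue": 98 * 800.0}  # ~$800/night, nearly full
    sunday = {"rooms_sold": 25, "room_revenue": 25 * 100.0}    # ~$100/night, 25% occupancy

    for name, night in [("Saturday", saturday), ("Sunday", sunday)]:
        occ = occupancy(night["rooms_sold"], rooms)
        rate = adr(night["room_revenue"], night["rooms_sold"])
        rp = revpar(night["room_revenue"], rooms)
        print(f"{name}: occupancy={occ:.0%}, ADR=${rate:,.0f}, RevPAR=${rp:,.2f}")
        # Cross-check the identity RevPAR == occupancy * ADR
        assert abs(rp - occ * rate) < 1e-9

Running it makes the point from the episode concrete: the same property can swing from a RevPAR of roughly $784 on game night to $25 the next evening, which is the kind of short-lease volatility that longer-lease asset classes never see.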

Business of Tech
AI Raises Workloads and Burnout: HBR Study, Medical Risk, and New Governance for MSPs

Business of Tech

Play Episode Listen Later Feb 11, 2026 13:33


Artificial intelligence (AI) is intensifying workloads rather than alleviating them, leading to increased burnout and declining decision quality, according to findings published in the Harvard Business Review and cited by Dave Sobel. The episode underscores that AI lowers the cost of producing outputs such as drafts and summaries but raises throughput targets and introduces new verification burdens. Economic gains from AI remain concentrated where capital and skilled labor already exist, while negative impacts, like displacement and wage pressure, are felt locally. These dynamics highlight the need for robust governance, particularly for managed service providers (MSPs) who deploy AI solutions.
Supporting studies referenced include the International AI Safety Report, which details heightened uncertainty around AI development and its risks, as well as research from Oxford documenting the unreliability of AI chatbots in real-world medical decision-making. Experts warn that rapid automation without corresponding improvements in control systems creates structural constraints, making traditional software governance frameworks inadequate for unpredictable AI behaviors. Without proactive measures, these gaps risk exacerbating economic inequality and liability in regulated environments.
Additional developments include OpenAI's release of upgraded agent features (such as GPT-5.2, improved context retention, managed shell containers, and a new skills standard), presented as operational enhancements but raising concerns about black-box context handling, auditability, and dependency risk. T-Mobile's AI-powered live translation service offers greater convenience but eliminates audit trails, shifting compliance risk to customers and prohibiting independent verification. Cork Cyber's launch of an internal cyber risk score introduces further complexity, as the scoring methodology is embedded within a financial product structure and lacks transparent validation.
For MSPs and IT service leaders, the key takeaway is to treat new AI features and risk metrics as tools with significant tradeoffs. AI deployments should focus on governance layers that include workload caps, quality gates, and measurable outcomes rather than simply accelerating productivity. New features should be used for low-stakes workflows and carefully avoided in high-risk or regulated contexts unless auditable controls and deterministic checkpoints are established. Vendor-managed risk scores and warranties require independent validation before being positioned as client-facing truth standards.
Four things to know today:
00:00 Harvard, Oxford Studies Find AI Raises Workload, Delivers Inadequate Medical Advice
05:01 OpenAI Updates Deep Research and Adds New Agent Runtime Capabilities
07:33 T-Mobile Tests Real-Time Call Translation Built Into Its Network
09:17 Cork Cyber Rolls Out New Risk Score for Managed Service Providers
This is the Business of Tech. Supported by: ScalePad, Small Biz Thoughts Community

The Information's 411
Musk's xAI Loses Two Co-founders, Why AI Automation is Different, Wealth Management Stocks Fall

The Information's 411

Play Episode Listen Later Feb 11, 2026 39:27


Elon Musk Reporter Theo Wayt breaks down the continuing exodus of co-founders at Musk's xAI and what it signals for the company's model timeline. The Information's Anita Ramaswamy then explains why ServiceNow is currently undervalued despite the broader SaaS market sell-off. Matt Shumer, GP of Shumer Capital, joins to discuss his viral essay on why GPT-5.3 Codex represents a unique inflection point for labor, and Gerber Kawasaki Wealth & Investment Management's Ross Gerber discusses how AI is disrupting wealth management and why he's concerned about leadership at Tesla and SpaceX. Articles discussed on this episode: https://www.theinformation.com/articles/investors-missing-servicenow https://www.theinformation.com/briefings/shopify-shares-jump-forecasts-continued-revenue-growth https://www.theinformation.com/newsletters/the-briefing/risk-muskiverses-steady-turnover https://www.theinformation.com/briefings/departures-accelerate-elon-musks-xai-yet-another-cofounder-leaves Subscribe: YouTube: https://www.youtube.com/@theinformation The Information: https://www.theinformation.com/subscribe_h Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda TITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts. Follow us: X: https://x.com/theinformation IG: https://www.instagram.com/theinformation/ TikTok: https://www.tiktok.com/@titv.theinformation LinkedIn: https://www.linkedin.com/company/theinformation/

All TWiT.tv Shows (Video LO)
Security Now 1064: Least Privilege

All TWiT.tv Shows (Video LO)

Play Episode Listen Later Feb 11, 2026 156:39 Transcription Available


From EU fines that never get paid to cyber warfare grounding missiles mid-battle, this week's episode uncovers the untold stories and real-world consequences shaping today's digital defenses. How is the EU's GDPR fine collection going? Western democracies are getting serious about offensive cybercrime. The powerful cyber component of the Midnight Hammer operation. Signs of psychological dependence upon OpenAI's GPT-4o chatbot. CISA orders government agencies to unplug end-of-support devices. How to keep Windows from annoying us after an upgrade. What is OpenClaw, how safe is it to use, and what does it mean? Another listener uses AI to completely code an app. Coinbase suffers another insider breach. What can be done? Show Notes - https://www.grc.com/sn/SN-1064-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, Spinrite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: zscaler.com/security hoxhunt.com/securitynow trustedtech.team/securitynowCSS guardsquare.com

Radio Leo (Video HD)
Security Now 1064: Least Privilege

Radio Leo (Video HD)

Play Episode Listen Later Feb 11, 2026 156:39 Transcription Available


From EU fines that never get paid to cyber warfare grounding missiles mid-battle, this week's episode uncovers the untold stories and real-world consequences shaping today's digital defenses. How is the EU's GDPR fine collection going? Western democracies are getting serious about offensive cybercrime. The powerful cyber component of the Midnight Hammer operation. Signs of psychological dependence upon OpenAI's GPT-4o chatbot. CISA orders government agencies to unplug end-of-support devices. How to keep Windows from annoying us after an upgrade. What is OpenClaw, how safe is it to use, and what does it mean? Another listener uses AI to completely code an app. Coinbase suffers another insider breach. What can be done? Show Notes - https://www.grc.com/sn/SN-1064-Notes.pdf Hosts: Steve Gibson and Leo Laporte Download or subscribe to Security Now at https://twit.tv/shows/security-now. You can submit a question to Security Now at the GRC Feedback Page. For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, Spinrite 6. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: zscaler.com/security hoxhunt.com/securitynow trustedtech.team/securitynowCSS guardsquare.com

Technology for Business
Navigating AI Models: Finding the Right Fit

Technology for Business

Play Episode Listen Later Feb 11, 2026 37:04


In this episode, we are joined by Cole, a Power Platform Developer, and Kelsey, a Graphic Designer and Brand Strategist, to discuss the key differences between various AI models. The conversation emphasizes the importance of trial and error, context, and the specific needs of different departments within a business. We also touch on the practical applications of GPT, Copilot, and other tools, and explore how businesses can better align their AI strategies. Tune in for a deep dive into AI utilities, effective prompting, and strategic decision-making to boost your business productivity and efficiency. 00:00 Introduction to AI Models 00:27 Differences Between AI Models 01:14 Practical Applications and Preferences 03:05 AI Tools and Their Use Cases 04:22 Choosing the Right AI Tool 08:20 Memory and Context in AI 15:24 Effective Prompting and Tone 20:10 Building Context in AI Conversations 22:26 Balancing Business Tools and Employee Preferences 23:40 Understanding the Gap Between Business and Personal Tool Use 24:20 Tailoring Tools to Department Needs 25:55 The Importance of Pilot Programs for AI Tools 28:48 Governance and Safe Use of AI Tools 30:58 The Creative Process of Using AI Tools 34:22 Effective Communication with AI Models 37:04 Conclusion and Future Discussions

Speakernomics
How Personal Branding and AI Shape the Future of Professional Speaking

Speakernomics

Play Episode Listen Later Feb 10, 2026 27:22


In this episode of Speakernomics, Terry Brock, CSP, CPAE welcomes renowned personal branding strategist and Speaker Hall of Fame member Rory Vaden, CSP, CPAE, for an engaging exploration on building, refining, and leveraging your personal brand as a professional speaker. In this session, Rory will: * Define and distinguish the real meaning of personal branding as the formalization, digitization, and monetization of your reputation—moving beyond logos and social media to focus on trust and authenticity. * Demonstrate how to effectively leverage emerging tools, including AI and custom GPT bots, to streamline and personalize your speaking business while maintaining your unique human touch. * Formulate actionable strategies for breaking through industry noise, finding your uniqueness, and applying proven frameworks to grow your business, secure higher speaking fees, and expand your platform. * Whether you're an established professional or just building your brand, this discussion will help you evaluate the strengths of your current reputation and construct a targeted plan to enhance your recognition and reach in the speaking marketplace. Become an NSA Member! https://nsaspeaker.org/join/#membership THRIVE 2026! You NEED to be here! https://thrive.nsaspeaker.org/ Learn more about your ad choices. Visit megaphone.fm/adchoices

Small Business Sales & Strategy | How to Grow Sales, Sales Strategy, Christian Entrepreneur
106. Branding Isn't Your Logo — It's the Feeling People Associate With You | Small Business Branding Insights with Jan Touchberry

Small Business Sales & Strategy | How to Grow Sales, Sales Strategy, Christian Entrepreneur

Play Episode Listen Later Feb 10, 2026 41:08 Transcription Available


In this insightful episode of Grow My Small Business, Lindsay sits down with Jan Touchberry from The Brand Collaborative to uncover the truth behind branding for small business owners, especially those with faith-driven businesses. They discuss why branding is much more than just a logo—it's about the feelings and associations your brand creates with your clients. This episode is perfect for Christian entrepreneurs and small business owners looking to build a brand that attracts ideal clients and fosters business growth. Discover the psychology of branding, including how colors influence perception, and why defining your ideal client, values, and desired feelings should come before any visual design. Jan shares practical tips on incorporating your faith into your branding authentically, avoiding performative expressions, and building trust in a challenging 'trust recession' environment. Plus, learn how AI tools like custom GPT can help capture your unique brand voice without sounding robotic. Whether you're just starting or refining your brand, this episode offers actionable strategies for creating consistency, building strong relationships, and growing your faith-driven small business. Tune in for expert insights on branding strategy, faith in business, and business growth that will empower you to make an impactful and authentic impression. Connect with Jan Touchberry at https://jantouchberry.com Jan's Podcast: Her Faith At Work

Tech Café
AI agents finally have their own social networks

Tech Café

Play Episode Listen Later Feb 10, 2026 77:42


Claude versus ChatGPT, Elon Musk's promise of data centers in space, the competition between Claude Opus 4.6 and GPT 5.3 Codex, Super Bowl ads, AIs that socialize, and the promises of GTA 6. Support me on Patreon Find me on YouTube Chat with us on Discord Codex side by side: a photo finish for Anthropic and OpenAI, so who wins the hype race? That's it, AI is the new crypto (.com). Sam Altman roughed up by the cougar moms. Not expensive enough, my son! Who is winning the money race? Lobster alert: the safety debate is far from being Claw-sed... Agent roundup: 4claw, Moltroad, Rentahuman, Moltmatch, Moltbunker, SpaceMolt and Molthub! Ketamine, casually... One million! One million! Elon is perfectly serious about his space data centers. But it may not be that simple. Born under X: an algorithm not as transparent as it claims... Dead see scrolls: TikTok is too addictive for the EU. Trial on appeal: Google and the Department of Justice do a crossover. Video games: GTA 6 soon to be banned in France? No. A record for the Switch, not for the Xbox... And what about the Gabecube?! Participants: A show prepared by Guillaume Poggiaspalla, presented by Guillaume Vendé

The Marketing AI Show
#196: SaaSpocalypse, Claude Super Bowl Ad, SpaceX Acquires xAI & Claude Opus 4.6

The Marketing AI Show

Play Episode Listen Later Feb 10, 2026 81:08


Is the SaaS business model dead? Wall Street just wiped out $300B in software value as fears grow that AI agents will replace human seats. Paul and Mike break down the market drop, Anthropic's Super Bowl ads targeting OpenAI, and the rise of "Move 37" moments where experts admit AI superiority. Plus: SpaceX buys xAI, Claude Opus 4.6, and the $650B race for compute. Show Notes: Access the show notes and show links here Click here to take this week's AI Pulse. Timestamps: 00:00:00 — Intro 00:04:14 — AI Pulse Results 00:06:24 — SaaS Apocalypse 00:23:53 — Anthropic Super Bowl Ad 00:33:56 — The Move 37 Moment for Everyone 00:47:39 — SpaceX Acquires xAI 00:50:55 — Claude Opus 4.6 00:56:00 — GPT-5.3 Codex 00:59:10 — OpenAI Frontier 01:04:48 — The AI Capex Wars 01:11:01 — Latest on AI Impact on Jobs 01:14:52 — Agentic CRMs 01:17:41 — AI Product and Funding Updates Today's episode is also brought to you by our AI for Agencies Summit, a virtual event taking place from 12pm - 5pm ET on Thursday, February 12. The AI for Agencies Summit is designed for marketing agency practitioners and leaders who are ready to reinvent what's possible in their business and embrace smarter technologies to accelerate transformation and value creation. There is a free registration option, as well as paid ticket options that also give you on-demand access after the event. To register, go to www.aiforagencies.com Visit our website Receive our weekly newsletter Join our community: Slack LinkedIn Twitter Instagram Facebook Looking for content and resources? Register for a free webinar Come to our next Marketing AI Conference Enroll in our AI Academy

GOTO - Today, Tomorrow and the Future
Handling AI-Generated Code: Challenges & Best Practices • Roman Zhukov & Damian Brady

GOTO - Today, Tomorrow and the Future

Play Episode Listen Later Feb 10, 2026 29:02


This interview was recorded for GOTO Unscripted. https://gotopia.tech Check out more here: https://gotopia.tech/articles/419 Roman Zhukov - Principal Architect - Security Communities Lead at Red Hat Damian Brady - Staff Developer Advocate at GitHub RESOURCES Roman: https://github.com/rozhukov https://www.linkedin.com/in/rozhukov Damian: https://bsky.app/profile/damovisa.me https://hachyderm.io/@damovisa https://x.com/damovisa https://github.com/Damovisa https://www.linkedin.com/in/damianbrady https://damianbrady.com.au Links: https://www.redhat.com/en/blog/ai-assisted-development-and-open-source-navigating-legal-issues DESCRIPTION Roman Zhukov (Red Hat) and Damian Brady (GitHub) explore the evolving landscape of AI-assisted software development. They discuss how AI tools are transforming developer workflows, making developers about 20% faster on simple tasks while being 19% slower on complex ones. The conversation covers critical topics including code quality and trust, security concerns with AI-generated code, the importance of education and best practices, and how developer roles are shifting from syntax experts to system architects. Both experts emphasize that AI tools serve as amplifiers rather than replacements, with humans remaining essential in the loop for quality, security, and licensing compliance. RECOMMENDED BOOKS Phil Winder • Reinforcement Learning • https://amzn.to/3t1S1VZ Alex Castrounis • AI for People and Business • https://amzn.to/3NYKKTo Holden Karau, Trevor Grant, Boris Lublinsky, Richard Liu & Ilan Filonenko • Kubeflow for Machine Learning • https://amzn.to/3JVngcx Kelleher & Tierney • Data Science (The MIT Press Essential Knowledge series) • https://amzn.to/3AQmIRg Lakshmanan, Robinson & Munn • Machine Learning Design Patterns • https://amzn.to/2ZD7t0x Lakshmanan, Görner & Gillard • Practical Machine Learning for Computer Vision • https://amzn.to/3m9HNjP Bluesky Twitter Instagram LinkedIn Facebook CHANNEL MEMBERSHIP BONUS Join this channel to get early access to videos & other perks: https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!

Everyday AI Podcast – An AI and ChatGPT Podcast
Ep 709: OpenAI and Anthropic battle each other, SpaceX and xAI merge, AI coding takes spotlight and more

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Feb 9, 2026 44:14


Tandem Nomads - From expat partners to global entrepreneurs!  Build a successful business and thrive in your global  nomadic

Listen on your podcast app. Resources Of This Episode: Find this episode in video format on Spotify and Youtube. Get access to Nina – The Niche Navigator, my custom GPT designed to help you clarify your positioning when things feel messy: Download here. Interested in working with me? Schedule a free assessment call here.

Bay Current
BONUS: Will there be doctors in the future? AI's effect on workplaces and medicine

Bay Current

Play Episode Listen Later Feb 9, 2026 31:35


Only 9% of workplaces say they're fully staffed in a recent Robert Half survey. Is that because of AI? And when your doctor uses ChatGPT (and he does), what does that mean for the future of medicine? Also, more young people are having heart attacks. Here are the warning signs.

I’ve Got Questions with Mike Simpson
BONUS: Will there be doctors in the future? AI's effect on workplaces and medicine

I’ve Got Questions with Mike Simpson

Play Episode Listen Later Feb 9, 2026 31:35


Only 9% of workplaces say they're fully staffed in a recent Robert Half survey. Is that because of AI? And when your doctor uses ChatGPT (and he does), what does that mean for the future of medicine? Also, more young people are having heart attacks. Here are the warning signs.

Adam and Jordana
BONUS: Will there be doctors in the future? AI's effect on workplaces and medicine

Adam and Jordana

Play Episode Listen Later Feb 9, 2026 31:35


Only 9% of workplaces say they're fully staffed in a recent Robert Half survey. Is that because of AI? And when your doctor uses ChatGPT (and he does), what does that mean for the future of medicine? Also, more young people are having heart attacks. Here are the warning signs.

Phil Matier
BONUS: Will there be doctors in the future? AI's effect on workplaces and medicine

Phil Matier

Play Episode Listen Later Feb 9, 2026 31:35


Only 9% of workplaces say they're fully staffed in a recent Robert Half survey. Is that because of AI? And when your doctor uses ChatGPT (and he does), what does that mean for the future of medicine? Also, more young people are having heart attacks. Here are the warning signs.

The Becky Beach Show
100. Stop Posting Daily: The AI Visibility System That Works While You Live Your Life

The Becky Beach Show

Play Episode Listen Later Feb 9, 2026 9:27 Transcription Available


Ever miss a day of posting and immediately feel like your business is going to fall apart? Becky's been there. In this milestone Episode 100, she shares the mindset shift that changed everything: daily posting isn't the same as daily visibility. You'll learn how to stop being the algorithm's unpaid intern and build an "always-on" visibility system using long-form content + AI repurposing—so your business stays consistent even when life is busy (hello, sound-off bathroom scrolling and the Tupperware drawer chaos). Becky breaks down her simple monthly batching routine, how she uses Opus Clip to turn one video into multiple captioned short clips, and the quick tweaks that make those clips perform better. She also covers other ways to stay visible without posting every day—like evergreen content, repurposing, email marketing, and collaborations. Challenge for the week: record one 10–15 minute piece of long-form content, upload it to Opus Clip, schedule the clips, and feel what it's like when your visibility doesn't depend on your energy that day. Links mentioned: Free trial of Opus Clip: CoachBeckyBeach.com/opusclip Join Business Beach Club (50% off first month with code FEB50OFF): BusinessBeachClub.com Show notes to get a free GPT and workbook to launch your digital product at BeckyBeachShow.com

WWL First News with Tommy Tucker
BONUS: Will there be doctors in the future? AI's effect on workplaces and medicine

WWL First News with Tommy Tucker

Play Episode Listen Later Feb 9, 2026 31:35


Only 9% of workplaces say they're fully staffed in a recent Robert Half survey. Is that because of AI? And when your doctor uses ChatGPT (and he does), what does that mean for the future of medicine? Also, more young people are having heart attacks. Here are the warning signs.

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Opus 4.6 and ChatGPT 5.3-Codex Are Here and the Labs Are at War

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Feb 6, 2026 27:47


Anthropic dropped Claude Opus 4.6 and OpenAI responded with GPT 5.3 Codex just 20 minutes later — the most intense head-to-head model release we've ever seen. Here's what each model brings, how they compare, and what the first reactions are telling us. In the headlines: Google and Amazon share their capex plans, and we're about to spend 2.5 moon landings on AI. Brought to you by: KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts Rackspace AI Launchpad - Build, test and scale intelligent workloads faster - http://rackspace.com/ailaunchpad Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow Optimizely Agents in Action - Join the virtual event (with me!) free March 4 - https://www.optimizely.com/insights/agents-in-action/ AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief Section - Build an AI workforce at scale - https://www.sectionai.com/ LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/ Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/ The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Interested in sponsoring the show? sponsors@aidailybrief.ai

Where It Happens
Claude Opus 4.6 vs GPT-5.3 Codex: Live Build, Clear Winner

Where It Happens

Play Episode Listen Later Feb 6, 2026 48:54


I sit down with Morgan Linton, Cofounder/CTO of Bold Metrics, to break down the same-day release of Claude Opus 4.6 and GPT-5.3 Codex. We walk through exactly how to set up Opus 4.6 in Claude Code, explore the philosophical split between autonomous agent teams and interactive pair-programming, and then put both models to the test by having each one build a Polymarket competitor from scratch, live and unscripted. By the end, you'll know how to configure each model, when to reach for one over the other, and what happened when we let them race head-to-head. Timestamps 00:00 – Intro 03:26 – Setting Up Opus 4.6 in Claude Code 05:16 – Enabling Agent Teams 08:32 – The Philosophical Divergence between Codex and Opus 11:11 – Core Feature Comparison (Context Window, Benchmarks, Agentic Behavior) 15:27 – Live Demo Setup: Polymarket Build Prompt Design 18:26 – Race Begins 21:02 – Best Model for Vibe Coders 22:12 – Codex Finishes in Under 4 Minutes 26:38 – Opus Agents Still Running, Token Usage Climbing 31:41 – Testing and Reviewing the Codex Build 40:25 – Opus Build Completes, First Look at Results 42:47 – Opus Final Build Reveal 44:22 – Side-by-Side Comparison: Opus Takes This Round 45:40 – Final Takeaways and Recommendations Key Points Opus 4.6 and GPT-5.3 Codex dropped within 18 minutes of each other and represent two fundamentally different engineering philosophies — autonomous agents vs. interactive collaboration. To use Opus 4.6 properly, you must update Claude Code to version 2.1.32+, set the model in settings.json, and explicitly enable the experimental Agent Teams feature. Opus 4.6's standout feature is multi-agent orchestration: you can spin up parallel agents for research, architecture, UX, and testing — all working simultaneously. GPT-5.3 Codex's standout feature is mid-task steering: you can interrupt, redirect, and course-correct the model while it's actively building. In the live head-to-head, Codex finished a Polymarket competitor in under 4 minutes; Opus took significantly longer but produced a more polished UI, richer feature set, and 96 tests vs. Codex's 10. Agent teams multiply token usage substantially — a single Opus build can consume 150,000–250,000 tokens across all agents. The #1 tool to find startup ideas/trends - https://www.ideabrowser.com LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/ The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/ FIND ME ON SOCIAL X/Twitter: https://twitter.com/gregisenberg Instagram: https://instagram.com/gregisenberg/ LinkedIn: https://www.linkedin.com/in/gisenberg/ Morgan Linton X/Twitter: https://x.com/morganlinton Bold Metrics: https://boldmetrics.com Personal Website: https://linton.ai

AI For Humans
OpenAI's GPT-5.3 vs Opus 4.6. Both Are Great. So... Are We Cooked?

AI For Humans

Play Episode Listen Later Feb 6, 2026 57:15


Anthropic drops Opus 4.6. Twenty minutes later, OpenAI fires back with GPT-5.3 Codex. This is the AI agentic coding arms race and it's moving fast. Both AI models are writing code that can write itself now. OpenAI is using 5.3 to improve its own tooling. Opus 4.6 is "voicing discomfort with being a product." We tested both and break down what actually matters for people building stuff.  Plus Kling 3.0 is out (and harder to prompt than you think), OpenClaw bots are hiring humans on rent-a-human.ai, Roblox launches prompt-to-3D creation, and robots are now doing 130K step challenges in negative 47 degree weather. THE MODELS ARE IMPROVING THEMSELVES NOW. EVERYTHING IS FINE. Come to our Discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // Anthropic's Claude Opus 4.6 https://www.anthropic.com/news/claude-opus-4-6 Orchestrating Agents in Claude Code https://x.com/lydiahallie/status/2019469032844587505?s=20 Opus 4.6 Beats Humans at analyzing complex human science docs https://x.com/_simonsmith/status/2019502742209769540?s=20 OpenAI GPT-5.3 Codex https://openai.com/index/introducing-gpt-5-3-codex/ 5.3 Codex First model instrumental in creating itself https://x.com/deredleritt3r/status/2019475360438493597 OpenAI Frontier https://openai.com/index/introducing-openai-frontier/ Anthropic's Superbowl Ads https://x.com/tomwarren/status/2019039874771550516?s=20 GPT-5 connected to an autonomous lab to do experiments https://x.com/OpenAI/status/2019488071134347605?s=20 OpenClaw https://openclaw.ai/ Rent-A-Human https://rentahuman.ai/bounties Kling 3.0 = Really good model https://x.com/Kling_ai/status/2019064918960668819?s=20 Kling 3.0 Moonlanding Mockumentary https://x.com/Kling_ai/status/2019228615775604784?s=20 PJ Ace's Way of Kings Intro https://x.com/PJaccetturo/status/2019072637192843463?s=20  We Are The Art | Brandon Sanderson's Keynote Speech https://youtu.be/mb3uK-_QkOo?si=EgKBjxZf4GE4DYIJ Gavin's Kling Fail https://x.com/gavinpurcell/status/2019436331999588371?s=20 FIGMA VECTOR AI https://x.com/moguzbulbul/status/2019106665732403708?s=20 Grok Imagine 1.0 Officially Launches https://x.com/xai/status/2018164753810764061?s=20 Roblox Launches 4D Creation https://x.com/Roblox/status/2019221624604750238 Unitree Robot Walks Across The Tundra (-47C!!) https://x.com/War_Radar2/status/2018315065414635813?s=20 KinectIQ's Humanoid Framework https://youtu.be/Y2DhzLPGdwY?si=iWibCGoc_h53yZz3 The LooksMaxxor https://x.com/Gossip_Goblin/status/2018362969025884282?s=20 Midi-Survivor https://x.com/measure_plan/status/2019082789379858577?s=20

This Week in Google (MP3)
IM 856: SecretlyBriti.sh - From Humans to Hive Minds

This Week in Google (MP3)

Play Episode Listen Later Feb 5, 2026 165:19 Transcription Available


The podcast dives into the explosive advances in agentic AI, where developers and even Fortune 100 companies are racing to use powerful tools like Gastown, despite their unfinished and sometimes dangerous edges. If you thought ChatGPT was a revolution, wait until you hear how developers are orchestrating armies of AIs with real-world impact. Anthropic's Move Into Legal Is Sinking Data Services Stocks Data centers in space makes no sense The hitchhiker's guide to Musk's SpaceX memo Two kinds of AI users are emerging. The gap between them is astonishing. Does AI already have human-level intelligence? The evidence is clear - Nature OpenAI will retire several models, including GPT-4o, from ChatGPT next month Jensen Huang says Nvidia would love to back an OpenAI IPO, and there's 'no drama' with Sam Altman Firefox will soon let you block all of its generative AI features Salesforce signs $5.6B deal to inject agentic AI into the US Army HHS Is Making an AI Tool to Create Hypotheses About Vaccine Injury Claims French office of Elon Musk's X raided by Paris prosecutor's cybercrime unit An AI Toy Exposed 50K Logs of Its Chats With Kids To Anyone With a Gmail Account Darren Aronofsky's AI Studio Used Artificial Intelligence Tools for Revolutionary War Animated Series — but Hired Human Actors to Voice Founding Fathers Forget Hinge or Bumble. This App Promises a Personal AI Matchmaker Scientists Launch AI DinoTracker App That Identifies Dinosaur Footprints Project Genie: Experimenting with infinite, interactive worlds Anthropic Takes Aim at OpenAI's ChatGPT in Super Bowl Ad Debut Move to Ban Social Media for Kids Gains Traction in Europe The Matrix Resurrections Is a Messy, Imperfect Triumph The Thatcher Effect and other Optical Toys Fascinating Research: AIs are highly inconsistent [i.e., random] when recommending brands or products Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steve Yegge Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: joindeleteme.com/twit promo code TWIT monarch.com with code IM zscaler.com/security helixsleep.com/machines

Rich Habits Podcast
Q&A: Buying a Second Home, Lofty Career Goals, & How to Handle Cash

Rich Habits Podcast

Play Episode Listen Later Feb 5, 2026 36:06


In this week's episode of the Rich Habits Podcast, Robert Croak and Austin Hankwitz answer your questions!

All TWiT.tv Shows (MP3)
Intelligent Machines 856: SecretlyBriti.sh

All TWiT.tv Shows (MP3)

Play Episode Listen Later Feb 5, 2026 165:19


The podcast dives into the explosive advances in agentic AI, where developers and even Fortune 100 companies are racing to use powerful tools like Gastown, despite their unfinished and sometimes dangerous edges. If you thought ChatGPT was a revolution, wait until you hear how developers are orchestrating armies of AIs with real-world impact. Anthropic's Move Into Legal Is Sinking Data Services Stocks Data centers in space makes no sense The hitchhiker's guide to Musk's SpaceX memo Two kinds of AI users are emerging. The gap between them is astonishing. Does AI already have human-level intelligence? The evidence is clear - Nature OpenAI will retire several models, including GPT-4o, from ChatGPT next month Jensen Huang says Nvidia would love to back an OpenAI IPO, and there's 'no drama' with Sam Altman Firefox will soon let you block all of its generative AI features Salesforce signs $5.6B deal to inject agentic AI into the US Army HHS Is Making an AI Tool to Create Hypotheses About Vaccine Injury Claims French office of Elon Musk's X raided by Paris prosecutor's cybercrime unit An AI Toy Exposed 50K Logs of Its Chats With Kids To Anyone With a Gmail Account Darren Aronofsky's AI Studio Used Artificial Intelligence Tools for Revolutionary War Animated Series — but Hired Human Actors to Voice Founding Fathers Forget Hinge or Bumble. This App Promises a Personal AI Matchmaker Scientists Launch AI DinoTracker App That Identifies Dinosaur Footprints Project Genie: Experimenting with infinite, interactive worlds Anthropic Takes Aim at OpenAI's ChatGPT in Super Bowl Ad Debut Move to Ban Social Media for Kids Gains Traction in Europe The Matrix Resurrections Is a Messy, Imperfect Triumph The Thatcher Effect and other Optical Toys Fascinating Research: AIs are highly inconsistent [i.e., random] when recommending brands or products Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau Guest: Steve Yegge Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: joindeleteme.com/twit promo code TWIT monarch.com with code IM zscaler.com/security helixsleep.com/machines