Podcasts about Figma

  • Podcasts: 1,225
  • Episodes: 2,624
  • Avg duration: 48m
  • New episodes: 1 daily
  • Latest: Mar 16, 2026

POPULARITY

(popularity trend chart, 2019–2026)


Latest podcast episodes about Figma

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: The 8 Moats of Enduring Software Companies: How to Analyse for Durability and Defensibility in a World of AI | Why Dropouts are "AI Maxing" the World & Remote Early-Stage Companies are Dying with Gokul Rajaram

Mar 16, 2026 · 78:21


Gokul Rajaram is one of the greatest operators-turned-investors of the last two decades, trusted as the go-to advisor for the greatest founders in the world. Today he serves as a board director at three public companies: Coinbase, Pinterest, and The Trade Desk. Prior to Marathon (his firm), Gokul served on the executive team at DoorDash and Block. Before Block, he served as Product Director of Ads at Facebook, and earlier in his career as a Product Management Director for Google AdSense. Gokul is also a prolific angel investor, having invested in 700+ companies, including Airtable, Figma, Groq, Runway, Supabase, and Vercel.

AGENDA:
03:53 — Investing Lessons from Google, DoorDash and Facebook
05:32 — Why Mark Zuckerberg is the Greatest Distribution Genius Alive
07:23 — Why Every Company Today Needs to be Multi-Product
09:16 — Negative Gross Margins: Are the Best Companies Actually Built on "Shit" Economics?
10:50 — The SaaS Apocalypse: Is the Entire Sector Going to Zero?
12:15 — The 8 Moats of Enduring Software Companies: How to Analyse Companies
14:50 — Why Brand is No Longer a Strong Moat (And What Replaced It)
16:13 — Salesforce vs. Atlassian: Which Systems of Record are Dying?
18:13 — Outcome-Based Pricing: Is This the Total Death of Seat Pricing?
20:16 — The Bolt-On AI Trap: Why Rebuilding Your Entire UX is Non-Negotiable
23:44 — Are the Outcome Sizes of Vertical SaaS Large Enough for VC Today?
28:16 — The Zombie Cohort: What Happens to Private Companies with High Valuations?
32:44 — Is "King Making" Complete Bullshit?
34:21 — Durability Over Margins: What Really Matters in a 100x Growth World
35:36 — The Non-Consumption Miracle: Why Granola and Gamma are Crushing It
38:50 — The PayPal Rule: Can You Raise Prices 5 Times in 3 Years?
42:47 — My Biggest Miss: How I Misread the Shopify Billion-Dollar Mark
45:18 — The Courage to Bet: Why Instacart is the Best VC Deal Ever
46:33 — Seed vs. Growth Pricing: When Does Price Actually Destroy Returns?
50:53 — Does "Proprietary Founder Access" Even Exist?
54:33 — Double Down or Diversify? The Truth About Fund Reserves
59:44 — The Vanta Anti-Portfolio: A Mistake I'll Never Forget
01:01:21 — When to Sell: The "Sell a Third, Hold a Third, Trade a Third" Rule
01:04:12 — Why Remote Early-Stage Companies are Dying
01:07:33 — Why Mid-Level Partners are Fleeing Mega Funds
01:09:47 — The Best CEO Superpowers: Larry, Mark, Jack, and Tony
01:12:33 — The Next 10 Years: Why Dropouts are "AI Maxing" the World

Career Strategy Podcast with Sarah Doody
166 - UX Hiring Insights: Dan Maccarone on Thinking Over Tools & UX Career Reinvention

Mar 16, 2026 · 65:41


UX hiring insights from a veteran with 25+ years in UX and product. In this episode, Sarah Doody interviews Dan Maccarone, co-founder of Hard Candy Shell and Charming Robot, fractional Chief Product Officer, and a UX expert who's worked on products for Hulu, Rent the Runway, Foursquare, and the Wall Street Journal. Dan shares what he actually looks for when hiring UX people (spoiler: it's not your Figma skills).

Dan explains why he doesn't care about tools, why he conducts interviews over drinks instead of in conference rooms, and how he evaluates candidates based on curiosity, empathy, and how they think, not what software they know. He also gets into career reinvention, the rise of fractional leadership roles, and why your hobbies outside of UX might matter more than your case studies.

If you're a UX or Product professional navigating your next career move, this conversation will challenge what you think hiring managers care about.

What's discussed in this episode:
  • Why Dan has hired people who didn't know Figma (and doesn't care)
  • What curiosity and a humanities background signal to a hiring manager
  • Why Dan prefers to conduct interviews with candidates over coffee or drinks, not in conference rooms
  • How he uses observation and empathy cues to evaluate candidates (the same way you'd do user research)
  • Why he hates design assignments and considers them insulting
  • What "career reinvention" looks like after 25 years in UX, and how to know when it's time
  • The real requirements for going fractional (and why it's not for everyone)
  • Why your identity and hobbies outside of work actually make you better at your job
  • How he's reinvented his own UX career multiple times

Supra Insider
#102: How to stand out in a crowded space | Elan Miller (Founder @ Off-Menu)

Mar 16, 2026 · 92:46


What happens when everyone can build, but no one breaks through the noise?

In this episode of Supra Insider, Ben Erez sits down with Elan Miller, founder and CEO of branding and design studio Off-Menu, for the podcast's first live in-person recording. Elan unpacks why this moment is uniquely challenging for brand storytelling—AI has made it easier than ever to build and ship products, but harder than ever to get people to care. He explains how the standard tech playbook (great product + clever go-to-market) no longer works when 10 competitors can copy you within a month, and why honorable points of view are the only sustainable moat.

They explore Anthropic's Keep Thinking campaign and Super Bowl ads as a masterclass in positioning against OpenAI, discuss why successful positioning must repel people as much as it resonates, and unpack the Granola rebrand (including Ben's honest reaction as a customer). Elan shares why most rebrands fail (visual makeover without moving anything forward), the different reasons companies should rebrand (talent attraction, internal alignment, crossing the chasm), and his process for finding the "holy s**t insight" that makes people feel seen. Plus, how he's building AI tools that turn brand strategy into practical inputs for higher-quality outputs, and why strong point of view is the antidote to slop.

If you're building in a crowded space and struggling to stand out, wondering whether a rebrand is the right move, or trying to articulate what makes you different in a way that actually resonates—this episode is for you.

All episodes of the podcast are also available on Spotify, Apple and YouTube. New to the pod? Subscribe below to get the next episode in your inbox.

Millennial Investing - The Investor’s Podcast Network
TIVP063: Figma Inc. (FIG): Recovering From An 80% Post-IPO Decline w/ Shawn O'Malley & Daniel Mahncke

Mar 15, 2026 · 72:10


Shawn O'Malley and Daniel Mahncke break down the emerging design giant Figma Inc. (ticker: FIG) and discuss whether the company can expand further into other enterprise design software verticals against Adobe. In this episode, you'll learn how Figma burst onto the scene after three long years of toiling in the background, why Figma's stock has crashed 80% since its IPO, and whether the stock is attractively priced at current levels.

IN THIS EPISODE, YOU'LL LEARN:
00:00:00 - Intro
00:09:01 - Why the design process used to be so messy and disjointed before Figma came along
00:11:08 - How Figma was born out of a partnership at Brown University
00:28:53 - How Figma is turning from a single-hit product into a more diversified platform
00:36:51 - What Figma is doing to redefine the future of AI in collaborative design
00:52:06 - What to make of Figma's young CEO, Dylan Field
00:54:32 - Why Figma crashed after its IPO
00:56:14 - How IPO-related stock-based-comp accounting distorted Figma's 2024 & 2025 financials
01:03:54 - Whether Shawn and Daniel add FIG to their Intrinsic Value Portfolio

*Disclaimer: Slight timestamp discrepancies may occur due to podcast platform differences.

BOOKS AND RESOURCES: The Investors Podcast Network is excited to debut a new community, The Intrinsic Value Community, for investors to learn, share ideas, network, and join calls with experts: sign up for the waitlist. Sign up for The Intrinsic Value Newsletter. Learn how to join us in Omaha for the 2026 Berkshire Hathaway shareholder meeting. Track The Intrinsic Value Portfolio. Shawn and Daniel use Fiscal.ai for every company they research; use their referral link to get started with a 15% discount. Shawn's meditation app made via Figma. Figma's CEO on the future of design. Figma's CEO on the In Good Company podcast. Figma's investor relations page. Why Figma wins (blog article).

Mon Carnet, l'actu numérique
{INTERVIEW} - Alexia Danton: Figma and AI are redesigning the role of designers

Mar 13, 2026 · 16:43


Jean-François Poulin looks at what's new in Figma and at the arrival of artificial intelligence in design tools. With Alexia Danton, spokesperson for the company, he explores how these features now make it possible to generate interfaces or prototypes much faster, sometimes even by users who are not designers.

Supra Insider
#101: Why everyone should have an AI-powered cloud computer | Ben Guo (Cofounder @ Zo)

Mar 12, 2026 · 61:09


What if your computer didn't need a screen in front of you to get work done? That's the shift Ben Guo, co-founder of Zo, is building toward, and this conversation gets into the specifics of what that actually looks like day to day.

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Ben Guo to explore Zo: a personal cloud computer with built-in AI agents, file storage, scheduled tasks, and the ability to receive commands over text or email. Together, they unpack how Zo differs from the OpenClaw movement and why Ben thinks the personal cloud becomes a device category everyone eventually owns.

The conversation goes deep on how the Zo team actually builds software: writing AI-generated markdown plans before touching any code, reviewing those plans as GitHub PRs, and largely abandoning the traditional to-do backlog in favor of just prompting something and letting it run. They also get into the real overhead that comes with this new way of working, including context management, delegation judgment, and figuring out what belongs where.

All episodes of the podcast are also available on Spotify, Apple and YouTube. New to the pod? Subscribe below to get the next episode in your inbox.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
NVIDIA's AI Engineers: Agent Inference at Planetary Scale and "Speed of Light" — Nader Khalil (Brev), Kyle Kranen (Dynamo)

Mar 10, 2026 · 83:37


Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week! Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair!

The definitive AI accelerator chip company has more than 10xed this AI summer and is now a $4.4 trillion megacorp that is somehow still moving like a startup. We are blessed to have a unique relationship with our first-ever NVIDIA guests: Kyle Kranen, who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a datacenter-scale inference framework supporting SGLang, TRT-LLM, and vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA.

Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top-of-the-line GPU up and running, and Kyle explains NVIDIA Dynamo as a datacenter-scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs.
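The prefill/decode disaggregation mentioned above separates two phases with different bottlenecks; a toy sketch of the idea (illustrative names only, not Dynamo's actual API):

```typescript
// Illustrative sketch of why prefill/decode disaggregation helps.
// Prefill cost scales with prompt length and is compute-bound;
// decode cost scales with generated length and is memory-bandwidth-
// bound, so serving the phases from separately scaled worker pools
// lets each side match its own demand.
interface Workload {
  promptTokens: number;
  outputTokens: number;
}

function poolDemand(requests: Workload[]): { prefill: number; decode: number } {
  return requests.reduce(
    (acc, r) => ({
      prefill: acc.prefill + r.promptTokens, // work for the prefill pool
      decode: acc.decode + r.outputTokens,   // work for the decode pool
    }),
    { prefill: 0, decode: 0 }
  );
}
```

With long-prompt, short-answer traffic (say 4,000 prompt tokens but 100 output tokens per request), prefill demand dwarfs decode demand, so a disaggregated deployment can scale the prefill pool independently instead of over-provisioning one monolithic pool.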
We also dive into Jensen's "SOL" (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC.

Full video pod on YouTube.

Timestamps:
00:00 Agent Security Basics
00:39 Podcast Welcome and Guests
07:19 Acquisition and DevEx Shift
13:48 SOL Culture and Dynamo Setup
27:38 Why Scale Out Wins
29:02 Scale Up Limits Explained
30:24 From Laptop to Multi Node
33:07 Cost Quality Latency Tradeoffs
38:42 Disaggregation Prefill vs Decode
41:05 Kubernetes Scaling with Grove
43:20 Context Length and Co Design
57:34 Security Meets Agents
58:01 Agent Permissions Model
59:10 Build Nvidia Inference Gateway
01:01:52 Hackathons and Autonomy Dreams
01:10:26 Local GPUs and Scaling Inference
01:15:31 Long Running Agents and SF Reflections

Transcript

Agent Security Basics

Nader: Agents can do three things. They can access your files, they can access the internet, and now they can write custom code and execute it. You literally only let an agent do two of those three things. If it can access your files and write custom code, you don't want it to have internet access, because that's a full vulnerability, right? If it has access to the internet and your file system, you should know the full scope of what that agent is capable of doing. Otherwise it can get injected, or something can happen. And so that's a lot of what we've been thinking about: how do we enable this, because it's clearly the future, but also, what are the enforcement points we can put in place to protect it?

swyx: All right.

Podcast Welcome and Guests

swyx: Welcome to the Latent Space podcast in the Chroma studio. Welcome to all the guests here. We are back with our guest host Vibhu. Welcome, good to have you back. And our friends Nader and Kyle from NVIDIA. Welcome.

Kyle: Yeah, thanks for having us.

swyx: Yeah, thank you.
swyx: Actually, I don't even know your titles. I know you're, like, architect something of Dynamo.

Kyle: Yeah, I'm one of the engineering leaders [00:01:00] and architects of Dynamo.

swyx: And you're director of something and developers, developer tech.

Nader: Yeah.

swyx: You're the developers, developers, developers guy at NVIDIA.

Nader: Open source agent marketing, Brev...

swyx: And like...

Nader: DevRel tools and stuff. That's been the focus.

swyx: And we're kind of recording this ahead of NVIDIA GTC, which is coming to town again, or taking over town, which we'll all be at. And we'll talk a little bit about your sessions and stuff.

Nader: We're super excited for it.

GTC Booth Stunt Stories

swyx: One of my favorite memories of Nader: you always do marketing stunts, and while you were at Brev, you had this surfboard that you went down to GTC with, and NVIDIA apparently liked it so much that they bought you. What was that like?

Nader: Yeah, yeah. Our logo was a shaka. We were always just kind of trying to keep true to who we were. I think with some startups, you're trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute who was just like, you guys are...

swyx: A previous guest. Yeah.

Nader: Amazing. Oh, really? Amazing. Yeah. He was just like, guys, you're two dudes in a room. Why are you [00:02:00] pretending that you're not? And so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth at GTC and the energy was great. Some palm trees too.

Kyle: They actually poked out over the walls, so you could see the Brev booth, and no one else's, from very far away.

Nader: Oh, that's so funny. So you remember it back then?

Kyle: Yeah, I remember it pre-acquisition.
Kyle: I was like, oh, those guys look cool.

Nader: Dude, that makes sense. 'Cause we signed up really last minute, and so we had the last booth, all the way in the corner. And I was worried that no one was gonna come. So that's why we had the palm trees, and we really came in with the surfboards. We even had one of our investors bring her dog, and she was just walking the dog around to try to bring energy toward our booth.

swyx: Steph.

Kyle: Yeah, she's the best.

swyx: You know, as a conference organizer, I love that. Everyone who sponsors a conference comes and does their booth like "we are changing the future of AI" or some generic b******t, and no, actually try to stand out, make it fun, right? And people still remember it after three years.

Nader: Yeah. You know what's so funny? I'll give you this clip if you want to add it [00:03:00] in, but my wife, at the time my fiancée, was in medical school and she came to help us, 'cause it was a big moment for us. And so we bought this Cricut, it's like a vinyl printer, 'cause how else were we gonna label the surfboard? So we got a surfboard, luckily I was able to purchase that on the company card. We got the Cricut, and it was "fine tuning for enterprises" or something like that that we put on the surfboard. And it's 1:00 AM the day before we go to GTC, she's helping me put these vinyl stickers on, and she goes, "if you pull this off, you son of a b***h." And so, pretty much after the acquisition, I stitched that clip together and sent it to our family group chat.

swyx: Oh yeah. No, well, she made a good choice there. Was that basically the origin story for Launchable? And maybe we should explain what Brev is.

Nader: Yeah, yeah.
Nader: I mean, Brev is just a developer tool that makes it really easy to get a GPU. We connect a bunch of different GPU sources. The basics of it is, how quickly can we SSH you into a GPU? And whenever we would talk to users, they wanted a GPU. They wanted an A100. And if you go to any cloud [00:04:00] provisioning page, usually it's three pages of forms, or somewhere in the forms there's a dropdown, and in the dropdown there's some weird code that you know to translate to an A100. And I remember just thinking, every time someone says they want an A100, the piece of text that tells me what they want is stuffed away in the corner. So we were like, what if the biggest piece of text was what the user's asking for? And so when you go to Brev, it's just big GPU chips with the type that you want.

swyx: With beautiful animations that you worked on. Now you can just prompt it, but back in the day, those were handcrafted, artisanal code.

Nader: Yeah. I was actually really proud of that, because I made it in Figma, and then I was really struggling to figure out how to turn it from Figma to React. So what it actually is, is just an SVG, and I have all the styles, and when you change the chip, whether it's active or not, it changes the SVG code, and that somehow renders like it's animating. We just had the transition slow, but it's just a JavaScript function to change the underlying SVG. And that was how I ended up figuring out how to move it over from Figma. But yeah, that's artisanal. [00:05:00]

Kyle: Speaking of marketing stunts, though, he actually used those SVGs to make these cards.

Nader: Oh yeah.

Kyle: A GPU gift card, yes, that he handed out everywhere.
Kyle: That was actually my first impression of that one.

Nader: Yeah.

swyx: Yeah, yeah. I think I still have one of them.

Nader: They look great.

Kyle: Yeah.

Nader: I have a ton of them still in our garage, actually, but they don't have labels. We should honestly bring them back. But I found this old printing press here, actually just around the corner on Van Ness. It's a third-generation San Francisco shop. And so I come in, an excited startup founder, and they just have this crazy old machinery, and I'm in awe, 'cause the whole building is so physical. You're seeing these machines, they have pedals to move these saws and whatever. I don't know what this machinery is, but I saw all three generations: there's the grandpa, the father, and the son, and the son was around my age.

swyx: It's like a holy trinity.

Nader: It's funny, because we just took the same SVG and printed it. It's foil printing, so they make a mold that's an inverse of the A100, and then they put the foil on it [00:06:00] and press it into the paper. And I remember once we got them, he was like, hey, don't forget about us. I guess early Apple and Cisco's first business cards were all made there. And so he was like, yeah, we get the startup businesses, but then as they mature, they kind of go somewhere else. And I think we were talking with marketing about using them for something; we should go back and make some cards.
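As an aside, the Figma-export trick Nader described a moment earlier (swapping attributes on an inline SVG and letting a slow CSS transition do the animating) can be sketched roughly like this; all names and values here are hypothetical, not Brev's actual code:

```typescript
// Toy model of the trick: a plain object stands in for the exported
// SVG node. In the browser this would be an SVGElement whose style
// declares something like `transition: all 0.6s`, so these discrete
// attribute swaps render as a smooth animation with no animation
// library at all.
interface ChipSvg {
  fill: string;
  opacity: number;
}

function setChipActive(svg: ChipSvg, active: boolean): ChipSvg {
  // A single state flag drives every visual attribute of the chip.
  svg.fill = active ? "#76b900" : "#555555"; // illustrative colors
  svg.opacity = active ? 1.0 : 0.4;
  return svg;
}
```

The design choice is that no animation code exists at all: the "animation" is just the browser interpolating between two static states that a plain function toggles.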
Like, you know, I think like as a, you know, typical like cloud hard hardware person, you go into an AWS you pick like T five X xl, whatever, and it's just like from a list and you look at the specs like, why animate this GP?And, and I, I do think like it just shows the level of care that goes throughout birth and Yeah. And now, and also the, and,Nader: and Nvidia. I think that's what the, the thing that struck me most when we first came in was like the amount of passion that everyone has. Like, I think, um, you know, you talk to, you talk to Kyle, you talk to, like, every VP that I've met at Nvidia goes so close to the metal.Like, I remember it was almost a year ago, and like my VP asked me, he's like, Hey, [00:07:00] what's cursor? And like, are you using it? And if so, why? Surprised at this, and he downloaded Cursor and he was asking me to help him like, use it. And I thought that was, uh, or like, just show him what he, you know, why we were using it.And so, the amount of care that I think everyone has and the passion, appreciate, passion and appreciation for the moment. Right. This is a very unique time. So it's really cool to see everyone really like, uh, appreciate that.swyx: Yeah.Acquisition and DevEx Shiftswyx: One thing I wanted to do before we move over to sort of like research topics and, uh, the, the stuff that Kyle's working on is just tell the story of the acquisition, right?Like, not many people have been, been through an acquisition with Nvidia. What's it like? Uh, what, yeah, just anything you'd like to say.Nader: It's a crazy experience. I think, uh, you know, we were the thing that was the most exciting for us was. Our goal was just to make it easier for developers.We wanted to find access to GPUs, make it easier to do that. And then all, oh, actually your question about launchable. So launchable was just make one click exper, like one click deploys for any software on top of the GPU. Mm-hmm. 
Nader: And so what we really liked about NVIDIA was that it felt like we just got a lot more resources to do all of that. [00:08:00] NVIDIA's goal is to make things as easy for developers as possible, so there was a really nice synergy there. When it comes to an acquisition, I think the degree to which the souls of the products align is going to speak to the success of the acquisition. And so in many ways it feels like we're home. This is a really great outcome for us. I love brev.nvidia.com; you should use it.

Kyle: It's the front page for GPUs.

Nader: Yeah. If you want GPUs...

Kyle: You go there and get it there.

swyx: And internally it's growing very quickly. I don't remember, you said some stats there.

Nader: Yeah, I wish I had the exact numbers, but internally and externally it's been growing really quickly. We've been working with a bunch of partners, a bunch of different customers and ISVs. If you have a solution that runs on the GPU and you want people to use it quickly, we can bundle it up in a Launchable and make it a one-click run. And if you want just a sandbox or something to run on, right, like OpenClaw: huge moment, super exciting. And we'll get into it more, but internally, people wanna run this, and we know we have to be really careful about the security implications. Do we let this run on the corporate network? Security's guidance was, hey, [00:09:00] run this on Brev. It's in a VM, it's sitting in the cloud, it's off the corporate network, it's isolated.
Nader: And so that's been our stance, internally and externally, about how to even run something like OpenClaw while we figure out how to run these things securely.

swyx: I think you were almost the right team at the right time, when NVIDIA is starting to invest a lot more in developer experience, or whatever you call it. UX, or, I don't know, software. Obviously NVIDIA has always invested in software, but this is a different audience.

Nader: It's a wider...

Kyle: Developer base.

swyx: Yeah, right. So what is it called internally? What is this that people should be aware is going on there?

Nader: What, like developer experience?

swyx: Yeah, is it just called developer experience, or is there a broader strategy here at NVIDIA?

Nader: NVIDIA always wants to make a good developer experience. The thing is, a lot of the technology is just really complicated. The reason [00:10:00] AI is having a huge moment isn't that, say, the data scientists of 2018 were quiet then and are much louder now. The pie is growing, right? There's a whole bunch of new audiences. My mom's wondering what she's doing. My sister taught herself how to code. I actually think AI is generally a big equalizer, and you're seeing a more technologically literate society. Everyone's learning how to code; there isn't really an excuse not to. And building a good UX means that you really understand who your end user is. And when your end users become such a wide variety of people, you almost have to reinvent the practice, right?
Kyle: You have to, and actually build more developer UX, right? Because there are whole tiers of developer base that were added. The hackers building on top of OpenClaw, for example, have never used a GPU. They don't know what CUDA is. They just want to run something.

Nader: Yeah.

Kyle: You need new UX that is not just, hey, how do you program something in CUDA and run it? When deep learning was getting big, we built tooling for Torch, but recently the number of [00:11:00] layers added to that developer stack has just exploded, because AI has become ubiquitous. Everyone's using it in different ways.

Nader: It's moving fast in every direction. Vertical, horizontal.

Vibhu: Yeah. You even take it down to hardware, like the DGX Spark. It's basically the same system as throwing it up on a big GPU cluster.

Nader: Yeah, it's amazing. Blackwell.

swyx: We saw the preview at last year's GTC, and that was one of the better-performing videos and NVIDIA coverage so far. This will beat it.

Nader: Fingers crossed. Yeah.

DGX Spark and Remote Access

Nader: Even when Grace Blackwell, or when DGX Spark, was first coming out, I got to be involved in that from the beginning of the developer experience. And it just comes back to what...

swyx: You were involved.

Nader: Yeah. I mean, I got an email, we just got thrown into the loop, and suddenly... it was actually really funny, 'cause I'm still pretty fresh from the acquisition and I'm getting an email from a bunch of the engineering VPs about the new hardware, the GPU chip, or not chip, but the GPU system that we're putting out. And I'm like, okay, cool, Nader's now involved with this for the UX.
What am I gonna do [00:12:00] here? I remember the first meeting, I was just kind of quiet as I was hearing engineering VPs talk about what this box could be, what it could do, how we should use it. And one of the first ideas, I think a quote was, "the first thing someone's gonna wanna do with this is get two of them and run a Kubernetes cluster on top of them." And I was like, oh, I think I know why I'm here. I was like, the first thing we're doing is easy SSH into the machine. And then just scoping it down: once you can do that, the person who wants to run a Kubernetes cluster on two Sparks has a higher propensity for pain than someone who buys it and wants to run OpenClaw right now, right? If you can make sure that that's as effortless as possible, then the rest becomes easy. So there's a tool called NVIDIA Sync; it just makes the SSH connection really simple. If you have a Mac or a PC, whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's a GPU in the cloud, right? But there's all this friction around how you actually get into it. That's part of [00:13:00] Brev's value proposition: there's a CLI that wraps SSH and makes it simple. So our goal is just to get you into that machine really easily. And one thing we just launched at CES, it's still in early access, we're ironing out some kinks, but it should be ready by GTC: you can register your Spark on Brev.

swyx: Like remote-managed local hardware. A single pane of glass. Because Brev can already manage other clouds anyway, right?

Vibhu: Yeah, yeah. And you use the Spark on Brev as well, right?
So you set it up at home, you run the command on it, and it essentially appears in your Brev account. Then you can take your laptop to a Starbucks or a cafe, and you can continue to use your Spark just like any other cloud node on Brev.

swyx: And it's just like a pre-provisioned data center in your home.

Nader: Yeah, exactly.

Vibhu: Tiny little data center.

Nader: Tiny little, the size of your phone.

SOL Culture and Dynamo Setup

swyx: One more thing before we move on to Kyle. You just have so many Jensen stories, and I love mining Jensen stories. My favorite so far is SOL. What is SOL?

Nader: SOL... of all the lessons I've learned, that one's definitely my favorite.

Kyle: It'll always stick with you.

Nader: Yeah. In your startup, everything's existential, right? We've run out of money. We were at risk of missing payroll. We've had to contract our team because we ran out of money. Because of that, you're really always forcing yourself to understand the root cause of everything. If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're pushing every boundary, and you're not just accepting a "no" just because. As you start to introduce more layers, as you start to become a much larger organization, SOL is essentially: what is the physics? The speed of light moves at a certain speed. So if something's moving slower than that, you know something's in the way. Before trying to layer reality back in, before asking why this can't be delivered by some date, let's just understand the physics. What is the theoretical limit on how fast this can go? And then start to tell me why.
‘Cause otherwise people will start telling you why something can't be done. But I actually think any great leader's goal is to create urgency. [00:15:00]

Kyle: Create compelling events, right?

Nader: Yeah.

Kyle: SOL is a term NVIDIA uses to instigate a compelling event. You say: this is done. How do we get there? What is the minimum, as much as necessary, as little as possible, thing that it takes for us to get exactly here? It helps you break through a bunch of noise.

swyx: Yeah.

Kyle: Instantly.

swyx: One thing I'm unclear about is: can only Jensen play the SOL card? Obviously it's Jensen with the "get the b******t out," but can someone else...

Kyle: No, no, no. Frontline engineers use it.

Nader: Yeah. I think it's not so much about "get the b******t out." It's: give me the root understanding. If you tell me something takes three weeks, well, what are the first principles? Why is it three weeks? What's the actual limit on why this is going to take three weeks? Let's say you wanted to buy a new computer, and someone told you it's going to be here in five days. What's the SOL? Well, the SOL is that I could walk into a Best Buy and pick it up for you, right? So anything beyond that... and is that practical? Is that how we're going to, say, give everyone in the [00:16:00] company a laptop? Obviously not. So that's the SOL, and then it's like, okay, if we have to get more than ten, suddenly there might be some lead time. And so now we can piece reality back in.

swyx: So this is the Paul Graham "do things that don't scale." And this is also what people would now call high agency.
Yeah.

Kyle: It's actually really interesting, because there's a second hardware angle to SOL that doesn't come up for the rest of the org. SOL is used culturally at NVIDIA for everything.

swyx: I'm also mining for... I think that can be annoying sometimes. Someone keeps SOL-ing you and you're like, guys, we have to be stable. We have to f*****g plan.

Kyle: It's an interesting balance.

Nader: Yeah. I encounter that, actually, with Alec, right? Because we have a new conference coming, so we have goals for what we want to launch by the conference, and yeah, at the end of the day, where is...

swyx: This GTC?

Nader: Well, we did it for CES, we did it for GTC DC before that, and we're doing it for GTC San Jose. Every time we have a new moment, we want to launch something. And we want to do so at SOL, and that does mean there's some level of prioritization that needs [00:17:00] to happen. And so it is difficult. You have to be careful with what you're pushing. Stability is important, and that should be factored into SOL. SOL isn't just "build everything and let it break." That's part of the conversation. As you're layering in all the details, one of them might be: hey, we could build this, but then it's not going to be stable for X, Y, Z reasons. One of our conversations for CES was: hey, we can get registering your Spark with Brev into early access, but there are a lot of things that we need to do in order to feel really comfortable from a security perspective. There's a lot of networking involved before we deliver that to users. So it's like, okay, let's get this to a point where we can at least let people experiment with it.
We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. That's not easy, and it can come later. That was the way we layered that back in.

Kyle: It's not really about saying you don't have to do the maintenance or operational work. It's more that it highlights how progress is incremental, right? What is the minimum thing that we can get to? And then there's an SOL for every component after that. But there's the SOL to get you [00:18:00] to the starting line. That's usually how it's asked. On the other side, SOL came out of hardware at NVIDIA. SOL is literally: if we ran the accelerator, the GPU, at basically full speed, with no other constraints, how fast would we be able to make a program go?

swyx: Right. So in training you then work back to, like, some percentage of MFU, for example.

Kyle: Yeah, that's a great example. There's an SOL MFU, and then there's what's practically achievable.

swyx: Cool. Should we move on to Kyle's side? Kyle, you're coming more from the data science world. Whenever I meet someone who's worked on tabular stuff, graph neural networks, time series... when I go to NeurIPS, when I go to ICML, I walk the back halls. There's always a small group of graph people. A small group of tabular people. [00:19:00] And there's almost no one there. It's important, interesting work if you care about solving the problems that they solve.

Kyle: Yeah.

swyx: But everyone else is just LLMs all the time.

Kyle: Yeah.
I mean, it's like the black hole, right? Has the event horizon reached this yet at NeurIPS?

swyx: But those are transformers too, and those are also interesting things. Anyway, I just wanted to spend a little bit of time on that background before we go into Dynamo proper.

Kyle: Yeah, sure. I took a different path to NVIDIA. I joined six years ago, seven if you count when I was an intern. I joined NVIDIA right out of college, and the first thing I jumped into was not what I'd done during the internship, which was stuff for autonomous vehicles, heavyweight object detection. I jumped into recommenders, which were popular.

swyx: Yeah, he did RecSys as well.

Kyle: Yeah, RecSys. That was the tabular data of the time, right? You have tables of [00:20:00] audience qualities and item qualities, and you're trying to figure out which member of the audience matches which item, or more practically, which item matches which member of the audience. At the time, we were trying to turn recommenders, which had historically been a CPU-based workflow, into something that ran really well on GPUs. And it's since been done: there are a bunch of RecSys libraries that run on GPUs. The common models, like the Deep Learning Recommendation Model (DLRM), which came out of Meta, and the Wide & Deep model, which was released by Google, were heavily accelerated by GPUs, especially using the fast HBM on the chips to do vector lookups. It was very interesting at the time and super relevant, because we were starting to get this explosion of feeds and things that required recommenders to be actively on all the time.
And I transitioned a little bit towards graph neural networks when I discovered them, because I realized you can use graph neural networks to represent relationships between people, items, and concepts, and that interested me. So I jumped into that at [00:21:00] NVIDIA and got really involved for about two years.

swyx: Yeah. Something I learned from Bryan Catanzaro is that you can just kind of choose your own path at NVIDIA.

Kyle: Oh my God. Yeah.

swyx: Which is not a normal big-corp thing. Usually you have a lane and you stay in your lane.

Nader: That's probably the reason why I enjoy being at a big company: the mission is the boss. Coming from a startup guy.

swyx: The mission is the boss.

Nader: Yeah. It feels like a big game of pickup basketball. If you want to play basketball, you just go up to the court and you're like, hey look, we're gonna play this game and we need three. And you just find your three. Honestly, for every new initiative, that's what it feels like.

Vibhu: It also shows, right? NVIDIA is just releasing state-of-the-art stuff in every domain. You expect foundation models with Nemotron; then in voice, Parakeet just randomly comes out, then another one.

Kyle: The NVIDIA voice team has always been producing.

Vibhu: Yeah. In every domain there's always a paper that comes out, a dataset that comes out. It also stems from what NVIDIA has to do, right? You have to design chips years before they're actually produced. So you need to know, you need to really [00:22:00] focus.

Kyle: The design process starts like...

Vibhu: Exactly.

Kyle: ...three to five years before the chip gets to the market.

Vibhu: Yeah. I'm curious what that's like. You have specialist teams.
Is it just: people find an interest, you go in, you go deep on whatever, and that feeds back into the predictions? The internals at NVIDIA must be crazy, right? Even without selling to people, you have your own predictions of where things are going, and they're very grounded, right?

Kyle: Yeah, it's really interesting. There are two things NVIDIA does which are quite interesting. One is that we really index on passion. There's a big organizational, top-down push to ensure that people are working on the things that they're passionate about. So if someone proposes something interesting, many times they can just email someone way up the chain who would find it relevant and say: hey, can I go work on this?

Nader: I worked at a big company for a couple of years before starting my startup journey, and it felt very weird if you were to email out of your chain, if that makes [00:23:00] sense. The emails at NVIDIA are like mosh pits.

swyx: Shoot.

Nader: It's just like 60 people, just whatever.

swyx: They get messy, reply-all...

Nader: Oh, it's insane.

Kyle: They just help, you know, max the context.

Nader: But that's actually... this is a weird thing, where I used to be like, why would we send emails? We have Slack. Now I'm the exact opposite. I feel so bad for anyone who's messaging me on Slack, because I'm so unresponsive.

swyx: You're email-maxing.

Nader: Email-maxing. I'm email-maxing now. Email is perfect, because important threads get bumped back up, right? And Slack doesn't do that.
I just have this casino going off on the right or on the left, and I don't know which thread came from where. With email the threads get bumped, and there's the subject line, so you can have working threads. What's difficult is when you're small: if you're not 40,000 people, I think Slack will work fine. But I don't know what the inflection point is. There is going to be a point where that becomes really messy and you'll actually prefer having email, because you can have working threads. You can CC more than nine people in a thread.

Kyle: You can fork stuff.

Nader: You can [00:24:00] fork stuff, which is super nice. And that's part of how you can propose a plan. You can also just start. Honestly, momentum's the only authority, right? If you can just start, make a little bit of progress, and show someone something, then they can try it. I think that's the most effective way to push anything forward, both at NVIDIA and just generally.

Kyle: Yeah. There's another concept that's explored a lot at NVIDIA, which is this idea of a zero-billion-dollar business. Market creation is a big thing at NVIDIA.

swyx: Oh, you want to go and start a zero-billion-dollar business?

Kyle: Jensen says: we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue. It's important for us to know about this market. We think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but I'll give an example. NVIDIA's been working on autonomous driving for a long time.

swyx: Like an NVIDIA car.

Kyle: No, they've...

Vibhu: They used the Mercedes, right? They're around the HQ, and I think it finally just got licensed out.
Now they're starting to be used quite a [00:25:00] bit. For ten years you've been seeing Mercedes with NVIDIA logos driving around.

Kyle: If you're in South Santa Clara, it's actually from the south side. So, zero-billion-dollar markets are a thing.

swyx: I mean, okay, look, cars are not a zero-billion-dollar market. That's a bad example.

Nader: I think he's messaging zero today. Even internally, an org doesn't have to ruthlessly find revenue very quickly to justify its existence, right? A lot of the important research, a lot of the important technology being developed...

Kyle: That's kind of where research... research is very ideologically free at NVIDIA. They can pursue things that...

swyx: Were you in research officially?

Kyle: I was never in research officially. I was always in engineering. I'm in an org called Deep Learning Algorithms, which is basically: how do we make things that are relevant to deep learning go fast?

swyx: That sounds freaking cool.

Vibhu: And I think a lot of that is underappreciated, right? Like time series: this week Google put out a TimesFM paper, a new time series paper. Semantic IDs [00:26:00] started applying transformer LLMs to rec systems. And when you think of the scale of companies deploying these, Amazon recommendations, Google web search, it's huge scale.

Kyle: Yeah.

Vibhu: And you want fast.

Kyle: Yeah. Actually, there's a fun moment that brought me full circle. Amazon Ads recently gave a talk where they talked about using Dynamo for generative recommendation, which was weirdly cathartic for me. I'm like, oh my God, I've supplanted what I was working on. You're using LLMs now to do what I was doing five years ago.

swyx: Amazing. Let's go right into Dynamo.
Maybe introduce it top-down.

Kyle: Yeah, sure. I think at this point a lot of people are familiar with the term "inference." Funnily enough, inference went from being a really niche topic to something discussed on normal people's Twitter feeds.

Nader: It's on billboards here now.

Kyle: Yeah, very strange. Driving and seeing an inference ad on the 101. Inference at scale is becoming a lot more important. We have these moments, like OpenClaw, where you have agents that take lots and lots of tokens but produce [00:27:00] incredible results. There are many different aspects of test-time scaling, where you can use more inference to generate a better result than if you were to use a short amount of inference. There's reasoning, there's querying, there's adding agency to the model, allowing it to call tools and use skills. Dynamo sort of came about at NVIDIA because myself and a couple of others were talking about these concepts: you have inference engines like vLLM, SGLang, TensorRT-LLM, and they think about things as one single copy, one replica, right?

Why Scale Out Wins

Kyle: One version of the model. But when you're actually serving things at scale, you can't just scale up that replica, because you end up with performance problems. There's a scaling limit to scaling up replicas. So you actually have to scale out, to use some Kubernetes-type terminology. We realized that there was a lot of potential optimization we could do in scaling out and building systems for data [00:28:00] center scale inference.
So Dynamo is this data-center-scale inference engine that sits on top of the frameworks, vLLM, SGLang, and TensorRT-LLM, and makes things go faster, because you can leverage the economy of scale. You have KV cache, which we can define a little later, in all these machines, and it's unique per machine, so you want to figure out ways to maximize your cache hits. Or you want to employ new techniques in inference like disaggregation, which Dynamo brought to the world in March. Not introduced, it existed in academic work beforehand, but we were one of the first frameworks to support it. And we want to combine all these techniques into a modular framework that allows you to accelerate your inference at scale.

Nader: By the way, Kyle and I became friends on my first day at NVIDIA, and I always loved it, because he always teaches me new things.

swyx: By the way, this is why I wanted to put the two of you together. I was like, this is going to be [00:29:00] good.

Kyle: It's very different... we've talked to each other a bunch. Actually, you asked: why can't we scale up?

Nader: Yeah.

Scale Up Limits Explained

Nader: You said model replicas.

Kyle: Yeah. Scale up means assigning more...

swyx: Heavier?

Kyle: Yeah, making things heavier. Adding more GPUs, adding more CPUs. Scale out is having a barrier and saying: I'm going to duplicate my representation of the model, or of this microservice, and replicate it many times to handle load. And the reason you can't scale up past some point is that there are hardware bounds and algorithmic bounds on that type of scaling. I'll give you a good example that's very trivial. Let's say you're on an H100.
The maximum NVLink domain for H100, for most DGX H100s, is eight GPUs. So if you scaled up past that, you'd have to deal with the fact that for the GPUs to communicate, they now have to do it over InfiniBand, which is still very fast, but not as fast as NVLink.

swyx: Is it like one order of magnitude? Hundreds, or...

Kyle: It's about an order of magnitude, yeah.

swyx: So not terrible.

Kyle: [00:30:00] I'd need to check the datasheet here, but I think it's about 500 gigabytes a second unidirectional for NVLink, and about 50 gigabytes a second unidirectional for InfiniBand. It depends on the generation.

swyx: I just want to set this up for people who are not familiar with these layers and the relative speeds.

Vibhu: Of course.

From Laptop to Multi Node

Vibhu: Maybe even going a few steps back: most people are familiar with what you can run on your laptop. Whatever these local LLM tools are, you can just run inference there.

Kyle: You can run it on that laptop.

Vibhu: Then models got pretty big, right? GLM five, they doubled the size. So what do you do when you go from "I can get 128 gigs of memory, I can run it on a Spark" to multi-GPU? Okay, multi-GPU, there's some support there. Now, if I'm a company and I'm not hiring the best researchers for this, but I need to go [00:31:00] multi-node, I have a lot of servers. Now there are efficiency problems, right? You can have multiple 8xH100 nodes, but how do you do that efficiently?

Kyle: Yeah. How do you represent them? How do you choose how to represent the model? Yeah, exactly right. That's a hard question.
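Kyle's order-of-magnitude point above is easy to sanity-check with arithmetic. The bandwidths below are the rough unidirectional figures he quotes (about 500 GB/s for NVLink, about 50 GB/s for InfiniBand), not datasheet values, and the 10 GB payload is just an illustrative tensor size:

```python
# Back-of-envelope: how long does it take to move the same payload over
# NVLink (intra-node) vs InfiniBand (inter-node)? Bandwidths are the rough
# unidirectional figures quoted in the conversation, not datasheet values.

def transfer_seconds(payload_bytes: float, bandwidth_gb_s: float) -> float:
    """Time to move a payload at a given unidirectional bandwidth."""
    return payload_bytes / (bandwidth_gb_s * 1e9)

payload = 10e9  # 10 GB of activations / KV cache (illustrative)

t_nvlink = transfer_seconds(payload, 500.0)  # within the NVLink domain
t_ib = transfer_seconds(payload, 50.0)       # across nodes over InfiniBand

print(f"NVLink:     {t_nvlink * 1e3:.0f} ms")   # 20 ms
print(f"InfiniBand: {t_ib * 1e3:.0f} ms")       # 200 ms: the ~10x gap
```

That factor of ten is exactly why scaling up stops at the NVLink domain boundary and you switch to scaling out instead.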
Everyone asks how to size it. "Oh, I want to run GLM five," which just came out. There have been like four new models in the past week, by the way.

swyx: You know why, right? DeepSeek.

Kyle: No comment. But GLM five, right? We have this new model, it's large, and you have to figure out how to both scale up and scale out, because you have to find the right representation that you care about. Everyone does this differently. Let's be very clear: everyone figures this out on their own path.

Nader: I feel like a lot of AI, or ML even, is like this. There was some tweet a few months ago that was like, why hasn't fine-tuning-as-a-service taken off? It might have been me. It might have been you. People want it to be such an easy recipe to follow. But even if you look at an ML model...

Kyle: It's specific to you, and the [00:32:00] model...

Nader: ...and the situation. And there's just so much tinkering, right? When you see a model that has however many experts in the MoE, it's like, why that many experts? They tried a bunch of things and that one seemed to do better. When it comes to how you're serving inference, you have a bunch of decisions to make, and you can always argue that you can take something and make it more optimal. But it's this internal calibration, and appetite for continued calibration.

Vibhu: Yeah. And that doesn't mean people aren't taking a shot at this. Like Tinker from Thinking Machines, RL as a service. Totally. It also gets even harder when you try to do big model training, right? We're not the best at training MoEs when they're pre-trained. We saw this with Llama 4, right?
They're trained in such a sparse way because Meta knows there's going to be a bunch of inference done on them. They'll open source it, but it's very much trained for what Meta's infrastructure wants; they want to inference it a lot. Now the question to think about is: say you want to serve a chat application or a coding copilot. You're doing a layer of RL, you're serving a model for X number of people. Is it a chat model, a coding model? Dynamo, back to that...

Kyle: [00:33:00] Yeah, sorry, we sort of jumped off of that topic. Everyone has their own journey.

Cost Quality Latency Tradeoffs

Kyle: I like to think of it as defined by: what is the model you need, what is the accuracy you need? Actually, I talked to Nader about this earlier. There are three axes you care about. There's quality: are you accurate enough, can you complete the task with high enough performance? There's cost: can you serve the model, or really your workflow (it's not just the model anymore, it's the multi-turn workflow with an agent), cheaply enough? And then, can you serve it fast enough? We're seeing all three of these play out. We saw new models from OpenAI that are faster; you have these new fast versions of models. You can change the amount of thinking to change the quality: produce more tokens, but at a higher cost and a higher latency. And really, when you start this journey of figuring out how you want to host a model, you think about three things. What is the model I need to serve? How many times do I need to call it, what is the input sequence length, what does the workflow look like on top of it? And what is the latency SLA that I need to achieve?
Because that's usually a constant: you know the SLA you need to hit, and then you try to find the lowest-cost version that satisfies all of those constraints. Usually you start with those things and do a bit of experimentation across some common configurations. You change the tensor parallel size, which is a form of parallelism...

Vibhu: I'd say it goes even deeper. First you've got to think: what model?

Kyle: Yes, of course. It's a multi-step design process, because as you said, you can choose a smaller model and then do more test-time scaling, and it'll match the quality of a larger model, because you're doing the test-time scaling or you're adding a harness or something. So yes, it goes way deeper than that. But from the performance perspective, once you get to the model you need to host, you look at it and say: hey, I have this model, I need to serve it at this speed. What is the right configuration for that?

Nader: Did you guys see the recent... there was a paper I saw a few days ago [00:35:00] that if you run the same prompt twice, you're getting, like, double...

swyx: Just try it again.

Nader: Yeah, exactly.

Vibhu: And you get a lot. But the key thing there is you give it the context of the failed try, right? So it takes a shot. And this has been basic guidance for quite a while. Just try again. Did you try again?

Nader: All advice in life.

Vibhu: It's a paper from Google, if I'm not mistaken. I think it's a little seven-page short paper. The title's very cute, and it's just: yeah, just try again, give it the failure as context.

Kyle: Multi-shot. You just say: hey, take a little bit more information, try, and fail.
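The retry-with-failure-as-context pattern being described can be sketched in a few lines. `call_model` here is a stand-in stub, not a real API; it simulates a model that only succeeds once it can see its prior failed attempt:

```python
# Minimal sketch of "just try again, with the failed attempt as context."
# Everything here is illustrative: call_model and check are stand-ins for
# an LLM call and a task-specific verifier.

def call_model(prompt: str) -> str:
    # Stub: pretend the model only succeeds once it sees a prior failure.
    return "PASS" if "Previous attempt" in prompt else "FAIL"

def check(answer: str) -> bool:
    return answer == "PASS"

def solve_with_retries(task: str, max_tries: int = 3) -> str:
    prompt = task
    answer = ""
    for _ in range(max_tries):
        answer = call_model(prompt)
        if check(answer):
            return answer
        # Feed the failure back in, rather than retrying cold.
        prompt = f"{task}\n\nPrevious attempt (incorrect):\n{answer}\nTry again."
    return answer

print(solve_with_retries("Fix the failing test"))  # PASS on the second try
```

The point is the prompt update inside the loop: a cold retry resamples the same distribution, while appending the failed attempt gives the model signal about what not to do.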
Fail.

Vibhu: And that basic concept has gone pretty deep. There's self-distillation RL, where you do self-distillation, you do RL, and you have the past failure, which gives some signal. So people take "try it again": not strong enough.

swyx: For listeners who've made it to here: Vibhu and I run a second YouTube channel for our paper club.

Kyle: Oh, that's awesome.

swyx: Vibhu just covered this, self-distillation and all that. That's why he's up to speed [00:36:00] on it.

Nader: I'll have to check it out.

swyx: It's just good practice. Everyone needs a paper club, where you read papers together and the social pressure forces you to keep up.

Nader: There's a big inference reading group at NVIDIA. I feel so bad every time. He shared it...

swyx: One of your guys is big in that, I forget... Eshan?

Kyle: Yeah, Eshan's on my team, actually. Funny, there's an employee transfer between us: Eshan worked for Nader at Brev, and now he's on my team.

Nader: He was our head of AI. And then, once we got in...

swyx: Because I'm always looking for: can I start another podcast that only does that thing? I was trying to nudge Eshan into, is there something here? I mean, there are new inference techniques every day.

Kyle: You would actually be surprised at the number of blog posts you see.

swyx: There was a period where it was, like, Medusa, Hydra, Eagle...

Kyle: Now we have new forms of speculative decoding, or new...

swyx: What are you excited about?
Vibhu: And it's exciting when you guys put out something like Nemotron. I remember the paper on Nemotron 3, [00:37:00] the amount of post-training tokens that the GPU-rich can just train on. And it was a hybrid state space model, right?

Kyle: It's co-designed for the hardware.

Vibhu: Yeah, co-designed for the hardware. And one of the objections was always that state space models don't scale as well when you do a conversion, the performance drops. And you guys are like: no, just keep training. And Nemotron shows a lot of that.

Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released. The recipes on how to do it are released. The model itself is released, the full model; you just benefit from us turning on the GPUs. And there are companies, like ServiceNow, that took the dataset and trained their own model, and we were super excited and celebrated that work.

Vibhu: Zoom is different. Zoom is CGI, I think. Also, just to add: a lot of models don't put out base models, and that ties back to "why has fine-tuning not taken off?" You can do your own training.

Kyle: Sure.

Vibhu: You guys put out base models. I think you put out everything.

Nader: I believe so. [00:38:00]

swyx: Basically, without base models... base can be cancelable.

Vibhu: Yeah, base can be cancelable. Safety training.

swyx: Did we get a full picture of Dynamo? I don't know if we...

Nader: What I'd love is: you mentioned the three axes. Break it down. What's prefill and decode, and what are the optimizations that we can get with Dynamo?

Kyle: Yeah, that's a great point.
Kyle: So to summarize that three-axis problem: there are three things that determine whether or not something can be done with inference: cost, quality, and latency. Dynamo is there to provide the runtime that lets you pull levers to move around the Pareto frontier, the surface that determines whether something is actually possible with inference and AI today.

Nader: It gives you the knobs.

Kyle: Yeah, exactly. It gives you the knobs.

Disaggregation: Prefill vs Decode

Kyle: One concept we use a lot in contemporary inference, and that is starting to enter general knowledge, is disaggregation. Historically, models would be hosted with a single inference engine, and that engine [00:39:00] would ping-pong between two phases. There's prefill, where you read the sequence and generate KV cache, which is basically a set of vectors that represent the sequence. Then you use that KV cache to generate new tokens, which is called decode. And some brilliant researchers, across multiple different papers, made the realization that if you separate these two phases, you gain some benefits. First, you don't have to worry about step-synchronous scheduling. The way an inference engine works is you do one step, finish it, and then start scheduling the next step; it's not fully asynchronous. The problem is that prefill and decode are actually very different in both their resource requirements and, sometimes, their runtime. So prefill would block decode steps, because you'd still be prefilling and you couldn't schedule, because the step has to end.
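The disaggregation Kyle describes, prefill building a KV cache and decode consuming it from a separate pool, can be sketched in a few lines. Everything here (the toy "model", the queue hand-off, the class and function names) is illustrative and is not the Dynamo API:

```python
# Minimal sketch of disaggregated inference: prefill and decode as
# separate worker pools passing KV-cache handles, instead of one
# engine ping-ponging between the two phases. All names and the toy
# "model" are illustrative assumptions, not the Dynamo API.
from dataclasses import dataclass
from queue import Queue


@dataclass
class KVCache:
    tokens: list[int]     # the sequence this cache represents
    vectors: list[float]  # stand-in for the per-token KV vectors


def prefill_worker(prompt: list[int]) -> KVCache:
    # Prefill: read the whole prompt once and emit the KV cache.
    return KVCache(tokens=list(prompt), vectors=[float(t) for t in prompt])


def decode_worker(cache: KVCache, max_new: int) -> list[int]:
    # Decode: each step reads the whole cache and appends one token.
    out = []
    for _ in range(max_new):
        nxt = sum(cache.tokens) % 100  # toy deterministic "next token"
        cache.tokens.append(nxt)
        cache.vectors.append(float(nxt))
        out.append(nxt)
    return out


# Hand-off point between the pools: prefill nodes push caches,
# decode nodes pull them, so neither phase blocks the other's steps.
handoff: Queue = Queue()
handoff.put(prefill_worker([1, 2, 3]))
generated = decode_worker(handoff.get(), max_new=4)
print(generated)  # [6, 12, 24, 48]
```

In a real deployment the queue would be a network hand-off and the KV cache a set of GPU tensors; the point is only that the two phases get independent schedulers.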
Kyle: So you remove that scheduling issue, and you also allow yourself to split the work into two different types of pools. Prefill, and this changes as model architecture changes, is right now compute-bound most of the time: when the sequence is sufficiently long, it's compute-bound. Decode, because you do a full pass over all the weights and the entire sequence every time you take a decode step, and you don't have the quadratic computation of building the KV cache, is usually memory-bound: you retrieve a linear amount of memory and do a linear amount of compute, as opposed to prefill, where you retrieve a linear amount of memory and then do a quadratic amount of compute.

Nader: It's funny, someone at Exo Labs did a really cool demo where, since the DGX Spark has a lot more compute, you do the compute-hungry prefill on a DGX Spark and then do the decode on a Mac.

Vibhu: And that's faster.

Nader: Yeah.

Kyle: So you can do that; you can do machine stratification. And with our future generations of hardware, we actually announced, [00:41:00] with Rubin, a new accelerator that is prefill-specific. It's called Rubin CPX.

Kubernetes Scaling with Grove

Nader: I have a question. When you do the scale-out, is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either prefill or decode.

Kyle: Yeah. Dynamo actually has a Kubernetes component called Grove that allows you to do this scaling specialization. I don't want to go too deep into Kubernetes here, but there was a previous way you would launch multi-node work, called LeaderWorkerSet; it's in the Kubernetes standard, and LeaderWorkerSet is great.
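The compute-bound/memory-bound asymmetry Kyle describes can be put in back-of-envelope numbers. The cost formulas below are the standard rough approximations for attention, not measurements of any particular engine, and the dimensions are illustrative:

```python
# Back-of-envelope numbers for the prefill/decode asymmetry: prefill
# attention work grows quadratically with sequence length, while a
# decode step does linear work but re-reads the whole KV cache.
# Formulas are rough standard approximations, not measurements.

def prefill_attention_flops(seq_len: int, d: int) -> int:
    # Every token attends to every other token: O(seq_len^2 * d).
    return seq_len * seq_len * d


def decode_step_flops(seq_len: int, d: int) -> int:
    # One new token attends over the existing cache: O(seq_len * d).
    return seq_len * d


def decode_step_bytes(seq_len: int, d: int, bytes_per_el: int = 2) -> int:
    # Each decode step re-reads keys and values for the whole cache
    # (per layer, at fp16 here).
    return 2 * seq_len * d * bytes_per_el


d = 128
for n in (1_000, 32_000):
    # The FLOP ratio of a full prefill to one decode step is just seq_len.
    print(n,
          prefill_attention_flops(n, d) // decode_step_flops(n, d),
          decode_step_bytes(n, d))
```

At 32k context, each decode step still streams the full cache (about 16 MB for a single layer at fp16 in this toy setup), which is why decode tends to saturate memory bandwidth while a long prefill saturates compute.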
Kyle: It served a lot of people super well for a long period of time. But one thing it struggles with is representing cases where you have a multi-node replica that has a pair, prefill and decode, or not quite a pair, but a second stage with a ratio that changes over time. Prefill and decode are two different things, and as your workload changes, the amount of prefill you need to do may change, [00:42:00] and the amount of decode may change. Say you start getting insanely long queries: that probably means your prefill scales harder, because you're hitting that quadratic growth.

swyx: Yeah. For listeners: prefill is long input, decode is long output, for example.

Kyle: Yeah. Decode is funny, because the number of tokens you produce scales with the output length, but the amount of work you do per step scales with the number of tokens in the context.

swyx: Yes.

Kyle: So it scales with both the input and the output.

swyx: That's true.

Kyle: But on the prefill-versus-decode side: if suddenly the amount of work you're doing on the decode side stays about the same, or scales a little, while the prefill side jumps up a lot, you don't want that ratio to stay fixed; you want it to change over time. So Dynamo has a set of components that, first, tell you how to scale, meaning how many prefill workers and decode workers it thinks you should have, and, second, provide a scheduling API for Kubernetes that lets you actually represent and effect that scheduling on your [00:43:00] compute infrastructure.

Nader: Not gonna lie, I feel a little embarrassed for being proud of my SVG function earlier.

swyx: No, it was really cute. I liked it.
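A toy version of the pool-sizing logic Kyle describes: size each pool from its own measured load, so the prefill-to-decode ratio drifts as the workload changes instead of staying a fixed pair. The function name and per-worker capacities are hypothetical, not Grove's API:

```python
# Toy sketch of the prefill/decode pool-sizing problem: each phase
# gets its own independent ceiling, so the ratio between the pools
# changes with the workload. Names, capacities, and numbers are
# hypothetical assumptions, not Grove's actual API.
import math


def plan_workers(prefill_tok_per_s: float, decode_tok_per_s: float,
                 prefill_cap: float, decode_cap: float) -> tuple[int, int]:
    # Independent ceilings per phase instead of a fixed paired replica.
    return (math.ceil(prefill_tok_per_s / prefill_cap),
            math.ceil(decode_tok_per_s / decode_cap))


# Short queries: the two pools stay balanced.
print(plan_workers(50_000, 20_000, prefill_cap=25_000, decode_cap=10_000))   # (2, 2)
# Much longer inputs arrive: prefill demand jumps, decode barely moves.
print(plan_workers(400_000, 24_000, prefill_cap=25_000, decode_cap=10_000))  # (16, 3)
```

The second call is the "insanely long queries" case from the conversation: prefill demand grows roughly quadratically with input length, so the prefill pool must grow much faster than the decode pool.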
swyx: It's all engineering. That's where I'm...

Kyle: ...technical.

swyx: One thing I'm curious about, since you see everything going on here at a systems level, and we're scaling it up in distributed systems...

Context Length and Co-Design

swyx: One thing that's of the moment right now: people are asking whether there's any sort of upper bound. Let's just call it context length, for want of a better word, but you can break it down however you like.

Nader: Yeah.

swyx: Clearly you can engage hybrid architectures and throw in some state space models all you want, but it still looks very attention-heavy.

Kyle: Yes. Long context is attention-heavy. I mean, we have these hybrid models...

swyx: And most models cap out at a million tokens of context, and that's it. For the last two years, that's been it.

Kyle: Yeah. The model-hardware-context co-design thing we're seeing these days is actually super [00:44:00] interesting. It's my secret side passion. We see models like Kimi or GPT-OSS; I use these because I know specific things about them. So Kimi K2 comes out, and it's an interesting model: a DeepSeek-style architecture with MLA, basically DeepSeek scaled a little differently, and obviously trained differently as well. But they talked about why they made their design choices for context. Kimi has more experts but fewer attention heads, and I believe a slightly smaller attention dimension, though I'd have to check that. It doesn't matter. They actually discussed this at length in a blog post on Zhihu, which is roughly the Chinese Quora.

swyx: Yeah.

Kyle: Yeah.
Kyle: It's actually an incredible blog post. All the ML people I've seen on Zhihu are very brilliant, and the creators of Kimi K2 [00:45:00] talked about it there in the post. They said: we actually did an experiment. Attention scales with the number of heads, obviously; if you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a very specific barter in their architecture. They basically said: what if we give it more experts, so we use more memory capacity, but keep the number of activated experts the same? We increase the expert sparsity, so the ratio of activated experts to total experts is smaller, and we decrease the number of attention heads.

Vibhu: For context, what we had been seeing was that you make models sparser instead; no one was really touching heads.

Kyle: Well, they implicitly made it sparser too.

Vibhu: Yeah, for Kimi they did. But basically, people had been operating at the level of a sparsity ratio: you want more total parameters and fewer active, and that's sparsity. [00:46:00] What you see from labs like Moonshot and DeepSeek is they go a level further: beyond the number of experts, you can also change how many attention heads you have, and use fewer or more attention layers.

Kyle: Yes.

Vibhu: And that all ties back to hardware-model co-design.

Kyle: Hardware-model-context co-design.

Vibhu: Yeah.

Kyle: Right.
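The Kimi-style trade Kyle walks through, halving attention heads while adding experts at a fixed activated count, checks out with toy arithmetic. The head counts follow the 64-vs-32 example from the conversation; the expert counts are illustrative assumptions:

```python
# Toy arithmetic for the trade described above: halve the attention
# heads (halving attention work at a fixed head dimension) while
# raising the total expert count at a fixed number of activated
# experts (raising sparsity). Expert counts are illustrative.

def attention_flops(seq_len: int, n_heads: int, head_dim: int) -> int:
    # QK^T plus AV per layer: still quadratic in seq_len, linear in heads.
    return 2 * n_heads * seq_len * seq_len * head_dim


def activation_ratio(total_experts: int, active_experts: int) -> float:
    # A smaller ratio means a sparser MoE at the same activated compute.
    return active_experts / total_experts


seq, head_dim = 8_192, 128
print(attention_flops(seq, 32, head_dim) / attention_flops(seq, 64, head_dim))  # 0.5
print(activation_ratio(256, 8), activation_ratio(384, 8))  # sparser with more experts
```

Halving heads halves attention FLOPs at every sequence length (the quadratic growth remains), while growing the expert pool at a fixed number of active experts adds capacity in memory rather than in per-token compute.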
Kyle: If you were training a model that's really good at super-short-context tasks, you might design it so that you don't care about attention scaling, because it never hits the turning point where the quadratic curve takes over.

Nader: How do you consider attention, or context, as a separate part of the co-design? The way I would have thought of it, hardware-model co-design just is hardware-model-context co-design.

Kyle: Because the harness, and the context the harness produces, is part of the model once it's trained in.

Vibhu: Even though toward the end you'll do long-context extension, you're not changing the architecture through [00:47:00] training.

Kyle: I mean, you can try.

swyx: You're saying everyone's training the harness into the model.

Kyle: I would say to some degree, or...

swyx: There's co-design for the harness. I know there's a small amount, but I feel like not everyone has gone full send on this.

Kyle: I think it's important to internalize into the model the harness that you think the model will be running in.

swyx: Yeah. Interesting. Okay, Bash is like the universal harness.

Kyle: Right. I'll give an easy proof: if you can train against a harness, and you're using that harness for everything, wouldn't you just train with the harness to ensure you get the best possible quality out of it?

swyx: Well, I can provide a counterargument, which is that you want to provide a generally useful model for other people to plug into their own harnesses.

Kyle: Yeah, but harnesses can be open source, right?

swyx: Yeah.
swyx: So I mean, that's effectively what's happening with Codex.

Kyle: Yeah.

swyx: But you may want, say, a different search tool, and then you may have to name it differently.

Nader: I don't know how much people have pushed on this, but have people compared training a model for the harness versus [00:48:00] post-training for it?

swyx: I think it's the same thing. It's just extra post-training.

Nader: I see.

swyx: Cognition does this, of course, where if your tool is slightly different, you either force your tool to look like the tool they trained for, or you undo their training for their tool and then retrain. It's really annoying.

Kyle: I would hope that eventually we hit a certain level of generality with respect to training on new tools.

swyx: This is not AGI. This is a really stupid "learn my tool" situation. I don't know if I can say that, but my point is: I look at the slopes of the scaling laws, and this slope is not working. We are at a million-token context...

Supra Insider
#100: Reflecting on two years and 100 podcast episodes | Marc Baselga & Ben Erez

Supra Insider

Play Episode Listen Later Mar 9, 2026 22:13


What does it take to go from “1 out of 10 chance we hit 100 episodes” to actually getting there? In this special milestone episode, Marc Baselga and Ben Erez reflect on reaching 100 episodes of Supra Insider. They share the raw truth about early imposter syndrome—having a Google Doc with pre-written questions, worrying about sounding stupid, focusing more on optics than enjoyment. They discuss the key turning points that made the podcast sustainable: bringing in an editor (reducing their workload from 6-8 hours per week to just recording), stopping the intro recordings, and setting fixed “sacred” time slots that never move. They explore what they've learned about guest selection (intuition-based, not heavily strategic), the tension between timeless vs. timely content, and what successful podcasts have in common—regardless of format. Whether it's Acquired (catalog value, timeless deep dives) or TBPN (daily, day-of relevant), the common thread is two co-hosts who genuinely enjoy each other, are obsessed with making it better over time, stay authentic, and avoid inorganic pressures that force the show to be something it isn't. If you're thinking about starting a podcast, struggling to make one sustainable, or wondering how to build something meaningful that fits your life—this episode is for you. All episodes of the podcast are also available on Spotify, Apple and YouTube. New to the pod? Subscribe below to get the next episode in your inbox

Mon Carnet, l'actu numérique
Mon Carnet for March 6, 2026

Mon Carnet, l'actu numérique

Play Episode Listen Later Mar 6, 2026 94:01


Mon Carnet, the podcast of Bruno Guglielminetti. Friday, March 6, 2026. The big French-language magazine of digital news. Presented by R2i.ca.
Debrief with Jérôme Colombain (3:00): a look back at tech news: MWC & Apple
Interview: Benoît Martel (R2i): digital sovereignty in Canada (24:51)
Columns:
Berthomet: Podcamp Toronto and earbud safety (42:11)
Michel: 30 years of Pokémon and a franchise's legacy (49:38)
Dupont-Gagnon: cybersecurity among young people (56:08)
Weber: vibe coding, or intuitive programming (1:01:58)
Ricoul: artificial intelligence and collective responsibility (1:09:26)
Interview: Jean-François Poulin: what's new in Figma (1:16:15)
Thanks to R2i for supporting the production of Mon Carnet. Contributors: Jérôme Colombain, Stéphane Berthomet, Carl-Edwin Michel, Catherine Dupont-Gagnon, Thierry Weber, Stéphane Ricoul, Jean-François Poulin. www.MonCarnet.com A Guglielminetti.com production. March 2026

Semiose Podcast
From intern to superintendent at Itaú: a 30-year career with Priscila Rocha | Semiose Podcast

Semiose Podcast

Play Episode Listen Later Mar 5, 2026 73:43


In this episode of the Semiose Podcast, we welcome Priscila Rocha, Superintendent of IT Engineering at Itaú, for a masterclass on career, resilience, and humanized leadership. Over a 30-year trajectory at the largest bank in Latin America, Priscila went from a Computer Science internship to the superintendency, today leading more than 250 people. She opens up about the challenges of being one of the few women in technology at the start of her career, and how taking ownership was essential to her growth. In this episode, we talk about:
- Professional growth: a 30-year journey inside Itaú, from mainframe developer to manager of large technical communities.
- Dealing with mistakes: the day she deleted a production database, and how feedback and the courage to ask for help transformed her view of failure.
- The pillars of leadership: why credibility, attitude, and positioning matter more than the job title itself.
- A development plan beyond the role: how to build an Individual Development Plan focused on competencies and skills, so you can "catch the right wave."
- Balance and purpose: the importance of volunteer work, faith, and family in building a meaningful life.
Guest's link: https://www.linkedin.com/in/priscilasrocha/
______________________________________
✅ Content recommendations:
UX/UI Design course: https://cursouidesign.com.br/
Figma course: https://cursofigma.com.br
Visual Design Fundamentals: https://fundamentosdodesign.com.br
Nielsen's Heuristics ebook: http://papodeux.com.br/conteudo/ebook-heursticas-de-nielsen
✅ Follow Semiose on social media:
Instagram: https://www.instagram.com/semiosepodcast/
LinkedIn: https://www.linkedin.com/company/semiosedesign/
TikTok: https://www.tiktok.com/@semiosepodcast
#podcastbrasil #podcastdesign #semiosepodcast

NerdOut@Spotify
34: Redesigning our Apple TV App *Release Notes*

NerdOut@Spotify

Play Episode Listen Later Mar 5, 2026 21:16


What does it take to ship a video-first Spotify experience on the biggest screen in your house? In this Release Notes episode of the NerdOut@Spotify podcast, you'll hear about the redesigned Spotify app on Apple TV — why we rebuilt it, what it took to build a fully native tvOS app from the ground up, and how we evolved from a templated TVML-based UI to a modern, pixel-perfect experience. The result is a faster, more visual experience tailored for a bigger screen, with expanded video support, AI DJ, on-screen lyrics, and improved navigation across the app. Along the way, we dig into tvOS focus-based navigation, service runtime and dependency wrangling, and the important role AI coding agents played in helping us move faster — from generating UI from Figma designs, to mapping complex dependency trees, to automating visual diffs for pixel perfection. The new Spotify app is available on the Apple TV App Store now. Read what else we're nerding out about on the Spotify Engineering Blog: engineering.atspotify.com You should follow us on Twitter @SpotifyEng, LinkedIn, and YouTube!

Alles auf Aktien
Schaeffler's hard crash and the new inflation risk

Alles auf Aktien

Play Episode Listen Later Mar 4, 2026 23:35


In today's episode, financial journalists Daniel Eckert and Lea Oetjen discuss the slide in Beiersdorf shares, telling insider sales, and a setback for precious metals. Also on the agenda: Deutsche Börse, Kion, Palantir, Robinhood, DraftKings, Figma, Coinbase, Circle, Tempus AI, ARK Innovation ETF (WKN: A14Y8H), Siemens Energy, Diageo, On Holding, Rheinmetall, Hensoldt, Renk, Ottobock, and Newmont. We welcome feedback at aaa@welt.de. You'll find even more "Alles auf Aktien" at WELTplus and Apple Podcasts, including all of the hosts' articles and the AAA newsletter. Here at WELT: https://www.welt.de/podcasts/alles-auf-aktien/plus247399208/Boersen-Podcast-AAA-Bonus-Folgen-Jede-Woche-noch-mehr-Antworten-auf-Eure-Boersen-Fragen.html. The stock-market podcast. Disclaimer: the stocks and funds discussed in the podcast are not specific buy or investment recommendations. The hosts and the publisher accept no liability for any losses arising from acting on the thoughts or ideas discussed. Listening tip: for anyone who wants to know even more, you can hear Holger Zschäpitz every week on the finance and business podcast "Deffner&Zschäpitz". +++ Advertising +++ Want to learn more about our advertising partners? You'll find all the info & discounts here! https://linktr.ee/alles_auf_aktien Imprint: https://www.welt.de/services/article7893735/Impressum.html Privacy policy: https://www.welt.de/services/article157550705/Datenschutzerklaerung-WELT-DIGITAL.html

UXpeditious: A UserZoom Podcast
Speed is no longer the constraint in product design—judgment is | Figma's Andrew Hogan

UXpeditious: A UserZoom Podcast

Play Episode Listen Later Mar 2, 2026 46:08


Episode web page: https://bit.ly/4sg3a3k Episode summary: In this episode of Insights Unlocked, Jason Giles sits down with Andrew Hogan, who leads insights at Figma, to explore what the future of design looks like as AI reshapes product development. Drawing from Figma's State of Design 2026 report and recent hiring research, Andrew shares why more people than ever are participating in design—and what that means for craft, quality, and leadership. With 60% of new Figma files created by non-designers, design is becoming shared infrastructure across organizations. Andrew and Jason unpack the tension between speed and confidence in AI-enabled workflows, debating whether craft is about polish, problem solving, or something deeper. They explore why taste and discernment matter more in a world where you can generate 30 design variations in seconds—and how leaders must define what “good” looks like if they want to scale quality. The conversation also dives into hiring trends, the growing demand for senior designers who can navigate complexity, and the importance of strong design systems as more cross-functional teams begin prototyping. Ultimately, the episode reframes AI not as a replacement for designers, but as an accelerator that increases the need for thoughtful validation, customer understanding, and clear decision-making. 
You'll learn:
Why AI makes taste and discernment more important—not less
What the State of Design 2026 reveals about craft and hiring trends
Why speed is increasing faster than confidence
How design systems help scale quality across teams
What leaders should define before scaling AI-driven workflows
How to avoid false confidence when using AI prototypes
Why design is becoming infrastructure inside modern organizations

Resources & links:
Andrew Hogan on LinkedIn (https://www.linkedin.com/in/ahhogan/)
Figma State of Design 2026 report (https://www.figma.com/reports/state-of-the-designer-2026/)
IDC study on the growing design workforce (https://www.figma.com/blog/idc-design-population-study/)
Jason Giles on LinkedIn (https://www.linkedin.com/in/jaygiles/)
UserTesting's latest report: Defensible Design in the Age of AI (https://www.usertesting.com/resources/reports/defensible-design-in-the-age-of-ai)
Nathan Isaacs on LinkedIn (https://www.linkedin.com/in/nathanisaacs/)

Learn more about Insights Unlocked: https://www.usertesting.com/podcast

Supra Insider
#99: How the air force prepared me for product management | Yaniv Fatal (Founding PM @ Blast Security, formerly @ Wiz)

Supra Insider

Play Episode Listen Later Mar 2, 2026 72:06


What does it take to go from zero tech experience to founding PM at a cybersecurity startup in three years? In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Yaniv Fatal, founding product manager at Blast Security, to unpack his remarkable journey from elite Israeli Air Force pilot to tech. After 13 years in the military and zero technical background, Yaniv failed 20+ interviews before landing at Wiz (later acquired by Google for $32B). He shares how he applied pilot debriefing methodology to each rejection, learned cloud security from absolute zero in weeks, and built credibility through relentless questioning and delivering results nobody else could. They explore Yaniv's philosophy on learning: mastering fundamentals first (no shortcuts), being comfortable asking “dumb questions,” and the belief that you don't really understand something until you can teach it. Plus, his approach to long-term goal setting—he and his wife keep a notebook with goals for where they want to be at age 45, including his aim to be CEO or C-level, which drives every decision he makes today. And why product management is his chosen path to that goal, inspired by the fact that CEOs of Google and Microsoft were all PMs first. If you're considering a major career transition, struggling with imposter syndrome while learning something completely new, or trying to figure out how to set goals that actually drive your daily decisions—this episode is for you. All episodes of the podcast are also available on Spotify, Apple and YouTube. New to the pod? Subscribe below to get the next episode in your inbox

Lenny's Podcast: Product | Growth | Career
The design process is dead. Here's what's replacing it. | Jenny Wen (head of design at Claude)

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Mar 1, 2026 77:25


Jenny Wen leads design for Claude at Anthropic. Prior to this, she was Director of Design at Figma, where she led the teams behind FigJam and Slides. Before that, she was a designer at Dropbox, Square, and Shopify.

We discuss:
1. Why the classic discovery → mock → iterate design process is becoming obsolete
2. What a day in the life of a designer at Anthropic looks like, including her AI tool stack
3. Whether AI will eventually surpass humans in taste and judgment
4. Why Jenny left a director role at Figma to return to IC work at Anthropic
5. The three archetypes Jenny is hiring for now
6. Why chatbot interfaces may be more durable than most people expect

Brought to you by:
Mercury—Radically different banking: https://mercury.com/?utm_source=lennys&utm_medium=sponsored_newsletter&utm_campaign=26q1_brand_campaign
Orkes—The enterprise platform for reliable applications and agentic workflows: https://www.orkes.io/
Omni—AI analytics your customers can trust: https://omni.co/lenny

Episode transcript: https://www.lennysnewsletter.com/p/the-design-process-is-dead

Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Jenny Wen:
• X: https://x.com/jenny_wen
• LinkedIn: https://www.linkedin.com/in/jennywen
• Substack: https://jennywen.substack.com
• Website: https://jennywen.ca

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Jenny Wen
(04:23) Why the traditional design process is dead
(06:33) The two new types of design work
(10:00) How widespread this shift will be
(13:00) Day-to-day life as a designer at Anthropic
(18:45) Jenny's AI stack
(20:03) Why Figma still matters for exploration
(22:25) Advice for working with engineers
(24:19) How to maintain craft, quality, and trust in the AI era
(27:35) Will AI ever have “taste”?
(31:38) The future of chatbot interfaces
(35:33) Moving from director back to IC
(41:00) The 10-day build of Claude Cowork
(46:06) Hiring: the three archetypes
(50:44) Advice for new and senior designers
(54:42) The value of “low leverage” tasks for managers
(57:52) Why the best teams roast each other
(01:01:45) The legibility framework
(01:07:22) Lightning round and final thoughts

Referenced:
• Figma: https://www.figma.com
• Anthropic: https://www.anthropic.com
• v0: https://v0.app
• Navigating a Design Career with Jenny Wen | Figma at Waterloo: https://www.youtube.com/watch?v=OHcBPMh2ivk
• Claude Cowork: https://claude.com/product/cowork
• Use Claude Code in VS Code: https://code.claude.com/docs/en/vs-code
• Claude Code in Slack: https://code.claude.com/docs/en/slack
• Lex Fridman's website: https://lexfridman.com
• Head of Claude Code: What happens after coding is solved | Boris Cherny: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
• OpenClaw: https://openclaw.ai
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Marc Andreessen: The real AI boom hasn't even started yet: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom
• Socratica: https://www.socratica.info
• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next
• Radical Candor: From theory to practice with author Kim Scott: https://www.lennysnewsletter.com/p/radical-candor-from-theory-to-practice
• Evan Tana's ‘legibility matrix' on X: https://x.com/evantana/status/1927404374252269667
• How to spot a top 1% startup early: https://www.lennysnewsletter.com/p/how-to-spot-a-top-1-startup-early
• Palantir: https://www.palantir.com
• Stripe: https://stripe.com
• Linear: https://linear.app
• Notion: https://www.notion.com
• Julie Zhuo's website: https://www.juliezhuo.com
• Sentimental Value: https://www.imdb.com/title/tt27714581
• The Pitt on Prime Video: https://www.amazon.com/The-Pitt-Season-1/dp/B0DNRR8QWD
• Noah Wyle: https://en.wikipedia.org/wiki/Noah_Wyle
• ER on Prime Video: https://www.amazon.com/gp/video/detail/B0FWZSDYRP
• Retro: https://retro.app
• Granola: https://www.granola.ai

Recommended books:
• Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity: https://www.amazon.com/Radical-Candor-Kick-Ass-Without-Humanity/dp/1250103509
• The Power Broker: Robert Moses and the Fall of New York: https://www.amazon.com/Power-Broker-Robert-Moses-Fall/dp/0394480767
• Insomniac City: New York, Oliver Sacks, and Me: https://www.amazon.com/Insomniac-City-New-York-Oliver/dp/162040494X

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

狗熊有话说
#548 Stop waiting for an engineer: publish your website yourself with Claude

狗熊有话说

Play Episode Listen Later Feb 27, 2026 18:51 Transcription Available


### Episode overview
Do you have a Figma landing page whose design was finished long ago but never shipped? What blocks you is usually not the design itself, but the "technical" work: environment setup, responsive adaptation, deployment, and domains. In this episode, Bear walks through the full process of using **Claude Code** to publish a static Figma design as a real website: zero programming experience, done in half a day, entirely through natural language. It applies to static sites such as landing pages, portfolios, and case studies.

---

### The core workflow
**Step 1: Plan globally with Plan Mode**
In Claude Code, press `Shift + Tab × 2` to enter Plan Mode and have the AI draw up a complete plan before executing anything. Framework, steps, and dependencies are laid out at a glance; start only once you're satisfied.

**Step 2: Connect the Figma MCP and extract design tokens**
Give Claude the Figma design link and let it connect to the MCP to automatically identify design tokens such as colors, fonts, and spacing, along with the structure of each page section.

**Step 3: Set up a local environment and reproduce the design**
Using **Next.js + Tailwind CSS** as the framework, in about 20 minutes Claude can reproduce 90% of the design as a locally runnable site.

**Step 4: Make it responsive, but don't rely entirely on AI**
During mobile adaptation, if the AI keeps looping on the same problem (such as how the hero image is cropped), don't burn tokens on it: **crop the image yourself in Figma and swap it in directly**. That is far more efficient, and it's the most important lesson of this episode.

**Step 5: Screenshot and paste to fine-tune details**
When something doesn't match the design, screenshot it, paste it into Claude with `Ctrl+V`, and describe the problem; it will fix it against the original design. Annotating with arrows works even better. It's like collaborating with a developer sitting next to you.

**Step 6: Push to GitHub, deploy to Vercel, connect the domain**
Once everything is done, have Claude push the code to GitHub, connect Vercel for hosting, and bind your own domain. It even generated a README and a blog draft along the way.

---

### Three key takeaways
1. **Control the scope**: you're shipping a landing page, not building a product; stay restrained
2. **Plan first, then iterate**: lead with Plan Mode, paired with small visual checks
3. **Know the boundaries**: AI tends to get stuck on subjective visual judgments, and manual intervention there is faster

---

### Tools and resources mentioned
-

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Anthropic Wipes Billions Off Markets | Citrini Research: The Ultimate Breakdown: Agents, "Ghost GDP", Consumer Spend etc. | Figma Earnings Beat & Four Public Stocks to Buy | Jack Altman Joins Benchmark

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Feb 26, 2026 80:09


AGENDA: 03:55 Anthropic Security Product Wipes Billions Off Public Markets 11:17 Do Agents Turn SaaS Incumbents into Valueless Databases 22:07 Anthropic Secondary Sale Makes Hundreds Decamillionaires 23:20 Citrini Research Piece: Everything You Need To Know 26:04 Will DoorDash Be Replaced by Agents 34:22 Will "Ghost GDP" Soften Consumer Spending Power 42:46 Why No Public Company Has Created a Good Agent Product 47:19 Is Tech Private Equity and Thoma Bravo F***** in this Market 51:05 OpenAI Massively Increases Spending Plans: Analysis 56:24 Figma Fights Back: Earnings Through the Roof 01:02:12 Momentum Versus Value: Four Public Stocks to Buy 01:09:30 Jack Altman Joins Benchmark Capital    

The No Film School Podcast
What These DPs Used Instead of Stills to Land Their Sundance Films

The No Film School Podcast

Play Episode Listen Later Feb 26, 2026 57:19


Recorded live at the 2026 Sundance Film Festival in Park City, this annual Director of Photography Roundtable features No Film School's GG Hawkins in conversation with cinematographers Lidia Nikonova, Sam Levy, and Maria Herrera. The group discusses their unconventional paths into cinematography, from orchestras and photojournalism to weddings and radio DJing, how they landed their Sundance projects, and why connection, rhythm, and trust matter more than flashy lookbooks. They also break down the tools they used to communicate vision, navigate long dialogue scenes, and adapt to technical and emotional challenges on set.

In this episode, No Film School's GG Hawkins and guests discuss…
Shooting at the 2026 Sundance Film Festival and hosting at the BraveMaker house
Maria Herrera's transition from music to cinematography and operating handheld for emotionally intense performances
Sam Levy's mentorship under Harris Savides and how that shaped his approach to narrative filmmaking
Lidia Nikonova's journey from photojournalism and the Canon EOS 5D Mark II to AFI and shooting narrative features
How each DP landed their Sundance projects through relationships, cold emails, and creative chemistry
When to bring visual references to a director meeting, and when to just listen
Using tools like Figma to build collaborative lookbooks and visual worlds
Why dialogue rhythm and musicality influence cinematography choices
Shooting on 35mm with an Arricam ST versus digital on the ARRI Alexa 35
Working with vintage Super Baltar lenses (famously used on The Godfather) for a modern crime thriller
Referencing L'Argent by Robert Bresson for insert shots and cinematic economy
How to approach 10+ page dialogue scenes without losing visual intention
The value of shooting weddings and low-budget projects to build craft and confidence
Advice for emerging cinematographers: show up early, trust your vision, and get your reps in

Memorable Quotes:
“This child will never play a musical instrument ever in her life.”
“If you have good dialogues, it's like, okay, here's something.”
“Just connect with her.”
“Show up at least one hour early… and do not use your phone on set.”

Guests: Lidia Nikonova, Sam Levy, Maria Herrera

Find No Film School everywhere:
On the Web: No Film School
Facebook: No Film School on Facebook
Twitter: No Film School on Twitter
YouTube: No Film School on YouTube
Instagram: No Film School on Instagram

The Tech Trek
The Hiring Mistake That Kills Most Startups (And What to Do Instead)

The Tech Trek

Play Episode Listen Later Feb 24, 2026 27:12


Riya Grover, CEO and co-founder of Sequence, breaks down what a good CEO actually looks like when the job is messy, fast, and high-stakes. This is a practical conversation about building excellence through people, clarity, and direction, not through heroics or micromanagement. Riya runs a revenue automation platform for finance teams, helping companies automate order-to-cash, billing, invoicing, accounts receivable, and revenue recognition. From that seat, she shares a founder-level view on leadership that is direct, repeatable, and built for real operating constraints.

Key takeaways
• The CEO's highest-leverage job is building the bench; your company becomes the team you assemble
• High-performance culture comes from a clear bar, fast decisions when it is not met, and leaders who own outcomes
• Great teams do not need more policies; they need context, goals, trade-offs, and clarity
• Separate reversible decisions from irreversible ones: move fast on two-way doors, slow down on one-way doors
• Hiring signal to watch: motivation and hunger for the stretch challenge often beats the "done it before" resume

Timestamped highlights
00:32 What Sequence does, and why order-to-cash is still painfully manual
01:48 The CEO role is less about functions, more about direction and execution
03:23 Excellence starts with talent density; do not compromise on the bar
06:10 Why companies win: direction plus distribution, and the Figma example
11:01 Getting real feedback as a leader: how to reduce hierarchy and increase ownership
14:39 "They need clarity": decision frameworks over micromanagement
18:01 The hidden damage of the founder weighing in on every micro decision
20:53 Hiring underrated talent: motivation, ambiguity tolerance, and the stretch role
24:38 Why the CEO should invest time in hiring; the leverage math is obvious

A line worth keeping
They do not need policies, they need clarity.

Pro tips you can steal
• Promote leaders who have done the job and set the pace; it earns trust and improves decision quality
• Give teams context and constraints, then treat your input like any other input
• Use the door test: reversible decisions get speed and delegation, irreversible ones get more diligence
• In hiring, look for motivation plus clear thinking, then bet on aptitude over the perfect background

Call to action
If this episode helped you think more clearly about leadership and hiring, follow the show and share it with one operator who is building under pressure. New conversations drop with different guests and different problems, so you always have something useful to steal.

狗熊有话说
#547 设计师为什么要试试 Google Stitch?- Skip the Blank Page: A Designer's Real Workflow with Stitch and Figma

狗熊有话说

Play Episode Listen Later Feb 24, 2026 18:29 Transcription Available


In this episode, Bear uses his own FDP (fractional design partner) landing page as a real case study and walks through a complete AI-assisted design workflow: start in Stitch, finish in Figma. If you have not yet folded AI tools into your design process, this one is worth a listen.

Parlons Design
#402 Top 5 : Les outils Figma à découvrir en 2026 !

Parlons Design

Play Episode Listen Later Feb 24, 2026 11:11


From downright magical AI features to forgotten traditional tools, discover (or rediscover) some hidden gems inside Figma that could well boost your efficiency! Learn more about the UX France training at https://uxfrance.com or by contacting commercial.uxfrance@gmail.com directly.

Supra Insider
#98: Why mid-career people are doubling down on self-learning | Gagan Biyani (CEO and Co-Founder @ Maven)

Supra Insider

Play Episode Listen Later Feb 23, 2026 74:20


What if the biggest barrier to learning AI isn't the tools, but how we approach learning itself?

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Gagan Biyani, CEO and co-founder of Maven, to unpack why this moment is critical for mid-career professionals to prioritize self-learning. Gagan shares lessons from running a cohort-based learning platform and conducting 30-50 interviews with companies struggling to adopt AI. He explains why AI is like witnessing the internet as a child (you can't afford not to learn it) and why building the learning habit matters more than what you learn first.

They explore the five problems companies face with AI education: trying to generalize training when every role needs different tools, listening to tinkerers instead of bridge adopters, and delegating to chiefs of staff instead of having C-level sponsors run the trainings. Gagan shares Maven's own journey: why their design team needed to rebuild the design system before AI could be useful, how they're changing team ratios from 3-4 engineers per designer to just 2, and why social media is terrible for learning anything that requires weeks of dedication.

If you're a mid-career professional feeling overwhelmed by AI, a leader trying to build a culture of self-learning at your company, or wondering how to actually integrate AI into your workflows, this episode is for you.

All episodes of the podcast are also available on Spotify, Apple and YouTube.

New to the pod? Subscribe below to get the next episode in your inbox.

Keen On Democracy
The Silicon Gods Must Have Their Blood: How Public Venture Capital Might Kill Venture Capitalism

Keen On Democracy

Play Episode Listen Later Feb 21, 2026 38:19


"They are changing venture capital from a 30% tax to 0% tax. If Robinhood succeeds, it makes Sequoia and Andreessen's business model untenable." — Keith TeareThe Silicon Gods must have their blood. And they've finally come for the funders of disruption, the venture capitalists, who are now being disrupted by something called Public Venture Capital (PVC). That, at least, is the view of That Was The Week publisher Keith Teare, who leads his newsletter this week with Robinhood's new venture fund. This new stock-trading app for millennials is going after Sequoia and Andreessen Horowitz—not by competing on deal flow, but by charging 0% carry instead of 20-30%. Robinhood promises it blows the doors off traditional venture capital.But Keith urges caution over PVCs. Robinhood is packaging late-stage private assets—companies like Databricks that would have IPO'd years ago but are staying private longer. By the time retail investors get access, employees are already cashing out through tender offers because they think the peak is near. The poster child: Figma, which did secondaries at $12 billion after Adobe's $20 billion acquisition failed. A lot of (dumb) people bought at the top and are now slightly less stupid.Fortunately, this week's tech roundup isn't just about get-rich-quick investment schemes. We also discuss Yasha Mounk's sobering experiment: he asked AI to write a political philosophy paper and found it "depressingly good"—publishable in an academic journal. Keith reframes this supposed "death of the humanities" as automation, not democratization. The humans aren't being leveled up; they're masquerading as producers while AI does the work. But craft still matters. When technology relieves humans of the mundane, he hopes, it elevates the special.Lastly but not least, we get to the abundance debate. Peter Diamandis and Singularity University have promised something called "exponential abundance" by 2035. Keith is sympathetic. I am not. 
The only thing I'm willing to guarantee is that we'll still be talking abundantly about abundance in 2035. And that the Silicon Valley Gods will have their blood. Five Takeaways●      Robinhood Is Charging 0% Carry: Sequoia and Andreessen take 20-30% of profits. Robinhood takes nothing. If they scale, the traditional VC model becomes untenable.●      But You're Buying at the Top: These are late-stage assets. Employees are selling through tender offers because they think peak valuation is near. Ask the people who bought Figma at $12 billion.●      AI Is Automating the Humanities: Yasha Mounk found AI could write "depressingly good" political philosophy. This isn't democratization—it's humans masquerading as producers.●      Craft Still Retains Its Power: Technology relieves humans of the mundane—and elevates the special. Creativity that breaks through will always command attention.●      The Abundance Debate Continues: Diamandis says abundance by 2035. Keith agrees land is already abundant. Andrew calls this "such a stupid thing to say." About the GuestKeith Teare is the publisher of That Was The Week and Executive Chairman of SignalRank. He is a serial entrepreneur and longtime observer of Silicon Valley. Keith joins Keen On America every Saturday for The Week That Was.ReferencesCompanies mentioned:●      Robinhood is launching a publicly listed venture fund, raising up to $1 billion at $25/share with 0% carry. They already have $340 million in assets including Databricks.●      Figma is cited as a cautionary tale: after Adobe's failed $20 billion acquisition, it did secondaries at $12 billion—many bought at the top.●      Polymarket is a prediction market platform that Robinhood has responded to by adding prediction markets to its offerings.People mentioned:●      Yasha Mounk wrote about AI writing "depressingly good" political philosophy papers that could be published in academic journals.●      Peter Diamandis and Dr. 
Alexander Wisner-Gross of Singularity University argue that exponential abundance is coming by 2035.●      Packy McCormick wrote about power in the age of intelligence.About Keen On AmericaNobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States—hosting daily interviews about the history and future of this now venerable Republic. With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.WebsiteSubstackYouTubeApple PodcastsSpotify Chapters:(00:00) - Introduction: If it's Saturday, it must be revolution (02:11) - Robinhood's venture fund announcement (03:17) - What is Robinhood's day job? (07:43) - Secondary markets and tender offers (10:33) - Democratization or late-stage risk? (14:09) - Is Robinhood just gambling? (16:08) - Private vs. public market returns (19:02) - Is finance merging with betting? (24:23) - Blowing the doors off Sequoia and Andreessen (26:27) - Yasha Mounk: AI automating the humanities (28:47) - Where does power go in the age of AI? (30:42) - Craft retains its power (31:33) - The abundance debate (34:00) - Is land abundant? Andrew loses patience (00:00) - Chapter 15 (00:00) - Chapter 16 (00:00) - Introduction: If it's Saturday, it must be revolution (02:11) - Robinhood's venture fund announcement (03:17) - What is Robinhood's day job? (07:43) - Secondary markets and tender offers (10:33) - Democratization or late-stage risk? (14:09) - Is Robinhood just gambling? (16:08) - Private vs. public market returns (19:02) - Is finance merging with betting? (24:23) - Blowing the doors off Sequoia and Andreessen (26:27) - Yasha Mounk: AI automating the humanities

This Week in Pre-IPO Stocks
E248: OpenAI $280B in 2030 revenue! + “buys” OpenClaw; Grafana $9B valuation; World Labs $5B valuation; + more

This Week in Pre-IPO Stocks

Play Episode Listen Later Feb 21, 2026 19:52


Send a text
Invest in pre-IPO stocks with AG Dillon & Co. Contact aaron.dillon@agdillon.com to learn more. Financial advisors only. www.agdillon.com
00:00 - Intro
00:02 - AG Dillon Funds closing on Mar 31, 2026
00:51 - OpenAI Financials: $280B revenue target meets $665B cost wall
03:58 - OpenAI "buys" OpenClaw, Steinberger joins OpenAI
04:42 - OpenAI Series C aims to shatter records at $850B post money
05:41 - OpenAI and Tata bet on India with a 100 MW to 1 GW buildout path
06:29 - Grafana's $9B round talks ride a $400M ARR wave
07:23 - World Labs lands Autodesk and targets a rumored $5B valuation
08:18 - Temporal wants to be the load-bearing layer for agent execution
09:31 - Mesh Optical's $50M Series A targets the chokepoint inside AI data centers
10:43 - Render's $1.5B valuation is a bet that AI apps need a new runtime
11:40 - Stash acquired by Grab for $425M
13:06 - Physical Superintelligence pitches a physics breakthrough factory with a 20-person team
14:07 - Figma plugs Claude Code into design and risks losing the workflow
15:00 - Anthropic ships Sonnet 4.6 just 12 days after Opus 4.6
15:26 - Stripe's Bridge wins OCC trust charter signal as stablecoin scrutiny rises
16:37 - Cohere puts 70-plus languages on device with a 3.35B parameter model
17:53 - ElevenLabs turns agent risk into an insurable product at $12.2B secondary
19:05 - Mistral buys Koyeb and adds 16 engineers to harden its compute stack

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Anthropic Raises $30BN at $380BN Valuation | Thrive Raises New $10BN Fund | OpenAI Buys OpenClaw | Stripe Raises at $140BN: Is Adyen Wildly Undervalued? | Monday, Figma, Shopify: Which are Buys vs Sells?

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Feb 19, 2026 94:11


AGENDA: 04:14 Anthropic's $30B Raise at $380B 06:18 Why SaaS Stocks Keep Getting Crushed 18:15 Wall Street's New Religion: AI Replaces Headcount  22:42 The Bear Case for Shopify: What Could Go Wrong? 31:51 Replit and Lovable are Proof Figma Missed Out: Figma; Buy or Sell?  48:42 Stripe Raises at $140BN: Is Stripe Wildly Overvalued or Adyen Undervalued?  54:36 OpenAI Buys OpenClaw 01:06:28 Thrive's $10B Growth Fund 01:09:10 Arif Janmohamed Leaves Lightspeed for New Firm 01:17:12 Workday's Founder Returns as CEO: Will it Work?  01:20:34 Which Founder Returns Next: HubSpot, Twilio, Gitlab? 01:24:03 Is Monday.com a Screaming Buy? 01:28:25 Jason and Harry Bet $200,000  

Squawk on the Street
Walmart Beats, OpenAI's Altman and Anthropic's Amodei Talk Exclusively to CNBC 2/19/26

Squawk on the Street

Play Episode Listen Later Feb 19, 2026 43:36


Carl Quintanilla, Jim Cramer and David Faber discussed market reaction to Walmart's Q4 beat and what new CEO John Furner said on the earnings call about consumer spending. OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei refused to hold hands during a group photo shoot with tech leaders at an AI summit in India. Both men spoke exclusively to CNBC: Altman on the U.S.-China AI arms race, Amodei on AI's effect on jobs. Also in focus: OpenAI's march toward a new $100 billion funding round, more pain for software stocks, Etsy jumps on the sale of second-hand fashion app Depop to eBay, Blue Owl slides on a report about redemptions, a flashback to what Jim said about Figma on the date of its stellar public debut in July 2025.   Squawk on the Street Disclaimer Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Rundown
Walmart Issues Soft Guidance, Figma Sees AI Surge

The Rundown

Play Episode Listen Later Feb 19, 2026 9:55


Market update for Thursday, February 19, 2026
Check out the Public app for incredible investing tools and to support the show (LINK)
Follow us on Instagram (@TheRundownDaily) for bonus content and instant reactions.
In today's episode:
Bitcoin drops near 2026 lows as crypto enthusiasm fades
Walmart beats on earnings but issues cautious guidance
Figma revenue jumps 40% as AI monetization accelerates
Etsy sells Depop to eBay for $1.2B
Carvana stock slides after earnings miss
Amazon surpasses Walmart in annual revenue

TechCheck
China's Lunar New Year tech showcase, Plus Figma's Anthropic partnership 2/17/26

TechCheck

Play Episode Listen Later Feb 17, 2026 6:35


China's tech giants are kicking off the Lunar New Year with a wave of AI and robotics announcements. Plus, what Figma's new “Code to Canvas” partnership with Anthropic means for the future of software. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Parlons Design
#401 Prototypage : Le guide design complet - Figma, Lovable, Google Sheets et +

Parlons Design

Play Episode Listen Later Feb 17, 2026 12:49


When should you prototype? What should you prototype? With which tools? We walk through all of these design questions together in this short guide to prototyping! Moving from Figma to Lovable, with a small detour through Google Sheets and PowerPoint, discover a whole new range of prototyping possibilities! Learn more about the UX France training at https://uxfrance.com or by contacting commercial.uxfrance@gmail.com directly.

UXpeditious: A UserZoom Podcast
How TruStage's design team operationalized UX research

UXpeditious: A UserZoom Podcast

Play Episode Listen Later Feb 16, 2026 41:49


Episode web page: https://bit.ly/4k9H4fT

Episode summary: In this episode of Insights Unlocked, design and research leaders from TruStage share how they transformed UX research from an inconsistent, ad-hoc effort into a scalable, trusted practice embedded directly within their design team. Through a creative “cookbook” framework, the team built shared standards, accelerated time to insights, and increased stakeholder confidence, without sacrificing flexibility or creativity.

What you'll learn
Why TruStage shifted from siloed research teams to an embedded UX research model
How a visual “cookbook” system helped standardize research without making it rigid
The power of shared language and artifacts to build stakeholder trust and buy-in
How repeatable research “meal plans” enabled faster pivots and better decision-making
What it takes to scale research volume while improving quality and consistency

Key themes and ideas
From potluck to practice. The TruStage team describes their early research approach as a “potluck”: rich in individual expertise but lacking consistency. By designing a shared system, they moved toward a polished, repeatable research practice that stakeholders could rely on.
The research cookbook framework. Using food metaphors, the team created recipes for designers and researchers that explain how to run specific studies, menus for stakeholders that clearly outline value, effort, and outcomes, and meal plans that bundle methods together across stages of the product lifecycle. This framework helped align internal teams and external partners around expectations, scope, and impact.
Embedding research into everyday workflows. By building the system directly in Figma and connecting it to their agile tooling, TruStage made research easy to plan, prioritize, and execute, removing friction that previously slowed teams down.
Scaling impact through trust and clarity. Clear artifacts and shared standards made research easier to explain, faster to approve, and more likely to be requested. As a result, the team more than doubled the number of research stories completed year over year and shifted from “selling” research to responding to demand.
Empowering teams through co-creation. Rather than dictating a process from the top down, the team involved designers across experience levels in shaping the system. This created stronger ownership, higher adoption, and a culture where research felt both accessible and fun.

Advice for teams operationalizing research
Lean into tools your team already loves and uses daily
Invest time in shared philosophy and language, not just templates
Co-create systems with the people who will use them
Treat research operations as an evolving practice, not a one-time deliverable

Resources & links
TruStage's website (https://www.trustage.com/)
Nick Higbee on LinkedIn (https://www.linkedin.com/in/nicholas-higbee-95540425/)
Benny Brooks on LinkedIn (https://www.linkedin.com/in/thebenbrooks/)
Betsy Drews on LinkedIn (https://www.linkedin.com/in/betsy-drews-4a30256b/)
Natalie Padilla on LinkedIn (https://www.linkedin.com/in/natalie-weiner/)
Nathan Isaacs on LinkedIn (https://www.linkedin.com/in/nathanisaacs/)

Learn more about Insights Unlocked: https://www.usertesting.com/podcast

Supra Insider
#97: What it means to be a forward-deployed product leader | Chase Schwalbach (SVP Product & Technology @ Millie)

Supra Insider

Play Episode Listen Later Feb 16, 2026 70:40


What if the best way to lead product is to build it yourself first?

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Chase Schwalbach, SVP of Product and Technology at Millie, to unpack a radically different approach to product leadership. Despite his title, Chase spent months as an IC, rolling up his sleeves to build healthcare infrastructure, teach himself AI eval systems, and ship a sophisticated patient chatbot, all before bringing his team in. He explains why shielding the team from early-stage messiness, moving at speed, and feeling the pain yourself leads to better products.

They explore how Chase built a team of AI agents (supervisor + specialized sub-agents) from scratch, why treating prompts like deterministic code requires extreme precision, and how he taught himself evals through pure iteration. Plus: the converging worlds of PM and engineering, why technical PMs and product-minded engineers are becoming the same role, why handoffs kill velocity in an AI-native world, and what “context engineering” actually means when your codebase needs to work for both humans and AI agents.

If you're a product leader wondering whether to get more hands-on, an engineer considering the jump to PM (or vice versa), or building AI systems in regulated industries like healthcare, this episode is for you.

All episodes of the podcast are also available on Spotify, Apple and YouTube.

New to the pod? Subscribe below to get the next episode in your inbox.

Chip Stock Investor Podcast
Software Apocalypse or Opportunity? Interview with Braden Dennis, CO-Founder and CEO of Fiscal.ai

Chip Stock Investor Podcast

Play Episode Listen Later Feb 16, 2026 23:20


Is AI eating the software industry, or is it just making it more powerful?

In this episode, we sit down with Braden Dennis, CEO and co-founder of Fiscal.ai, to discuss the shift happening in enterprise SaaS. If you've watched our videos, you know we use Fiscal's charts every single day to analyze the markets, so it was great to get Braden's perspective on where the industry is headed.

We dive deep into the software-apocalypse narrative and whether it's based in reality or just a market overreaction. Braden explains why maintaining software is getting easier, how his engineering team has achieved 10x productivity, and why internal AI solutions are coming for the "busy work" that off-the-shelf SaaS can't solve.

Join us on Discord with Semiconductor Insider; sign up on our website: www.chipstockinvestor.com/membership
Supercharge your analysis with AI! Get 15% off your membership with our special link here: https://fiscal.ai/csi/
Sign Up For Our Newsletter: https://mailchi.mp/b1228c12f284/sign-up-landing-page-short-form

Chapters:
0:00 – Is AI Eating Software?
1:12 – Meet Braden Dennis, CEO of Fiscal.ai
1:45 – Why Software Engineering Has Changed Completely
2:40 – 2026 Outlook: Opportunities vs. Traps
3:30 – What Software Is Becoming Obsolete?
4:30 – Automating the "Unsolvable" Internal Busy Work
5:15 – "Intelligence in the Sky": A New Data Layer
6:10 – Pricing Power Debate: Will Clients Pay Less?
7:45 – Broadcom & VMware Case Study
8:55 – Comparing the Software Correction to 2018 Semiconductors
11:45 – Lessons on Market Cyclicality
13:55 – The Problem with Late-Stage Venture Capital
16:00 – Why We Need More Tech IPOs
18:10 – The Incentive for Founders to Stay Private
20:00 – Evaluating Figma and Adobe in the AI Age
21:30 – ServiceNow: Narrative vs. Financial Reality
22:45 – Final Verdict: Being Selective in a Sell-off

If you found this video useful, please make sure to like and subscribe!

*********************************************************

Affiliate links are sprinkled throughout this video. If something catches your eye and you decide to buy it, we might earn a little coffee money. Thanks for helping us (Kasey) fuel our caffeine addiction!

Content in this video is for general information or entertainment only and is not specific or individual investment advice. Forecasts and information presented may not develop as predicted and there is no guarantee any strategies presented will be successful. All investing involves risk, and you could lose some or all of your principal.

#AI #SaaS #SoftwareStocks #Investing #ChipStockInvestor #FiscalAI #TechInvesting #ServiceNow #stockmarket2026

Nick and Kasey own shares of Adobe, Figma, ServiceNow

Les Cast Codeurs Podcast
LCC 337 - Datacenters Carrier Class dans l'espace

Les Cast Codeurs Podcast

Play Episode Listen Later Feb 16, 2026 94:19


Emmanuel and Guillaume discuss a range of programming topics, including file systems in Java, data-oriented programming, the challenges of JPA with Kotlin, and the new features in Quarkus. They also explore somewhat wilder subjects, such as building datacenters in space, plus a fair amount of architecture. Recorded February 13, 2026. Download the episode LesCastCodeurs-Episode-337.mp3, or watch it on YouTube.

News

Languages

How to implement a file system in Java: https://foojay.io/today/bootstrapping-a-java-file-system/
Build a custom Java file system with NIO.2 for varied uses (VCS, archives, remote systems).
Java's evolution: java.io.File (1.0), then NIO (1.4), then NIO.2 (1.7), which allows customization via FileSystem.
Design up front; the Java API is POSIX-oriented.
Key components to consider: URI design (unique scheme, path); tree management (database, metadata, efficiency); binary storage (location, encryption, versions).
Minimum to get started (4 components): implement Path (represents a file/directory); extend FileSystem (the file-system instance); extend FileSystemProvider (the engine, registered by scheme); register the FileSystemProvider via META-INF/services.
Next steps: database layer (the tree), basic directory/file operations, storage, tests.
A long and demanding process, but a rewarding one.
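The NIO.2 extension points described in the article are the same machinery behind the JDK's built-in zip file-system provider (itself registered via META-INF/services). As a minimal, stdlib-only sketch of how a custom FileSystem is consumed once registered, the class name `ZipFsDemo` and the archive paths below are illustrative, not from the episode:

```java
import java.io.IOException;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class ZipFsDemo {

    // Write a file into a freshly created zip archive through the
    // FileSystem API, then reopen the archive and read the file back.
    public static String roundTrip(Path zip) throws IOException {
        // "create" -> "true" asks the zipfs provider to create the archive.
        try (FileSystem fs = FileSystems.newFileSystem(zip, Map.of("create", "true"))) {
            Files.writeString(fs.getPath("/hello.txt"), "bonjour");
        }
        try (FileSystem fs = FileSystems.newFileSystem(zip)) {
            return Files.readString(fs.getPath("/hello.txt"));
        }
    }

    public static void main(String[] args) throws IOException {
        Path zip = Files.createTempDirectory("zipfs-demo").resolve("demo.zip");
        System.out.println(roundTrip(zip)); // prints: bonjour
    }
}
```

A custom provider would plug into exactly the same call sites: `FileSystems.newFileSystem` looks up the provider by URI scheme, which is why the article insists on a unique scheme and on the META-INF/services registration.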
An article by Brian Goetz on the future of data-oriented programming in Java: https://openjdk.org/projects/amber/design-notes/beyond-records
Java's Project Amber introduces "carrier classes", an evolution of records that allows more flexibility while keeping the benefits of pattern matching and reconstruction
Records impose strict constraints (immutability, exact representation of state) that limit their use for classes with mutable or derived state
Carrier classes let you declare a complete, canonical state description without requiring the internal representation to match the public API exactly
The "component" modifier on fields lets the compiler automatically derive accessors for the components aligned with the state description
Compact constructors are generalized to carrier classes, automatically generating the initialization of component fields
Carrier classes support deconstruction via pattern matching just like records, making them usable in instanceof and switch
Carrier interfaces allow a state description to be defined on an interface, obliging implementations to provide the corresponding accessors
Extension between carrier classes is possible, with automatic derivation of super() calls when the parent's components are subsumed by the child's
Records become a special case of carrier classes with extra constraints (final, extends Record, component fields required to be private and final)
Compatible evolution of records is improved by allowing components to be appended at the end of the list and partial deconstruction by prefix

How to avoid common pitfalls with JPA and Kotlin: https://blog.jetbrains.com/idea/2026/01/how-to-avoid-common-pitfalls-with-jpa-and-kotlin/
JPA is a Java specification for object-relational persistence, but using it with Kotlin exposes incompatibilities stemming from the two languages' design differences
Kotlin classes are final by default, which prevents JPA from creating the proxies it needs for lazy loading and transactional operations
The kotlin-jpa plugin automatically generates no-arg constructors and makes classes open, resolving the compatibility problems
Kotlin data classes are a poor fit for JPA entities because they generate equals/hashCode over all fields, causing problems with lazy relations
Using lateinit var for relations can throw exceptions if properties are accessed before JPA initializes them
Kotlin's non-nullable types can conflict with JPA's behavior of initializing entities with temporary null values
Accessing the backing field directly in custom getters/setters can bypass JPA's logic and break lazy loading
IntelliJ IDEA 2024.3 introduces inspections that detect these problems automatically and offers quick-fixes
The IDE detects final entities, inappropriate data classes, constructor problems, and incorrect lateinit usage
These new features help developers avoid subtle bugs when using JPA with Kotlin

Libraries

Guide to MapStruct @IterableMapping: https://www.baeldung.com/java-mapstruct-iterablemapping
MapStruct is a Java library for generating bean-to-bean mappers automatically; the @IterableMapping annotation fine-tunes the mapping of collections
The dateFormat attribute formats dates automatically when mapping lists, with no manual loop
The qualifiedByName attribute specifies which custom method to apply to each element of the collection being mapped
Example use: filter sensitive data such as passwords by mapping only certain fields through a dedicated method
The nullValueMappingStrategy attribute controls the behavior when the source collection is null (return null or an empty collection)
The annotation works for all Java collection types (List, Set, etc.) and generates the necessary loop code
Numeric formats can be applied with numberFormat to convert numbers to strings in a specific format
MapStruct generates the complete mapper implementation at compile time, eliminating boilerplate
The annotation can be combined with @Named to create reusable, named mapping methods
Collection mapping supports complex type conversions beyond simple primitive conversions

Accessing Samba files from Java with JCIFS: https://www.baeldung.com/java-samba-jcifs
JCIFS is a Java library for accessing Samba/SMB shares without mounting a network drive, with SMB3 support; spare a thought for those stuck connecting to so-called legacy systems
Configuration requires a CIFS context (CIFSContext) and SmbFile objects representing the remote resources
Authentication goes through NtlmPasswordAuthenticator with a domain, username, and password
The library can list files and folders with listFiles() and check their properties (size, modification date)
Create files with createNewFile() and folders with mkdir(), or mkdirs() to create a whole tree
Deletion via delete(), which can walk and delete entire trees recursively
Files can be copied between Samba shares with copyTo(), but not from the local file system
To copy from the local system, use the SmbFileInputStream and SmbFileOutputStream streams
Operations can target different Samba servers and different shares (anonymous or password-protected)
The library fits into try-with-resources blocks for automatic resource management

Quarkus 3.31: full Java 25 support, new Maven packaging, and Panache Next: https://quarkus.io/blog/quarkus-3-31-released/
Full Java 25 support with runtime and native images
New Maven packaging of type quarkus with an optimized lifecycle for faster builds; see this article for more detail: https://quarkus.io/blog/building-large-applications/
Introduction of Panache Next, a new generation with a better developer experience and a unified ORM/Reactive API
Updates to Hibernate ORM 7.2, Reactive 3.2, Search 8.2
Hibernate Spatial support for geospatial data
Move to Testcontainers 2 and JUnit 6
Security annotations supported on Jakarta Data repositories
OIDC token encryption for custom TokenStateManager implementations
OAuth 2.0 Pushed Authorization Requests support in the OIDC extension
Maven 3.9 is now the required minimum for Quarkus projects

A2A Java SDK 1.0.0.Alpha1: aligning with version 1.0 of the Agent2Agent protocol: https://quarkus.io/blog/a2a-java-sdk-1-0-0-alpha1/
The A2A Java SDK implements the Agent2Agent protocol, which standardizes communication between AI agents to discover capabilities, delegate tasks, and collaborate
Moving to version 1.0 of the specification marks the transition from experimental to production-ready, with breaking changes accepted
Complete modernization of the spec module, with Java records everywhere replacing the previous mix of classes and records for more consistency
Adoption of Protocol Buffers as the source of truth, with MapStruct mappers for conversion and Gson for JSON-RPC
Builders now use static factory methods instead of public constructors, following modern Java best practices
Introduction of three Maven BOMs to simplify dependency management for the core SDK, the extensions, and the reference implementations
Quarkus AgentCard evolves with a supportedInterfaces list
remplaçant url et preferredTransport pour plus de flexibilité dans la déclaration des protocoles Support de la pagination ajouté pour ListTasks et les endpoints de configuration des notifications push avec des wrappers Result appropriés Interface A2AHttpClient pluggable permettant des implémentations HTTP personnalisées avec une implémentation Vert.x fournie Travail continu vers la conformité complète avec le TCK 1.0 en cours de développement parallèlement à la finalisation de la spécification Pourquoi Quarkus finit par "cliquer" : les 10 questions que se posent les développeurs Java - https://www.the-main-thread.com/p/quarkus-java-developers-top-questions-2025 un article qui revele et repond aux questions des gens qui ont utilisé Quarkus depuis 4-6 mois, les non noob questions Quarkus est un framework Java moderne optimisé pour le cloud qui propose des temps de démarrage ultra-rapides et une empreinte mémoire réduite Pourquoi Quarkus démarre si vite ? Le framework effectue le travail lourd au moment du build (scanning, indexation, génération de bytecode) plutôt qu'au runtime Quand utiliser le mode réactif plutôt qu'impératif ? Le réactif est pertinent pour les workloads avec haute concurrence et dominance I/O, l'impératif reste plus simple dans les autres cas Quelle est la différence entre Dev Services et Testcontainers ? Dev Services utilise Testcontainers en gérant automatiquement le cycle de vie, les ports et la configuration sans cérémonie Comment la DI de Quarkus diffère de Spring ? CDI est un standard basé sur la sécurité des types et la découverte au build-time, différent de l'approche framework de Spring Comment gérer la configuration entre environnements ? Quarkus permet de scaler depuis le développement local jusqu'à Kubernetes avec des profils, fichiers multiples et configuration externe Comment tester correctement les applications Quarkus ? 
@QuarkusTest starts the application once for the entire test suite, changing the mental model compared to Spring Boot
- What does Panache really do under the hood? Panache is JPA with strong opinions and its own defaults, wrapping Hibernate in an Active Record style
- Should you use native images, and when? Native images shine for serverless and edge workloads thanks to fast startup and low memory footprint, but not every app benefits
- How does Quarkus integrate with Kubernetes? The framework generates Kubernetes resources automatically and handles health checks and metrics as if natively designed for that ecosystem
- How do you add AI to a Quarkus application? LangChain4j lets you add embeddings, retrieval, guardrails and observability directly in Java, without going through Python
Infrastructure
Alternatives to MinIO: https://rmoff.net/2026/01/14/alternatives-to-minio-for-single-node-local-s3/
- MinIO dropped single-node support at the end of 2025 for commercial reasons, breaking many demos and CI/CD pipelines that used it to emulate S3 locally
- The author is looking for a simple replacement: a Docker image, S3 compatibility, an open source license, easy single-node deployment and an active community
- S3Proxy is very lightweight and easy to configure, probably the simplest option, but it relies on a single contributor
- RustFS is easy to use and includes a GUI, but it is a very young project in alpha, with a recent major security flaw
- SeaweedFS has been around since 2012 with S3 support since 2018, is relatively easy to configure and has a basic web UI
- Zenko CloudServer is an easy drop-in for MinIO, but the documentation and branding (cloudserver/zenko/scality) can be confusing
- Garage requires a complex setup with a TOML file and a separate init container, so it is not a simple drop-in replacement
- Apache Ozone requires at least four nodes to run, far too heavy for simple local use
- The author recommends SeaweedFS and S3Proxy as viable replacements, puts RustFS in the maybe column, and rules out Garage and Ozone for their complexity
- Garage has a very community-minded history: it comes from the https://deuxfleurs.fr/ collective, which offers a distributed cloud without datacenters
Datacenters in space are almost certainly a bad idea: https://taranis.ie/datacenters-in-space-are-a-terrible-horrible-no-good-idea/
- Expert opinion (ex-NASA/Google, PhD in space electronics): space datacenters are a "terrible" idea
- Fundamental incompatibility: electronics (especially AI/GPUs) are ill-suited to the space environment
- Power: limited supply; ISS-style solar is insufficient at AI scale, and nuclear (RTG) is too weak
- Cooling: space is not "cold" — there is no convection, so gigantic radiators are needed (e.g. 531 m² for 200 kW)
- Radiation: causes errors (SEU, SEL) and damage; GPUs are very vulnerable; shielding is heavy and inefficient; radiation-hardened chips are very slow
- Communications: very limited bandwidth (1 Gbps radio vs 100 Gbps terrestrial); lasers depend on atmospheric conditions
- Conclusion: an extremely difficult, expensive project with mediocre performance
Data and Artificial Intelligence
Guillaume built an MCP server for arXiv (the research-paper publication site) in Java with the Quarkus framework: https://glaforge.dev/posts/2026/01/18/implementing-an-arxiv-mcp-server-with-quarkus-in-java/
- An arXiv MCP (Model Context Protocol) server implemented in Java with Quarkus
- Goal: access arXiv publications and illustrate the lesser-known features of the MCP protocol
- Implementation: the Quarkus framework (Java) and its extensive MCP support, assisted by Antigravity (an agentic IDE) for development and integration of the arXiv API.
- Interaction with the arXiv API: HTTP requests, Atom XML format for the results, Jackson XML parser
- MCP features exposed: Tools (@Tool): paper search (search_papers); Resources (@Resource, @ResourceTemplate): the arXiv category taxonomy and article metadata (via a URI template); Prompts (@Prompt): examples for summarizing articles or building search queries
- Configuration: the server can run over STDIO (local) or streamable HTTP (local or remote), with simple configuration in clients such as Gemini CLI
- Conclusion: Quarkus makes it easy to build feature-rich MCP servers, making data and services "AI-ready" with help from AI tools like Antigravity
Anthropic will not put ads in Claude: https://www.anthropic.com/news/claude-is-a-space-to-think
- This is a reaction to OpenAI's non-public plan to use ads to push people toward the paid tier; OpenAI needs cash and is probably the most used free AI product in the world
- Anthropic announces that Claude will remain ad-free to preserve its role as a conversational assistant dedicated to work and deep thinking
- Conversations with Claude are often sensitive, personal, or involve complex software-engineering tasks where ads would be inappropriate
- Conversation analysis shows that a significant share touches on delicate topics, similar to those raised with a trusted advisor
- An advertising model would create incentives that contradict the core principle of being "genuinely helpful" written into Claude's Constitution
- Ads would introduce a potential conflict of interest, where recommendations could be driven by commercial motives rather than the user's interest
- Anthropic's business model rests on enterprise contracts and paid subscriptions, allowing reinvestment in improving Claude
- Anthropic keeps free access to frontier models and offers reduced pricing for NGOs and education in more than 60 countries
- "Agentic" commerce will be supported, but only at the user's initiative — never the advertisers' — to preserve trust
- Third-party integrations such as Figma, Asana or Canva will keep being developed with the user in control
- Anthropic compares Claude to a notebook or a whiteboard: pure thinking spaces, without ads
Infinispan 16.1 is out: https://infinispan.org/blog/2026/02/04/infinispan-16-1
- The release name alone deserves a mention
- Memory bounding per cache and per set of caches, which is not easy to do in Java
- A new OpenAPI API
- The AOT cache shipped in the container images
A local MCP server with just one Java file? It's possible with LangChain4j and JBang: https://glaforge.dev/posts/2026/02/11/zero-boilerplate-java-stdio-mcp-servers-with-langchain4j-and-jbang/
- Rapid creation of MCP servers in Java, without boilerplate
- MCP (Model Context Protocol): a standard for connecting LLMs to tools and data
- The tutorial addresses the lack of simple options for Java developers, given the dominance of Python/TypeScript in the MCP ecosystem
- The solution uses LangChain4j, which ships a new MCP server module for the STDIO protocol, and JBang, which runs Java files like scripts, eliminating build files (pom.xml, Gradle)
- The implementation fits in a single .java file; JBang manages dependencies automatically (//DEPS)
- LangChain4j's @Tool annotation exposes Java methods to LLMs
- StdioMcpServerTransport handles the JSON-RPC communication over standard input/output (STDIO)
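The STDIO discipline such a transport relies on — JSON-RPC frames on stdout, diagnostics on stderr — can be illustrated with a framework-free sketch. The class and the frame helper below are hypothetical, for illustration only; a real server would let LangChain4j's StdioMcpServerTransport do the framing:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class StdioDemo {
    // Build a minimal JSON-RPC-style response frame for a request id.
    // (Hypothetical helper for illustration; no JSON library involved.)
    static String frame(String id, String resultJson) {
        return "{\"jsonrpc\":\"2.0\",\"id\":" + id + ",\"result\":" + resultJson + "}";
    }

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            // Logs go to stderr: stdout is reserved for protocol frames.
            System.err.println("received: " + line);
            System.out.println(frame("1", "{\"ok\":true}"));
        }
    }
}
```

Any stray println to System.out in such a loop would corrupt the protocol stream, which is exactly the pitfall the article warns about.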
- Crucial point: logs must absolutely be redirected to System.err to avoid corrupting System.out, which is reserved for MCP communication (JSON-RPC messages)
- Makes local integration easy with tools like Gemini CLI, Claude Code, etc.
Reciprocal Rank Fusion: a useful, widely used algorithm for hybrid search, blending RAG with keyword search: https://glaforge.dev/posts/2026/02/10/advanced-rag-understanding-reciprocal-rank-fusion-in-hybrid-search/
- RAG: LLM output quality depends on retrieval
- Hybrid search: combining vector and keyword (BM25) retrieval works best
- Challenge: merging scores on different scales; solution: Reciprocal Rank Fusion (RRF)
- RRF: a robust algorithm that merges result lists based solely on document rank, ignoring the scores
- RRF advantages: no score normalization, scalable, an excellent first reranking step
- A common RAG architecture: RRF (broad selection) + cross-encoder / reranking model (fine precision)
- RAG-Fusion: uses an LLM to generate several query variants, then RRF aggregates all the results to reinforce consensus and reduce hallucinations
- Implementation: LangChain4j uses RRF by default to aggregate results from several retrievers
The latest Gemini and Nano Banana features supported in LangChain4j: https://glaforge.dev/posts/2026/02/06/latest-gemini-and-nano-banana-enhancements-in-langchain4j/
- New Nano Banana image models (Gemini 2.5/3.0) for generation and editing (up to 4K)
- "Grounding" via Google Search (for images and text) and Google Maps (location, Gemini 2.5)
- URL context tool (Gemini 3.0) for reading web pages directly
- Multimodal agents (AiServices) able to generate images
- Thinking configuration (chain-of-thought depth) for Gemini 3.0
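Stepping back to the hybrid-search item above: the Reciprocal Rank Fusion step it describes can be sketched in a few lines of plain Java. This is a minimal sketch; k = 60 is the value commonly used in the RRF literature, an assumption here rather than something taken from the article:

```java
import java.util.*;

public class Rrf {
    // Fuse several ranked result lists with Reciprocal Rank Fusion:
    // score(doc) = sum over lists of 1 / (k + rank), ranks starting at 1.
    // Only ranks matter; the retrievers' raw scores are ignored.
    static List<String> fuse(List<List<String>> rankings, int k) {
        Map<String, Double> scores = new HashMap<>();
        for (List<String> ranking : rankings) {
            for (int rank = 0; rank < ranking.size(); rank++) {
                scores.merge(ranking.get(rank), 1.0 / (k + rank + 1), Double::sum);
            }
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        // One list from vector search, one from BM25 keyword search.
        List<String> vector = List.of("docA", "docB", "docC");
        List<String> keyword = List.of("docA", "docC", "docD");
        // docA tops both lists, so it wins; docC appears in both
        // and outranks docB and docD, which each appear only once.
        System.out.println(fuse(List.of(vector, keyword), 60));
    }
}
```

Because only ranks are summed, there is no need to normalize BM25 and cosine-similarity scores onto a common scale — the property that makes RRF such a convenient default fusion step.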
- Enriched metadata: token usage and details of the "grounding" sources
How to set up Gemini CLI as a coding agent in IntelliJ using the ACP protocol: https://glaforge.dev/posts/2026/02/01/how-to-integrate-gemini-cli-with-intellij-idea-using-acp/
- Goal: integrate Gemini CLI into IntelliJ IDEA via the Agent Client Protocol (ACP)
- Prerequisites: IntelliJ IDEA 2025.3+, Node.js (v20+), Gemini CLI
- Steps: install Gemini CLI (npm install -g @google/gemini-cli); locate the gemini executable; configure ~/.jetbrains/acp.json (executable path, --experimental-acp, use_idea_mcp: true); restart IDEA and select "Gemini CLI" in the AI Assistant
- Usage: Gemini interacts with the code and runs commands (with project context)
- Important: make sure the --experimental-acp flag is present in the configuration
Tooling
PipeNet, an alternative (also open source) to LocalTunnel, but more evolved: https://pipenet.dev/
- pipenet: a modern, open source alternative to localtunnel (client + server)
- Use cases: local development (sharing, webhooks), SDK integration, secure self-hosting
- Features: client (exposes local ports, subdomains), server (deployment, custom domains, optimized for single-port cloud hosting)
- Advantages over localtunnel: cloud deployment on a single port, multi-domain support, TypeScript/ESM, active maintenance
- Protocols: HTTP/S, WebSocket, SSE, HTTP streaming
- Integration: CLI or JavaScript SDK
json-io — a library like Jackson or GSON that supports JSON5 and TOON, and could be handy for LLM "structured output" when models don't produce perfect JSON: https://github.com/jdereg/json-io
- json-io: a Java library for JSON/TOON serialization and deserialization
- Handles complex object graphs, cyclic references and polymorphic types
- Full JSON5 support (read and write), including features not covered by Jackson/Gson
- TOON format: a token-oriented notation optimized for LLMs, cutting token usage by 40-50% compared to JSON
- Lightweight: no external dependencies (except java-util), small JAR (~330K)
- Compatible with JDK 1.8 through 24, as well as JPMS and OSGi environments
- Two conversion modes: to typed Java objects (toJava()) or to Maps (toMaps())
- Extensive configuration options via ReadOptionsBuilder and WriteOptionsBuilder
- Optimized for cloud-native deployments and microservice architectures
Use Mailpit and Testcontainers to test your outgoing emails: https://foojay.io/today/testing-emails-with-testcontainers-and-mailpit/ — the article shows it with Spring Boot and without; and here is the Quarkus extension: https://quarkus.io/extensions/io.quarkiverse.mailpit/quarkus-mailpit/?tab=docs
- Testing email sending in development is awkward because you cannot use real SMTP servers
- Mailpit is a test SMTP server that captures emails and offers a web UI to inspect them
- Testcontainers can start Mailpit in a Docker container for integration tests
- The article shows how to configure a Spring Boot application to send emails via JavaMail
- A dedicated Testcontainers module for Mailpit makes it easy to integrate into tests
- The Mailpit container exposes an SMTP port (1025) and an HTTP API (8025) to check the received emails
- Tests can query Mailpit's HTTP API to validate the content of the sent emails
- This approach avoids mocks and actually exercises email delivery
- Mailpit is also useful in local development to view emails without really sending them
- The solution works with any Java framework that supports JavaMail
Architecture
How to scale a system from 0 to 10 million users: https://blog.algomaster.io/p/scaling-a-system-from-0-to-10-million-users
- Philosophy: incremental scalability — fix bottlenecks without over-engineering
- 0-100 users: a single server (app, DB, jobs)
- 100-1K: separate the app and the DB (managed services, pooling)
- 1K-10K: load balancer, multiple app servers (stateless via shared sessions)
- 10K-100K: caching, DB read replicas, CDN (reduce DB load)
- 100K-500K: auto-scaling, stateless applications (JWT authentication)
- 500K-10M: DB sharding, microservices, message queues (asynchronous processing)
- 10M+: multi-region deployment, CQRS, polyglot persistence, custom infrastructure
- Key principles: simplicity, measurement, statelessness is essential, cache and async, cautious sharding, trade-offs (CAP), the cost of complexity
Architecture Patterns 2026 - from hype to reality in the field (part 1/2) - https://blog.ippon.fr/2026/01/30/patterns-darchitecture-2026-part-1/
- The article presents four software architecture patterns for tackling scalability, resilience and business agility in modern systems, with their rationales and their pitfalls — a good refresher
- Event-Driven Architecture enables asynchronous communication between systems via published and consumed events, avoiding direct coupling
- EDA's benefits include independent scaling of components, resilience to failures and easy addition of new use cases
- The API-First pattern, combined with an API Gateway, centralizes API security, routing and observability with a unified catalog
- Backend for Frontend creates channel-specific APIs (mobile, web, partners) to optimize the user experience
- CQRS separates the read and write models with distinct, optimized databases, while Event Sourcing stores all events rather than the current state
- The Saga pattern handles distributed transactions via central orchestration or event-driven choreography to coordinate several microservices
- Common pitfalls include an explosion of fine-grained events, the complexity of distributed debugging, and mishandled eventual consistency
- The flagship technologies are Kafka for event streaming, Kong for the API gateway, EventStoreDB for event sourcing and Temporal for sagas
- These patterns require technical maturity and are not suited to simple CRUD applications or junior teams
Architecture patterns 2026: from hype to reality in the field, part 2 - https://blog.ippon.fr/2026/02/04/patterns-darchitecture-2026-part-2/
- The second part of a practical guide to proven software and system architecture patterns for modernizing and structuring applications in 2026
- Strangler Fig migrates a legacy system progressively by wrapping it piece by piece rather than rewriting everything at once (70% failure rate for big-bang rewrites)
- Anti-Corruption Layer protects your new business domain from external and legacy models by adding a translation layer between systems
- Service Mesh automatically manages inter-service communication in microservice architectures (mTLS security, observability, resilience)
- Hexagonal Architecture separates the business core from technical details via ports and adapters, improving testability and evolvability
- Each pattern is illustrated with a concrete client case, measurable results and a list of implementation pitfalls to avoid
- The 2026 technologies mentioned include Istio and Linkerd for service mesh, LaunchDarkly for feature flags, NGINX and Kong for API gateways
- A final comparison table helps pick the right pattern for the complexity, scope and specific use case of the project
- The article insists on a pragmatic approach: don't use a pattern because it is modern, use it because it solves a real problem
- For simple CRUD-style systems or those with few services, these patterns can introduce needless complexity that is best avoided
Methodologies
The recurring dream of replacing, or even eliminating, developers: https://www.caimito.net/en/blog/2025/12/07/the-recurring-dream-of-replacing-developers.html
- Since 1969, every decade has seen an attempt to reduce the need for developers (COBOL, UML, visual builders... now AI)
- Motivation: executives' frustration with development delays and costs
- Software complexity is intrinsic and intellectual, not a tooling problem
- Each technology wave brings value but does not remove the need for human expertise
- AI assists developers and improves efficiency, but it replaces neither judgment nor the management of complexity
- Demand for software exceeds supply because the main constraint is the thinking needed to manage that complexity
- For executives, the right question: do the tools make our developers more effective on hard problems and reduce repetitive work?
- The "dream" of replacing developers, unattainable as it is, is a driver of innovation that produces valuable tools
How to dig into topics in the age of generative AI — and what about sharing and curating that research? https://glaforge.dev/posts/2026/02/04/researching-topics-in-the-age-of-ai-rock-solid-webhooks-case-study/
- The author's initial research on webhooks in 2019 was a long, manual process
- AI (Deep Research, Gemini, NotebookLM) now makes deep research, topic exploration and sharing of results much easier
- The AI identified and validated key practices for resilient webhook deployments — largely the same ones the author had found before
- AI-generated artifacts: a detailed report, a concise summary, a sketchnote illustration, and even a slide deck
- Guillaume wonders about publicly sharing these AI-generated research reports, while wanting to avoid "AI slop"
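One of the recurring recommendations in that webhook research — retrying failed deliveries with exponential backoff up to a cap — can be sketched in plain Java. The method name and the 1-second base / 60-second cap below are illustrative assumptions, not values taken from the article:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

public class WebhookBackoff {
    // Exponential backoff schedule for webhook delivery retries:
    // base * 2^attempt, capped at maxDelay. Production systems usually
    // add random jitter on top to avoid thundering herds (omitted here).
    static List<Duration> schedule(Duration base, Duration maxDelay, int attempts) {
        List<Duration> delays = new ArrayList<>();
        for (int i = 0; i < attempts; i++) {
            long millis = Math.min(base.toMillis() << i, maxDelay.toMillis());
            delays.add(Duration.ofMillis(millis));
        }
        return delays;
    }

    public static void main(String[] args) {
        // 1s, 2s, 4s, 8s, 16s, 32s, then capped at 60s.
        System.out.println(schedule(Duration.ofSeconds(1), Duration.ofSeconds(60), 7));
    }
}
```

Paired with idempotency keys on the receiving side, a capped schedule like this lets a sender retry aggressively without either overwhelming the consumer or delivering duplicates with side effects.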
Law, society and organization
Software threatened by vibe coding: https://www.techbuzz.ai/articles/we-built-a-monday-com-clone-in-under-an-hour-with-ai
- Two CNBC journalists with no coding experience built a working Monday.com clone in under 60 minutes for 5 to 15 dollars
- The experiment validates the investor fears that triggered a 30% drop in SaaS company stocks
- The AI not only reproduced the basic features, it also researched Monday.com autonomously to identify and recreate its key features
- This technique, called "vibe coding," lets non-developers build applications through plain-English instructions
- The most vulnerable companies are those offering tools "that sit on top of work," such as Atlassian, Adobe, HubSpot, Zendesk and Smartsheet
- Cybersecurity companies like CrowdStrike and Palo Alto are considered better protected thanks to network effects and regulatory barriers
- Systems of record like Salesforce remain harder to replicate because of their integration depth and enterprise data
- At 5 to 15 dollars per build, companies can prototype several custom solutions for less than the cost of a single Monday.com license
- The experiment raises questions about the future of the 5-billion-dollar project-management tool market in the face of generative AI
Conferences
In addition to Aurélie Vache's conference agenda, there is also https://javaconferences.org/ (made by Brian Vermeer) with all upcoming Java conferences!
The list of conferences, from Developers Conferences Agenda/List by Aurélie Vache and contributors:
February 12-13, 2026: Touraine Tech #26 - Tours (France)
February 12-13, 2026: World Artificial Intelligence Cannes Festival - Cannes (France)
February 19, 2026: ObservabilityCON on the Road - Paris (France)
March 6, 2026: WordCamp Nice 2026 - Nice (France)
March 18, 2026: Jupyter Workshops: AI in Jupyter: Building Extensible AI Capabilities for Interactive Computing - Saint-Maur-des-Fossés (France)
March 18-19, 2026: Agile Niort 2026 - Niort (France)
March 20, 2026: Atlantique Day 2026 - Nantes (France)
March 26, 2026: Data Days Lille - Lille (France)
March 26-27, 2026: SymfonyLive Paris 2026 - Paris (France)
March 26-27, 2026: REACT PARIS - Paris (France)
March 27-29, 2026: Shift - Nantes (France)
March 31, 2026: ParisTestConf - Paris (France)
March 31-April 1, 2026: FlowCon France 2026 - Paris (France)
April 1, 2026: AWS Summit Paris - Paris (France)
April 2, 2026: Pragma Cannes 2026 - Cannes (France)
April 2-3, 2026: Xen Spring Meetup 2026 - Grenoble (France)
April 7, 2026: PyTorch Conference Europe - Paris (France)
April 9-10, 2026: Android Makers by droidcon 2026 - Paris (France)
April 9-11, 2026: Drupalcamp Grenoble 2026 - Grenoble (France)
April 16-17, 2026: MiXiT 2026 - Lyon (France)
April 17-18, 2026: Faiseuses du Web 5 - Dinan (France)
April 22-24, 2026: Devoxx France 2026 - Paris (France)
April 23-25, 2026: Devoxx Greece - Athens (Greece)
May 6-7, 2026: Devoxx UK 2026 - London (UK)
May 12, 2026: Lead Innovation Day - Leadership Edition - Paris (France)
May 19, 2026: La Product Conf Paris 2026 - Paris (France)
May 21-22, 2026: Flupa UX Days 2026 - Paris (France)
May 22, 2026: AFUP Day 2026 Lille - Lille (France)
May 22, 2026: AFUP Day 2026 Paris - Paris (France)
May 22, 2026: AFUP Day 2026 Bordeaux - Bordeaux (France)
May 22, 2026: AFUP Day 2026 Lyon - Lyon (France)
May 28, 2026: DevCon 27: I.A. & Vibe Coding - Paris (France)
May 28, 2026: Cloud Toulouse 2026 - Toulouse (France)
May 29, 2026: NG Baguette Conf 2026 - Paris (France)
May 29, 2026: Agile Tour Strasbourg 2026 - Strasbourg (France)
June 2-3, 2026: Agile Tour Rennes 2026 - Rennes (France)
June 2-3, 2026: OW2Con - Paris-Châtillon (France)
June 3, 2026: IA–NA - La Rochelle (France)
June 5, 2026: TechReady - Nantes (France)
June 5, 2026: Fork it! - Rouen - Rouen (France)
June 6, 2026: Polycloud - Montpellier (France)
June 9, 2026: JFTL - Montrouge (France)
June 9, 2026: C: - Caen (France)
June 11-12, 2026: DevQuest Niort - Niort (France)
June 11-12, 2026: DevLille 2026 - Lille (France)
June 12, 2026: Tech F'Est 2026 - Nancy (France)
June 16, 2026: Mobilis In Mobile 2026 - Nantes (France)
June 17-19, 2026: Devoxx Poland - Krakow (Poland)
June 17-20, 2026: VivaTech - Paris (France)
June 18, 2026: Tech'Work - Lyon (France)
June 22-26, 2026: Galaxy Community Conference - Clermont-Ferrand (France)
June 24-25, 2026: Agi'Lille 2026 - Lille (France)
June 24-26, 2026: BreizhCamp 2026 - Rennes (France)
July 2, 2026: Azur Tech Summer 2026 - Valbonne (France)
July 2-3, 2026: Sunny Tech - Montpellier (France)
July 3, 2026: Agile Lyon 2026 - Lyon (France)
July 6-8, 2026: Riviera Dev - Sophia Antipolis (France)
August 2, 2026: 4th Tech Summit on Artificial Intelligence & Robotics - Paris (France)
August 20-22, 2026: 4th Tech Summit on AI & Robotics - Paris (France) & Online
September 4, 2026: JUG Summer Camp 2026 - La Rochelle (France)
September 17-18, 2026: API Platform Conference 2026 - Lille (France)
September 24, 2026: PlatformCon Live Day Paris 2026 - Paris (France)
October 1, 2026: WAX 2026 - Marseille (France)
October 1-2, 2026: Volcamp - Clermont-Ferrand (France)
October 5-9, 2026: Devoxx Belgium - Antwerp (Belgium)
Contact us
To react to this episode, come discuss on the Google group https://groups.google.com/group/lescastcodeurs
Contact us via X/Twitter
https://twitter.com/lescastcodeurs or Bluesky https://bsky.app/profile/lescastcodeurs.com
Submit a crowdcast or a crowdquestion
Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs
All episodes and all the info at https://lescastcodeurs.com/

In Depth
Figma is not the source of truth | Ryan Lucas (VP of Design, Rippling)


Play Episode Listen Later Feb 12, 2026 66:14


In the second Executive Function episode, Brett sits down with Ryan Lucas, VP of Design at Rippling. Before Rippling, Ryan led design at Retool and co-founded multiple startups, bringing a rare founder's perspective to design leadership. A trained industrial designer, Ryan traces the roots of modern software design back 2,000 years to make the case that products must be useful, usable, and desirable - and above all, used. In today's episode, we discuss: Why design leaders who stop designing stop leading The four pillars every design manager must master How to delegate when you're a perfectionist Why leaders need strong opinions How to scale good judgment What Rippling's operating system teaches about speed and commitments References: Airbnb: https://www.airbnb.com/ Amazon: https://www.amazon.com/ Apple: https://www.apple.com/ Asana: https://www.asana.com/ Brian Chesky: https://www.linkedin.com/in/brianchesky/ CrossFit: https://www.crossfit.com/ Figma: https://www.figma.com/ Honeywell: https://www.honeywell.com/ Liz Sanders: https://www.linkedin.com/in/sandersliz/ Nest: https://store.google.com/category/google_nest Notion: https://www.notion.so/ Parker Conrad: https://www.linkedin.com/in/parkerconrad/ Patrick Collison: https://www.linkedin.com/in/patrickcollison/ Retool: https://retool.com/ Rippling: https://www.rippling.com/ Stripe: https://www.stripe.com/ Where to find Ryan: LinkedIn: https://www.linkedin.com/in/ryanwlucas/ Where to find Brett: LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/ Twitter/X: https://twitter.com/brettberson Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter/X: https://twitter.com/firstround YouTube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast Timestamps: 00:00 Intro 00:08 What design actually does at a software company 01:40 The roots of design: from industrial design to software 
03:29 Useful, usable, desirable — and used 04:49 How design relates to engineering, product, and marketing 08:15 Measuring success as a design leader 12:40 The gap between director and VP-level design leadership 14:23 Why great design leaders jump up and down in altitude 19:26 The four pillars every design manager must master 21:34 Over-indexing on quality and the perfectionist trap 25:11 When lowering the quality bar actually cost the business 27:53 How to build judgment through pattern matching 31:25 How Ryan's design team differs from the rest 34:31 Why Figma is not the source of truth 36:32 How Ryan spends his week: recruiting, crits, and staff meetings 38:39 The "Do/Try/Consider" framework 42:12 The most important decisions of the past year 44:05 Should one-on-ones exist? 46:45 How to scale judgment 50:49 What to look for when hiring your first design leader 54:54 Advice for young designers who want to lead 58:24 Demanding yet supportive: A balanced management style 01:02:43 What Rippling's operating system teaches about execution

Digital Insights
Why I'm Not Worried About My AI Dependency

Digital Insights

Play Episode Listen Later Feb 12, 2026 6:57


I have been thinking a lot about AI lately, and specifically about whether we should be worried about our over-reliance on it. Because if I am being completely honest with myself, I use AI for absolutely everything now. Every email that comes in gets pasted into Claude for analysis. Every project brief gets discussed with it. Every piece of writing gets shaped by it. When Claude goes down, my entire workflow grinds to a halt.

So should I be worried about this dependency? Should you?

After spending the last few weeks working through this question, I have landed somewhere that might be useful to share. Because the conversation about AI is happening right now in organizations everywhere, and the dividing line between those who embrace it and those who resist it matters more than most people realize.

The dependency question

When I first noticed how reliant I had become on AI, my immediate reaction was concern. I started thinking about all the things that could go wrong. What if Claude disappeared tomorrow? What if I was outsourcing too much of my thinking? What if I was losing critical skills?

But then I started looking at all the other dependencies in my working life:

If the internet goes down, work stops.
If the power goes off, my life stops.
If AWS servers fail (which seems to happen every other week), half the tools I rely on become useless.
If Figma stops working, design work halts.

Just one more dependency

We have built our entire professional lives on top of dependencies we barely think about anymore. AI is just one more in that stack. The question is not really whether we should be dependent on it, because that ship has already sailed for most of us. The question is what kind of dependency we are building.

The thinking question

The more interesting concern for me is whether AI makes us stop thinking. I have heard this worry from a lot of people, and I understand where it comes from. When you watch someone paste a problem into ChatGPT and blindly implement whatever comes back, it does look like they have outsourced their brain. But I think this misunderstands what most of us are actually doing with AI.

Three layers of thinking

There are different levels of thinking that happen in any given day:

Strategic thinking about project direction, what problems need solving, and what approach makes sense.
Analytical thinking about whether an idea is sound, whether evidence supports a conclusion, and whether a design solves the actual problem.
Mundane thinking about how to word an email, how to structure a document, and how to format a proposal.

AI as a thinking partner

What I have found is that AI handles that bottom layer beautifully. When a client sends me a long, rambling email with five different questions buried in three paragraphs of context, I no longer spend mental energy untangling it. I paste it into Claude and say, "Summarize the key questions here." Then I think about my answers. I tell Claude what I think about each point. Sometimes I ask for its perspective on one or two where I am genuinely uncertain, not because I cannot think through it myself, but because having a sounding board helps me think better.

When I worked in an agency, I had colleagues for this. I would turn to Marcus or Chris and say, "What do you think about this?" I do not have that anymore. AI fills that gap. It does not replace my thinking. It helps me think more clearly by taking away the low-level cognitive load and giving me something to bounce ideas against.

The value question

Where this gets really interesting is in what it lets me deliver to clients.

The landing page playbook example

I worked on a project recently where a client wanted to improve the conversion rate of their landing pages. They had a budget that, in the past, would have stretched to maybe three or four sample landing pages and a conversation about why I built them that way. That would have been useful, but limited. They would have had some examples to work from, but not much guidance on how to replicate the approach themselves.

With AI, I was able to create an entire playbook: detailed guidelines for every component, design principles explained with examples, a system they could use again and again. I delivered roughly four times the value in about a third of the time it would have taken me before. The strategic thinking was all mine. The understanding of what makes landing pages convert came from 30 years of doing this work. But the documentation, the articulation, the packaging of that knowledge into something comprehensive and usable came from working with AI.

Why clients still need expertise

Most of my clients will not do this work themselves, even with AI:

They do not know what questions to ask.
They do not have the pattern recognition that comes from seeing hundreds of projects.
They cannot evaluate whether the output is actually good or just sounds convincing.
They do not have the time to review and iterate on the output to improve it.

That is what they are paying me for. AI does not replace that expertise. It amplifies what I can do with it.

The real conversation

I think what bothers me most about the anti-AI sentiment I see is that it misses the point. People post about "AI slop" and declare themselves "AI-free" as if that were some kind of badge of honor.

The conversation should not be about whether to use AI; that question has already been answered by the market. It should be about how to use it well: how to maintain the strategic thinking while leveraging the tool, how to keep the human insight while letting the machine handle the grunt work, how to deliver more value in less time without sacrificing quality.

Because in my experience, the people who need UX professionals are not suddenly going to do the work themselves just because AI exists. They still do not have the time. They still do not know what questions to ask. They still cannot evaluate quality. What changes is that the UX professionals who embrace AI can deliver significantly more value than those who resist it.

The symbiosis advantage

I am not threatened by AI. I am empowered by it:

It lets me hold far more complexity in my head than I could before.
It lets me process larger amounts of information.
It lets me deliver more refined, more thorough, more valuable work.

All the things AI does badly (high-level strategy, judging quality, understanding human needs, driving projects forward) are exactly the things clients need me for. So I am leaning into this dependency. Deliberately. Because it allows me to deliver more value in less time. My clients get better work, delivered faster, for the same investment. That is why I am in business. AI has become another tool in my arsenal, like Figma or analytics platforms or any of the other things I rely on to do my job well.

When Shift Happens Podcast
E158: Avichal Garg, Electric Capital Co-Founder: Why Bitcoin Hitting $10 Million Is Less Crazy Than You Think

When Shift Happens Podcast

Play Episode Listen Later Feb 12, 2026 71:40


Avichal Garg is co-founder of Electric Capital, one of crypto's most respected early-stage funds, and an early backer of Solana, Kraken, Figma, and Bitwise.
THE SHIFT NEWSLETTER

The Official SaaStr Podcast: SaaS | Founders | Investors
SaaStr 841: Going From Blobs to Billions. Clay's Co-Founder Breaks Down Inbound, Outbound, and AI-Powered Sales.

The Official SaaStr Podcast: SaaS | Founders | Investors

Play Episode Listen Later Feb 11, 2026 32:44


SaaStr 841: Going From Blobs to Billions. Clay's Co-Founder Breaks Down Inbound, Outbound, and AI-Powered Sales. Clay's Co-Founder Varun Anand takes the stage at SaaStr to break down how the company went from paying for claymation blobs before generating any revenue to powering growth workflows for companies like Cursor, Anthropic, and Figma. He explains why brand has always been core to Clay's identity, how their CFO roast videos and creative campaigns are actually capturing mindshare in a world where B2B marketing is painfully boring, and why he pushes back on the "use AI for everything" mentality that's taken over the industry. Varun does a full live demo building an inbound qualification workflow from scratch using real audience volunteers, walking through everything from lead enrichment and waterfall data sourcing to AI-powered scoring, personalized meme generation, research brief creation, and CRM updates. He also brings audience members on stage to do live growth hacking for their actual business problems. Beyond the product, this session goes deep on hiring. Varun shares the origin story of the GTM Engineer role, how it went from an internal job title for Clay's non-traditional sales team to the most in-demand position in B2B SaaS, and what he actually looks for when evaluating candidates (hint: it's creativity, not a traditional sales background). He talks about Clay's take-home process, work trials, why they hire generalists who commit to specific roles, and the surprising backgrounds of some of their best hires. Whether you're building out your go-to-market motion, thinking about how to use AI without losing what makes your brand unique, or just trying to figure out what a GTM Engineer actually does, this session covers it all. --------------------- This episode is Sponsored in part by HappyFox: Imagine having AI agents for every support task — one that triages tickets, another that catches duplicates, one that spots churn risks. 
That'd be pretty amazing, right? HappyFox just made it real with Autopilot. These pre-built AI agents deploy in about 60 seconds and run for as low as 2 cents per successful action. All of it sits inside the HappyFox omnichannel, AI-first support stack — Chatbot, Copilot, and Autopilot working as one. Check them out at happyfox.com/saastr   ---------------------   Hey everybody, the biggest B2B + AI event of the year will be back - SaaStr AI in the SF Bay Area, aka the SaaStr Annual, will be back in May 2026.    With 68% VP-level and above, 36% CEOs and founders and a growing 25% AI-first professional, this is the very best of the best S-tier attendees and decision makers that come to SaaStr each year.     But here's the reality, folks: the longer you wait, the higher ticket prices can get. Early bird tickets are available now, but once they're gone, you'll pay hundreds more so don't wait.    Lock in your spot today by going to podcast.saastrannual.com to get my exclusive discount SaaStr AI SF 2026. We'll see you there.

Category Visionaries
How Maxima moved upmarket from 10-person startups to 500-1,000 employee companies after early customer feedback | Yogi Goel (Maxima)

Category Visionaries

Play Episode Listen Later Feb 9, 2026 22:51


Maxima is building AI agents that automate enterprise accounting while maintaining the auditability and control standards finance teams require. In a recent episode of BUILDERS, we sat down with Yogi Goel, CEO and Co-Founder of Maxima, to explore his eight-year journey at Rubrik from Series C through IPO, and how those lessons shaped his approach to solving the 70-80% of finance time currently wasted on manual work. Topics Discussed: Why Rubrik's approach—entering stagnant markets with first-principles thinking—became Maxima's blueprint Securing $3K-$5K POC commitments from Figma mockups before writing code Why Scale AI and Rippling rejected a point solution and demanded 3-4 modules from day one The compound startup model: building multiple products simultaneously to meet buyer expectations How 17% of CFOs are adopting AI tools today (vs 51% in software development) Why finance teams view AI agents as "digital college freshmen" who need proof of work Hiring from YouTube Studios, Apple, and Robinhood instead of legacy finance software companies How NetSuite World conference booth sizes revealed the data integration infrastructure gap The $3K-$5K validation threshold that proved finance pain was urgent enough to pay pre-product. GTM Lessons For B2B Founders: Demand generation unlocks engineering potential: Yogi learned from his Rubrik mentors: "focus on demand and if you have great engineers then they will solve the problems." Maxima built products in 2-3 months they didn't initially know were technically feasible—because customer demand pulled the engineering team forward. For founders with strong technical teams, customer demand should drive the roadmap, not engineering's comfort zone. Trust your engineers to solve hard problems when customers are waiting. $3K-$5K is the pre-product validation threshold: Before writing any code, Yogi secured POC commitments at this price point based solely on Figma mockups. This isn't about revenue—it's about proving urgency.
Verbal interest means nothing. Small pilot commitments mean "we'll try it someday." But $3K-$5K pre-product means "this problem is urgent enough to pay before seeing a working solution." Use this threshold to separate real pain from polite interest. Sophisticated buyers will reject your narrow MVP: Scale AI and Rippling told Maxima explicitly: "If you will only build this one thing, we will not buy. You have to commit to building three, four modules." Conventional wisdom says start narrow, but enterprise buyers with complex workflows won't adopt point solutions that create new integration headaches. When sophisticated buyers articulate their real buying criteria, ignore the startup playbook. Yogi built a "compound startup" with 4-5 modules from day one because that's what the market demanded. Target acute pain over easy access: Early-stage companies (10-30 people) were easier to reach but finance wasn't urgent enough. At that scale, it's "build product, ship product"—finance operations aren't broken enough to warrant urgent attention. Companies at 500-1,000+ employees have finance teams drowning in manual work that prevents strategic contribution. Target where pain justifies urgent action and budget exists, not where calendar access is easiest. Hire intensity and first-principles thinking over domain knowledge: Maxima deliberately hired zero engineers from legacy finance software companies. Their frontend engineer came from YouTube Studios. Others came from Apple, Robinhood, Netflix—none with financial product experience. Yogi's three hiring criteria: "incredible intensity, huge confidence in themselves, and fast thinking mode." Domain expertise creates pattern-matching to old solutions. First-principles thinking creates breakthrough products. One team member didn't finish high school but is "one of the best out there." Make AI explainable or finance teams won't adopt: Finance teams adopted faster than expected because Maxima showed every calculation step. 
"If they can prove by looking at the Math, you know, 18 plus 88 plus 36 is X. And I can see the step of the work, they are willing to give it to them." This isn't about fancy UX—it's about auditor-grade proof of work. Finance professionals won't trust black box outputs. Build transparency into the product architecture, not as an afterthought. This explainability became Maxima's competitive moat. Conference booth sizes reveal infrastructure gaps: At NetSuite World, the largest booths weren't ERP vendors or payment processors—they were data integration companies. This single observation validated that enterprises are desperately solving data fragmentation problems. Companies manually download from Stripe, Snowflake, Salesforce weekly to build Excel pivots. Maxima invested in upstream integrations as core infrastructure from day one. Use industry conferences to validate where companies are spending money on workarounds—that's where infrastructure gaps exist. // Sponsors: Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co // Don't Miss: New Podcast Series — How I Hire Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

Supra Insider
#96: Inside Magic Patterns: Why frontend focus helps win over product teams | Alexander Danilowicz (CEO & Co-founder @ Magic Patterns)

Supra Insider

Play Episode Listen Later Feb 9, 2026 68:03


What if the best product decision is saying "no" to what everyone else is building?

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Alexander Danilowicz, founder and CEO of Magic Patterns, to unpack why his AI prototyping tool is the only one refusing to add backend features, even when competitors like Lovable, Bolt, and v0 are racing in that direction. Alex explains how focusing exclusively on front-end code leads to higher-quality prototyping, why many use cases don't actually need a database, and how product teams at large companies can't risk connecting production data to prototyping tools anyway.

They explore what it takes to maintain conviction when investors, customers, and the entire market seem to be moving the opposite way. Alex shares how using your own product daily keeps you honest about what's actually broken, why real user feedback looks different from "fake" feature requests (like "add dark mode"), and how a strong co-founding relationship helps you resist temptation when external pressure mounts.

If you're a product leader wrestling with feature requests that don't align with your vision, trying to figure out when to follow the market versus when to trust your gut, or building tools in the AI coding space, this episode is for you.

All episodes of the podcast are also available on Spotify, Apple, and YouTube. New to the pod? Subscribe below to get the next episode in your inbox.

MacVoices Video
MacVoices #26058: Live! - Adobe's Past, Present, and Future, and The Thinking Game

MacVoices Video

Play Episode Listen Later Feb 6, 2026 43:29


The panel looks at Adobe's past dominance, current challenges, and uncertain future as AI tools and lower-cost alternatives reshape the creative landscape. Chuck Joiner, David Ginsburg, Eric Bolden, Marty Jencius, Web Bixby, Jim Rea, and Jeff Gamet cover how generative AI, subscription fatigue, collaboration gaps, and competitors like Affinity, Canva, and Figma are changing who really needs Adobe services such as Creative Cloud, while reflecting on historical tech shifts and whether Adobe's next chapter has already been written. A documentary recommendation wraps up this session. MacVoices is supported by Incogni. Take your personal data back with Incogni! Get 60% off an annual plan at https://incogni.com/chuck and use code "chuck" at checkout. Show Notes: Chapters: 00:00 Adobe's past, present, and AI disruption 01:12 How AI fits into professional creative workflows 03:09 Adobe's difficulty pivoting in a fast-moving market 04:29 Desktop publishing history: PageMaker, Quark, and InDesign 07:09 Public perception of AI "replacing" Adobe tools 09:26 Photoshop Elements and missed marketing opportunities 12:41 Subscription fatigue and rising alternatives 14:04 Collaboration challenges and Canva/Affinity momentum 17:45 Shift from print-centric tools to digital workflows 22:13 Designers leaving Creative Cloud behind 25:12 Adobe's legacy status and future positioning 31:31 The Thinking Game documentary recommendation Links: Adobe's stock has slumped more than 45% since the end of 2023, reflecting analyst concerns over the threat of AI-driven disruption to SaaS companies https://www.bloomberg.com/news/articles/2026-01-13/adobe-analysts-turn-most-bearish-since-2013-as-ai-threat-looms The Thinking Game | Full documentary | Tribeca Film Festival official selection https://www.youtube.com/watch?v=d95J8yzvjbQ Guests: Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that. You can catch up with him on Facebook, Twitter, and LinkedIn, but prefers
Bluesky. Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS, and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession 'firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net.
Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon.
Support:
Become a MacVoices Patron on Patreon: http://patreon.com/macvoices
Enjoy this episode? Make a one-time donation with PayPal
Connect:
Web: http://macvoices.com
Twitter: http://www.twitter.com/chuckjoiner and http://www.twitter.com/macvoices
Mastodon: https://mastodon.cloud/@chuckjoiner
Facebook: http://www.facebook.com/chuck.joiner
MacVoices Page on Facebook: http://www.facebook.com/macvoices/
MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice
LinkedIn: https://www.linkedin.com/in/chuckjoiner/
Instagram: https://www.instagram.com/chuckjoiner/
Subscribe:
Audio in iTunes
Video in iTunes
Subscribe manually via iTunes or any podcatcher:
Audio: http://www.macvoices.com/rss/macvoicesrss
Video: http://www.macvoices.com/rss/macvoicesvideorss

MacVoices Audio
MacVoices #26058: Live! - Adobe's Past, Present, and Future, and The Thinking Game

MacVoices Audio

Play Episode Listen Later Feb 6, 2026 43:27


The panel looks at Adobe's past dominance, current challenges, and uncertain future as AI tools and lower-cost alternatives reshape the creative landscape. Chuck Joiner, David Ginsburg, Eric Bolden, Marty Jencius, Web Bixby, Jim Rea, and Jeff Gamet cover how generative AI, subscription fatigue, collaboration gaps, and competitors like Affinity, Canva, and Figma are changing who really needs Adobe services such as Creative Cloud, while reflecting on historical tech shifts and whether Adobe's next chapter has already been written. A documentary recommendation wraps up this session. MacVoices is supported by Incogni. Take your personal data back with Incogni! Get 60% off an annual plan at https://incogni.com/chuck and use code "chuck" at checkout. Show Notes: Chapters: 00:00 Adobe's past, present, and AI disruption 01:12 How AI fits into professional creative workflows 03:09 Adobe's difficulty pivoting in a fast-moving market 04:29 Desktop publishing history: PageMaker, Quark, and InDesign 07:09 Public perception of AI "replacing" Adobe tools 09:26 Photoshop Elements and missed marketing opportunities 12:41 Subscription fatigue and rising alternatives 14:04 Collaboration challenges and Canva/Affinity momentum 17:45 Shift from print-centric tools to digital workflows 22:13 Designers leaving Creative Cloud behind 25:12 Adobe's legacy status and future positioning 31:31 The Thinking Game documentary recommendation Links: Adobe's stock has slumped more than 45% since the end of 2023, reflecting analyst concerns over the threat of AI-driven disruption to SaaS companies https://www.bloomberg.com/news/articles/2026-01-13/adobe-analysts-turn-most-bearish-since-2013-as-ai-threat-looms The Thinking Game | Full documentary | Tribeca Film Festival official selection https://www.youtube.com/watch?v=d95J8yzvjbQ Guests: Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that. You can catch up with him on Facebook, Twitter, and
LinkedIn, but prefers Bluesky. Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast. Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet. David Ginsburg is the host of the weekly podcast In Touch With iOS where he discusses all things iOS, iPhone, iPad, Apple TV, Apple Watch, and related technologies. He is an IT professional supporting Mac, iOS, and Windows users. Visit his YouTube channel at https://youtube.com/daveg65 and find and follow him on Twitter @daveg65 and on Mastodon at @daveg65@mastodon.cloud. Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life. His technology interest led him to develop the counseling profession 'firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net.
Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC). Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon.
Support:
Become a MacVoices Patron on Patreon: http://patreon.com/macvoices
Enjoy this episode? Make a one-time donation with PayPal
Connect:
Web: http://macvoices.com
Twitter: http://www.twitter.com/chuckjoiner and http://www.twitter.com/macvoices
Mastodon: https://mastodon.cloud/@chuckjoiner
Facebook: http://www.facebook.com/chuck.joiner
MacVoices Page on Facebook: http://www.facebook.com/macvoices/
MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice
LinkedIn: https://www.linkedin.com/in/chuckjoiner/
Instagram: https://www.instagram.com/chuckjoiner/
Subscribe:
Audio in iTunes
Video in iTunes
Subscribe manually via iTunes or any podcatcher:
Audio: http://www.macvoices.com/rss/macvoicesrss
Video: http://www.macvoices.com/rss/macvoicesvideorss

Redefining AI - Artificial Intelligence with Squirro
Full Video Episode - The Great AI Reshuffle 2026 Predictions - Who Wins When Systems Change - Sangeet Paul Choudary

Redefining AI - Artificial Intelligence with Squirro

Play Episode Listen Later Feb 5, 2026 19:19


In this episode of Redefining AI, host Lauren Hawker Zafer speaks with Sangeet Paul Choudary, the bestselling author of Platform Revolution and the 2025 Thinkers50 Strategy Award winner for his latest book, Reshuffle.

Sangeet argues that we are currently repeating the early mistakes of the Cloud era, viewing AI through the narrow lens of productivity and intelligence benchmarks (like GPT-5) rather than the structural reorganization of work itself. Lauren and Sangeet dive deep into why the next 18 months will bring a massive "narrative correction" as organizations move from asking what AI is to what it does to their capital allocation and organizational architecture.

In this episode, you will learn:
The Intelligence Trap: Why focusing on brute-force AI performance is a distraction from true system restructuring.
The Workforce Split: How to lead through the divide of "Blind Believers" and "Blind Rejectors."
The Reshuffle Framework: Why AI is the "missing glue" for complex systems and how to redistribute work now that knowledge is no longer scarce.
AI-Native vs. AI-Adopter: How to tell if a company is truly transforming or just "tacking on" tools (the Adobe vs. Figma distinction).

Sangeet Paul Choudary breaks down the fundamental shift from AI-adopting to AI-native, and unpacks the most relevant issue in 2026: in an AI-adopting company, the person is the "node" and AI is the tool. In an AI-native company, the system is the node, and work is redistributed based on where intelligence (human or artificial) is most effective.

Here is a sharp, condensed way to state that principle: the true shift isn't about augmenting individuals; it's about rethinking the architecture of the organization itself. If you assume work must still be organized around individual silos, you aren't being AI-native.
Real transformation happens when you stop asking how AI helps the person and start asking how work should be redistributed and restructured now that intelligence is a decentralized utility.

00:00 – Sangeet Paul Choudary, author of Reshuffle, 2025 Thinkers50 Strategy Award winner
01:30 – The Problem with the "Intelligence-First" AI Narrative
02:50 – Beyond Intelligence: How AI Restructures Organizations
04:00 – The Winners and Losers of the AI Value Pie
05:10 – Moving from Task-Level AI to System-Level Assumptions
06:20 – Lessons from the Cloud: Why History Rhymes with AI
08:00 – Adobe vs. Figma: A Case Study in Native Architecture
09:40 – Reimagining Returns: Breaking the Productivity Optimization Loop
11:15 – 2025 Prediction: The Tension, Transition, and Transformation Phases
12:50 – Avoiding the Split: Blind Believers vs. Blind Rejectors
14:10 – The 18-Month Narrative Correction: From GPT-5 Hype to ROI Reality
15:30 – How to Spot a Genuinely AI-Native Company
17:00 – Rethinking Organizational Design: Distributed vs. Individual Work
18:40 – Why AI is a Strategy and Capital Allocation Decision (Not IT)
19:50 – Closing: Aligning Sales and Leadership with the New AI Architecture

Developer Voices
Building the SpacetimeDB Database, Game-First (with Tyler Cloutier)

Developer Voices

Play Episode Listen Later Feb 4, 2026 101:05


Eighteen months ago, Tyler Cloutier appeared on the show with what sounded like an ambitious (some might say crazy) plan: build a new distributed database from scratch, then use it to power a massively multiplayer online game. That's two of the hardest problems in software, tackled simultaneously. But sometimes the best infrastructure comes from solving your own impossible problems.

The game, Bitcraft, has now launched on Steam. SpacetimeDB has hit version 1.0. And Tyler returns to share what actually happened when theory met production reality. We cover the launch day performance disasters (including a cascading failure caused by logging while holding a lock), why single-threaded execution running entirely from L1 cache can outperform sophisticated multi-threaded approaches by two orders of magnitude, and how the database's reducer model - borrowed from functional programming - enables zero-downtime code deployments. We also get into how SpacetimeDB is expanding beyond games with TypeScript support and React hooks that make building real-time multiplayer web apps surprisingly simple.

If you're building anything where multiple users need to see the same data update in real time - which, as Tyler points out, describes most successful applications from Figma to Facebook - SpacetimeDB's approach of treating every app as a multiplayer game might be worth understanding.

--

Support Developer Voices on Patreon: https://patreon.com/DeveloperVoices
Support Developer Voices on YouTube: https://www.youtube.com/@DeveloperVoices/join
SpacetimeDB: https://spacetimedb.com/
SpacetimeDB on GitHub: https://github.com/clockworklabs/SpacetimeDB
Our previous episode with Tyler: https://youtu.be/roEsJcQYjd8
Clockwork Labs: https://clockworklabs.io/
Bitcraft Online: https://bitcraftonline.com/
Bitcraft on Steam: https://store.steampowered.com/app/3454650/BitCraft_Online
WebAssembly: https://webassembly.org/
Flecs (ECS for C/C++): https://www.flecs.dev/flecs/
TigerBeetle: https://tigerbeetle.com/
CockroachDB: https://www.cockroachlabs.com/
Google Cloud Spanner: https://cloud.google.com/spanner
Erlang: https://www.erlang.org/
Apache Kafka: https://kafka.apache.org/
Tyler Cloutier on X: https://x.com/TylerFCloutier
Tyler Cloutier on LinkedIn: https://www.linkedin.com/in/tylercloutier/

--

Kris on Bluesky: https://bsky.app/profile/krisajenkins.bsky.social
Kris on Mastodon: http://mastodon.social/@krisajenkins
Kris on LinkedIn: https://www.linkedin.com/in/krisjenkins/

0:00 Intro
2:01 The Architecture of SpacetimeDB
5:01 Client-Side Prediction in Multiplayer Games
11:00 Reducers and Event Streaming
15:00 Launching Bitcraft on Steam
19:00 Debugging Launch Performance Problems
26:56 Hot-Swapping Server Code Without Downtime
30:01 In-Memory Tables and Query Optimization
42:00 Is SpacetimeDB Only For Games?
51:00 Performance Benchmarking For Web Workloads
55:00 Why Single-Threaded Beats Multi-Threaded
1:00:01 Multi-Version Concurrency Control Trade-offs
1:05:01 Sharding Data Across Multiple Nodes
1:10:56 Inter-Module Communication and Actor Models
1:17:00 Replication and the Write-Ahead Log
1:24:00 Supported Client Languages
1:29:00 Getting Started With SpacetimeDB
1:39:02 Outro
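The reducer model described above can be sketched in miniature: every write to shared state goes through a named, deterministic transaction function, and because callers look reducers up by name on each call, a new version of the code can be registered between transactions while the stored state stays put. This is a hypothetical Python sketch of the pattern only, not SpacetimeDB's actual API (its modules are compiled to WebAssembly and use the project's own table and reducer definitions); all names here are illustrative.

```python
# Sketch of the "reducer" pattern: state lives in the database, and all
# writes go through named transaction functions that can be hot-swapped.
# Names are hypothetical; this is not the SpacetimeDB API.
from dataclasses import dataclass, field

@dataclass
class MiniDB:
    players: dict = field(default_factory=dict)   # shared state: name -> (x, y)
    reducers: dict = field(default_factory=dict)  # name -> transaction function

    def register(self, name, fn):
        # Re-registering under the same name is a "deploy": state is untouched.
        self.reducers[name] = fn

    def call(self, name, *args):
        # Each call is one transaction against the shared state.
        self.reducers[name](self, *args)

def move_player(db, name, dx, dy):
    x, y = db.players.get(name, (0, 0))
    db.players[name] = (x + dx, y + dy)

db = MiniDB()
db.register("move_player", move_player)
db.call("move_player", "tyler", 3, 4)
print(db.players["tyler"])  # (3, 4)

# "Deploy" version 2 without a restart or state migration: the new rule
# clamps movement to one unit per axis per call.
def move_player_v2(db, name, dx, dy):
    clamp = lambda d: max(-1, min(1, d))
    x, y = db.players.get(name, (0, 0))
    db.players[name] = (x + clamp(dx), y + clamp(dy))

db.register("move_player", move_player_v2)
db.call("move_player", "tyler", 5, 0)
print(db.players["tyler"])  # (4, 4)
```

Because clients only ever invoke reducers by name and never hold references to the code itself, swapping the implementation between transactions is invisible to them, which is the essence of the zero-downtime deployments discussed in the episode.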

Marketing Against The Grain
Stop Prompting: Build an AI "Design App" Instead (Demo)

Marketing Against The Grain

Play Episode Listen Later Feb 3, 2026 41:56


Want access to Lior Albeck's AI toolkit? Get it here: https://clickhubspot.com/eb1adb Ep. 397 If you're not building systems for creative work, are you falling behind? Kipp and Lior Albeck (CEO and Co-Founder of Weavy) dive into how AI is radically changing creative marketing and why system-building is now essential to stay competitive. Learn how to make the mindset shift every team needs, how to future-proof your creative assets, and the secrets behind building an AI-native company—plus, practical ways anyone can start systematizing their creative process today. Mentions Lior Albeck https://www.linkedin.com/in/lioralbeck/ Weavy https://www.weavy.ai/ Figma https://www.figma.com/ Zapier https://zapier.com/ Nano Banana https://gemini.google/overview/image-generation/ Get our guide to build your own Custom GPT: https://clickhubspot.com/customgpt We're creating our next round of content and want to ensure it tackles the challenges you're facing at work or in your business. To understand your biggest challenges we've put together a survey and we'd love to hear from you! https://bit.ly/matg-research Resource [Free] Steal our favorite AI Prompts featured on the show! Grab them here: https://clickhubspot.com/aip We're on Social Media! Follow us for everyday marketing wisdom straight to your feed YouTube: ​​https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg  Twitter: https://twitter.com/matgpod  TikTok: https://www.tiktok.com/@matgpod  Join our community https://landing.connect.com/matg Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934   If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support. 
Host Links: Kipp Bodnar, https://twitter.com/kippbodnar   Kieran Flanagan, https://twitter.com/searchbrat  ‘Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by Hubspot Media // Produced by Darren Clarke.

Supra Insider
#95: How to find your authentic voice online without faking it | Mallory Contois (VP Growth @ Maven, Ex-Pinterest)

Supra Insider

Play Episode Listen Later Feb 2, 2026 68:44


What if the thing holding you back from building a public presence is exactly what would make you stand out?

In this episode of Supra Insider, Marc Baselga and Ben Erez sit down with Mallory Contois, VP of Growth at Maven, to unpack why this is the perfect moment for product leaders to start sharing publicly—even if they don't feel polished, interesting, or like they have it all figured out. Mallory explains how we're leaving the era of glossy, aspirational influencer content and entering one where audiences crave authenticity, relatability, and actionable takeaways.

They tackle the three biggest mindsets that hold people back: the “influencer hater” who rejects performative content, the person who doesn't think they're interesting enough, and the professional who believes their work should speak for itself. Mallory breaks down why good work alone isn't enough, why consistency beats virality, and how to find your authentic voice without trying to game algorithms or chase trends.

If you're a product leader who's been holding back from sharing publicly, wondering whether anyone would find your perspective valuable, or questioning whether personal branding is worth the effort—this episode is for you.

All episodes of the podcast are also available on Spotify, Apple and YouTube.

New to the pod? Subscribe below to get the next episode in your inbox

Lenny's Podcast: Product | Growth | Career
Dr. Becky on the surprising overlap between great parenting and great leadership

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Feb 1, 2026 91:56


Dr. Becky Kennedy is a clinical psychologist, the bestselling author of Good Inside, and the founder of a parenting platform used by millions. Known for her practical, psychology-based approach to parenting, Dr. Becky shares how the same principles that help parents raise resilient children can make you a much more effective leader. In this conversation, she breaks down why all human systems—whether families or companies—operate on the same fundamental principles, and how understanding these dynamics can make you more effective in every relationship.

We discuss:
1. Why repair—not perfection—defines strong leadership
2. Why you need to connect before you correct to build cooperation and trust
3. The “most generous interpretation” framework for handling difficult behaviors
4. How to correctly set boundaries (vs. making requests)
5. The power of “I believe you, and I believe in you”
6. What it looks like to be a “sturdy” leader

—
Brought to you by:
Merge—Fast, secure integrations for your products and agents: https://merge.dev/lenny
Metaview—The AI platform for recruiting: https://metaview.ai/lenny
Framer—Build better websites faster: https://framer.com/lenny

—
Episode transcript: https://www.lennysnewsletter.com/p/dr-becky-on-the-surprising-overlap

—
Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

—
Where to find Dr. Becky Kennedy:
• X: https://x.com/GoodInside
• LinkedIn: https://www.linkedin.com/in/drbecky
• Instagram: https://www.instagram.com/drbeckyatgoodinside
• TikTok: https://www.tiktok.com/@drbeckyatgoodinside
• Website: https://www.goodinside.com

—
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

—
In this episode, we cover:
(00:00) Introduction to Dr. Becky Kennedy
(05:14) Connecting parenting and leadership
(08:40) The power of repair
(11:05) Connecting before correcting
(17:45) Good Inside framework at work
(22:08) The most generous interpretation (MGI)
(25:46) Curiosity over judgment
(27:07) Understanding behavior change
(31:08) What potty training can teach us about workplace behavior
(34:40) Naming your intention
(35:41) Sturdy leadership
(40:52) How to set boundaries well
(46:33) The role of leadership and consensus
(50:50) The importance of being “locatable”
(52:40) A powerful story of betrayal and realization
(57:12) Building resilience over happiness
(01:00:34) The power of the phrase “I believe you, and I believe in you.”
(01:09:08) The Good Inside community and resources
(01:16:22) AI corner
(01:19:52) Good Inside's mission
(01:22:26) Lightning round and final thoughts

—
Referenced:
• Shreyas Doshi on pre-mortems, the LNO framework, the three levels of product work, why most execution problems are strategy problems, and ROI vs. opportunity cost thinking: https://www.lennysnewsletter.com/p/episode-3-shreyas-doshi
• Radical Candor: From theory to practice with author Kim Scott: https://www.lennysnewsletter.com/p/radical-candor-from-theory-to-practice
• From ChatGPT to Instagram to Uber: The quiet architect behind the world's most popular products | Peter Deng: https://www.lennysnewsletter.com/p/the-quiet-architect-peter-deng
• Punch: https://en.wikipedia.org/wiki/Punch_(play)
• Figma: https://www.figma.com
• Andrew Hogan on LinkedIn: https://www.linkedin.com/in/ahhogan
• Replit: https://replit.com
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• Lovable: https://lovable.dev
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (co-founder and CEO): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Claude: https://claude.ai
• ChatGPT: https://chatgpt.com
• Secrets We Keep on Netflix: https://www.netflix.com/title/81697668
• K Pop Demon Hunters on Netflix: https://www.netflix.com/title/81498621
• Liberty puzzles: https://libertypuzzles.com

—
Recommended books:
• Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity: https://www.amazon.com/Radical-Candor-Revised-Kick-Ass-Humanity/dp/1250235375
• Good Inside: A Practical Guide to Resilient Parenting Prioritizing Connection Over Correction: https://www.amazon.com/Good-Inside-Guide-Becoming-Parent/dp/0063159481
• Leave Me Alone!: A Good Inside Story About Deeply Feeling Kids: https://www.amazon.com/Leave-Me-Alone-Inside-Feeling/dp/1250413117
• The Power of Moments: Why Certain Experiences Have Extraordinary Impact: https://www.amazon.com/Power-Moments-Certain-Experiences-Extraordinary/dp/1501147765/
• The Messy Middle: Finding Your Way Through the Hardest and Most Crucial Part of Any Bold Venture: https://www.amazon.com/Messy-Middle-Finding-Through-Hardest/dp/0735218072
• Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration: https://www.amazon.com/Creativity-Inc-Expanded-Overcoming-Inspiration/dp/0593594649

—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

—
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

Redefining AI - Artificial Intelligence with Squirro
Spotlight Fourteen Video Preview: The Great AI Reshuffle - Who Wins When Systems Change with Sangeet Paul Choudary

Redefining AI - Artificial Intelligence with Squirro

Play Episode Listen Later Jan 29, 2026 3:31


Spotlight Fourteen

History does not repeat itself; it rhymes.

Spotlight Fourteen is taken from the upcoming Redefining AI episode on The Great AI Reshuffle with Sangeet Paul Choudary. Sangeet Paul Choudary, author of Reshuffle, breaks down how AI is fundamentally transforming workflows, organizational structures, and business strategy. Moving beyond the idea of AI as just an intelligence tool, he explains why AI's real power lies in restructuring systems and unlocking entirely new sources of value.

In this upcoming episode, Choudary explores what it means to build AI-native companies, why incumbents must rethink their identities, and how examples like Figma versus Adobe illustrate the coming shift. He also predicts a market correction and narrative reset around AI over the next 3–4 years, offering guidance for leaders on capital allocation, AI investments, and long-term strategy.

The conversation dives into AI's role in regulated industries, its impact on sales and go-to-market strategies, and what executives must do now to stay competitive in an AI-driven economy.

Topics include: AI-native companies, future of work, workflows, organizational design, enterprise AI, strategy, regulation, sales transformation, and innovation leadership.

Who is Sangeet Paul Choudary?

Sangeet Choudary is the best-selling co-author of Platform Revolution and the author of the new book Reshuffle, which was awarded the 2025 Thinkers50 Strategy Award for the most impactful idea in the field of strategy. He has advised CEOs at more than 40 Fortune 500 companies as well as pre-IPO tech firms. He is currently a Senior Fellow at the University of California, Berkeley, and has presented at leading global forums, including the G20 Summit, the World50 Summit, and the World Economic Forum.

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
Anthropic's Claude App Integrations and Hiring Challenges

AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

Play Episode Listen Later Jan 27, 2026 11:34


In this episode, we explore Anthropic's new interactive Claude apps and integrations with popular workplace tools like Slack, Canva, and Figma. We also discuss how Anthropic is facing unique challenges in hiring engineers because their own AI models are now outperforming human applicants in technical assessments.

Chapters
00:00 Anthropic's Claude Updates
01:45 Interactive Claude Apps
04:49 Model Context Protocol & Co-Work
08:17 Agent Permissions & Security Concerns
13:42 AI Outperforms Human Applicants

Scrum Master Toolbox Podcast
BONUS Thinking Like an Architect in the Age of AI-Assisted Coding With Brian Childress

Scrum Master Toolbox Podcast

Play Episode Listen Later Jan 24, 2026 30:58


BONUS: Thinking Like an Architect in the Age of AI-Assisted Coding

How can engineers leverage AI to write better code—and think like architects to build systems that truly scale? In this episode, Brian Childress, a CTO and software architect with over 15 years of experience, shares hard-won lessons from teams using AI coding tools daily, and explains why the real challenge isn't just writing code—it's designing systems that scale with users, features, and teams.

The Complexity Trap: When AI Multiplies Our Problems

"Most engineering projects and software engineers themselves lean more towards complexity, and I find that that complexity really is multiplied when we bring in the power of AI and its ability to write just tons and tons and tons of code."

Brian has observed a troubling pattern: AI tools can generate deeply nested components with complex data flows that technically work but are nearly impossible to understand or maintain. When teams don't guide AI through architectural decisions, they end up with code that becomes "a little too complex for us to understand what is actually going on here." The speed at which AI produces code makes understanding the underlying problem even more critical—we can solve problems quickly, but we must ensure we're solving them the right way.

In this segment, we mention our longer AI Assisted Coding podcast series. Check that out for further insights and different perspectives on how our software community is learning to make better use of AI Assisted Coding tools.

Vibe Coding Has Its Place—But Know Its Limits

"Vibe coding is incredibly powerful for designers and product owners who want to prompt until they get something that really demonstrates what they're trying to do."

Brian sees value across the entire spectrum from vibe coding to architect-driven development. Vibe coding allows teams to move from wireframes and Figma prototypes to actual working code much faster, enabling quicker validation with real customers. The key distinction is knowing when to use each approach:

Vibe coding works well for rapid prototyping and testing whether something has value
Architect thinking becomes essential when building production systems that need to scale and be maintained

What Does "Thinking Like an Architect" Actually Mean?

"When I'm thinking more like an architect, I'm thinking more around how bigger components, higher level components start to fit together."

The architect mindset shifts focus from "how do I work within a framework" to "what is the problem I'm really solving?" Brian emphasizes that technology is actually the easiest part of what engineers do—you can Google or AI your way to a solution. The harder work is ensuring that the solution addresses the real customer need. An architect asks: How can I simplify? How can I explain this to someone else, technical or non-technical? The better you can explain it, the better you understand it.

AI as Your Thought Partner

"What it really forces us to do is to be able to explain ourselves better. I find most software engineers will hide behind complexity because they don't understand the problem."

Brian uses AI as a collaborative thought partner rather than just a code generator. He explains the problem, shares his thought process, and then strategizes back and forth—looking for questions that challenge his thinking. This approach forces engineers to communicate clearly instead of hiding behind technical jargon. The AI becomes like having a colleague with an enormous corpus of knowledge who can see solutions you might never have encountered in your career.

Simplicity Through Four Shapes

"I basically use four shapes to be able to diagram anything, and if I can't do that, then we still have too much complexity. It's a square, a triangle, a circle, and a line."

When helping colleagues shift from code-writing to architect-thinking, Brian insists on dead simplicity. If you can diagram a system—from customer-facing problems down to code component breakdowns, data flow, and integrations—using only these four basic shapes, you've reached true understanding. This simplification creates that "light bulb moment" where engineers suddenly get it and can translate understanding into code while in flow state.

Making AI Work Culturally: Leading by Example

"For me as a leader, as a CTO, I need to show my team this is how I'm using it, this is where I'm messing up with it, showing that it's okay."

Brian addresses the cultural challenge head-on: mid-level and senior engineers often resist AI tools, fearing job displacement or having to support "AI slop." His approach is to frame AI as a new tool to learn—just like Google and Stack Overflow were in years past—rather than a threat. He openly shares his experiments, including failures, demonstrating that it's acceptable to laugh at garbage code while learning from how it was generated.

The Guardrails That Make AI Safe

"If we have all of that—the guardrails, the ability to test, automation—then AI just helps us to create the code in the right way, following our coding standards."

The same engineering practices that protect against human errors protect against AI mistakes: automated testing, deployment guardrails, coding standards, and code review. Brian sees an opportunity for AI to help teams finally accomplish what they've always wanted but never had time for—comprehensive documentation and thorough automated test suites.

Looking Ahead: More Architects, More Experiments, More Failures

"I'm going to see more engineers acting like architects, more engineers thinking in ways of how do I construct this system, how do I move data around, how do I scale."

Brian's 2-3 year prediction: engineers will increasingly think architecturally because AI removes the need to deeply understand framework nuances. We'll have more time for safeguards, automated testing, and documentation. But expect both sides of the spectrum to intensify—more engineers embracing AI tools, and more resistance and high-profile failures from CEOs vibe-coding production apps into security incidents.

Resources for Learning

Brian recommends staying current through YouTube channels focused on AI and developer tools. His top recommendations for developer-focused AI content:

IndyDevDan
NetworkChuck
AI Jason

His broader advice: experiment with everything, document what you learn as you go, and be willing to fail publicly. The engineers who thrive will be those actively experimenting and learning.

About Brian Childress

Brian Childress is a CTO and software architect with over 15 years of experience working across highly regulated industries including healthcare, finance, and consumer SaaS products. He brings a non-traditional background to technology leadership, having built his expertise through dedication and continuous learning rather than formal computer science education. Brian is passionate about helping engineers think architecturally and leverage AI tools effectively while maintaining simplicity in system design.

You can link with Brian Childress on LinkedIn.

Marketing Against The Grain
233M Views in 3 Days: The David Beckham AI Workflow

Marketing Against The Grain

Play Episode Listen Later Jan 20, 2026 43:19


Get PJ's free AI Video Production Stack + Workflow: https://clickhubspot.com/whs Ep. 393 233 million views in just three days — can AI-generated ads really replace million-dollar productions? Kipp, Kieran, and guest, PJ Accetturo, of Genre.ai, dive into the wild world of AI-powered commercial workflows and the viral David Beckham ad that's turning heads across the industry. Learn more about AI-driven creative teams, the tools behind photorealistic video production, and the emerging future—where hyper-niche stories thrive and challenger brands outsmart the incumbents. Mentions PJ Accetturo https://www.linkedin.com/in/pj-accetturo-b3b693129/ Genre.ai https://www.genre.ai/ Figma https://www.figma.com/ Nano Banana Pro https://gemini.google/overview/image-generation/ Freepik https://www.freepik.com/ai/image-generator Veo 3.1 https://gemini.google/overview/video-generation/ Kling https://klingai.com/global/ ElevenLabs https://elevenlabs.io/ Get our guide to build your own Custom GPT: https://clickhubspot.com/customgpt We're creating our next round of content and want to ensure it tackles the challenges you're facing at work or in your business. To understand your biggest challenges we've put together a survey and we'd love to hear from you! https://bit.ly/matg-research Resource [Free] Steal our favorite AI Prompts featured on the show! Grab them here: https://clickhubspot.com/aip We're on Social Media! Follow us for everyday marketing wisdom straight to your feed YouTube: ​​https://www.youtube.com/channel/UCGtXqPiNV8YC0GMUzY-EUFg  Twitter: https://twitter.com/matgpod  TikTok: https://www.tiktok.com/@matgpod  Join our community https://landing.connect.com/matg Thank you for tuning into Marketing Against The Grain! Don't forget to hit subscribe and follow us on Apple Podcasts (so you never miss an episode)! 
https://podcasts.apple.com/us/podcast/marketing-against-the-grain/id1616700934   If you love this show, please leave us a 5-Star Review https://link.chtbl.com/h9_sjBKH and share your favorite episodes with friends. We really appreciate your support. Host Links: Kipp Bodnar, https://twitter.com/kippbodnar   Kieran Flanagan, https://twitter.com/searchbrat  ‘Marketing Against The Grain' is a HubSpot Original Podcast // Brought to you by Hubspot Media // Produced by Darren Clarke.