Podcasts about plg

  • 418 podcasts
  • 1,701 episodes
  • 44m average duration
  • 5 new episodes weekly
  • Latest: Mar 13, 2026

POPULARITY (trend chart, 2019 to 2026)


Best podcasts about plg

Show all podcasts related to plg

Latest podcast episodes about plg

Unchurned
How Slido Cut Support Tickets by 70% With AI ft. Jo Massie (Slido)

Mar 13, 2026 · 18:41


小人物上籃
小人物上籃 – 霹靂鍵盤 #218: Basketball and baseball double-kill Korea! The ultimate rundown of WBC fever and the TPBL registration deadline, 03/09/2026

Mar 11, 2026 · 123:14


After a rollercoaster week of WBC preliminaries, we're sure our listeners, like our three hosts, both cheered for Team Taiwan's young players and agonized over them! Even though they won't advance to Miami, the thrilling win over Korea remains one of the most unforgettable memories of this WBC. Sports fans naturally have to talk baseball, but 霹靂鍵盤 is still a Taiwanese basketball show, and we did watch basketball this week! In the PLG, the 勇士, riding their "March cosmic" momentum, and the 洋基, leaning on interior dominance, beat the 獵鷹 in back-to-back games. Beyond losing 翟蒙 and the resulting drop in strength, what other problems does the suddenly slumping 獵鷹 squad face? In the TPBL, the league-leading 雲豹 also dropped two straight at home; even with 克羅馬 in top form, they couldn't close out the games. What should worry them? Meanwhile, the teams ranked second through fifth are tangled together, with every result shifting the standings, and the trailing 戰神 and 海神 clearly haven't given up either, leaving the back half of the regular season full of variables. In this week's 理性會客室 segment, with the TPBL registration deadline arriving on 3/9, all seven teams have finalized their season rosters: which choices were surprising? Which last-minute import signing could shake up the race? Finally, what is the purpose and meaning of a "registration deadline"? Is there a standard answer to this sudden question? Would delaying or removing it be better? Listeners are welcome to share their own thoughts! Become

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

All speakers are announced at AIE EU, schedule coming soon. Join us there, or in Miami with the renowned organizers of React Miami! Singapore CFP also open! We've called this out a few times over in AINews, but the overwhelming consensus in the Valley is that "the IDE is dead." In November it was just a gut feeling, but now we actually have data: even at the canonical "VS Code fork" company, people are officially using more agents than tab autocomplete (the first wave of AI coding). Cursor launched cloud agents a few months ago, and this specific launch is around computer use, which has come a long way since we first talked with Anthropic about it in 2024, and which Jonas productized as Autotab. We also take the opportunity to do a live demo, talk about slash commands and subagents, and the future of continual learning and personalized coding models, something that Sam previously worked on at New Computer. (The fact that both of these folks are top-tier CEOs of their own startups who have now joined the insane talent density gathering at Cursor should also not be overlooked.) Full episode on YouTube! Please like and subscribe!

Timestamps

00:00 Agentic Code Experiments
00:53 Why Cloud Agents Matter
02:08 Testing First Pillar
03:36 Video Reviews Second Pillar
04:29 Remote Control Third Pillar
06:17 Meta Demos and Bug Repro
13:36 Slash Commands and MCPs
18:19 From Tab to Team Workflow
31:41 Minimal Web UI Philosophy
32:40 Why No File Editor
34:38 Full Stack Cursor Debate
36:34 Model Choice and Auto Routing
38:34 Parallel Agents and Best Of N
41:41 Subagents and Context Management
44:48 Grind Mode and Throughput Future
01:00:24 Cloud Agent Onboarding and Memory

Transcript

EP 77 - CURSOR - Audio version

[00:00:00] Agentic Code Experiments

Samantha: This is another experiment that we ran last year and didn't decide to ship at that time, but may come back to: an LLM judge, but one that was also agentic and could write code.
So it wasn't just picking; it was also taking the learnings from the two or more models it was looking at and writing a new diff. And what we found was that there were strengths to using models from different model providers as the base level of this process. Basically, you could get an almost synergistic output that was better than having a very unified bottom model tier.

Jonas: We think that over the coming months, the big unlock is not going to be one person with a model getting more done, like the water flowing faster. We'll be making the pipe much wider, and so parallelizing more, whether that's swarms of agents or parallel agents. Both of those contribute to getting much more done in the same amount of time.

Why Cloud Agents Matter

swyx: This week, one of the biggest launches Cursor's ever done is cloud agents. I think you had [00:01:00] cloud agents before, but this was like, you give Cursor a computer, right? So basically they bought Autotab and then they repackaged it. Is that what's going on?

Jonas: That's a big part of it, yeah. Cloud agents already ran in their own computers, but they were sort of sight-reading code, and those computers were typically blank VMs that were not set up with the dev environment for whatever repo the agent is working on. One of the things we talk about is: put yourself in the model's shoes. If you were seeing tokens stream by, and all you could do was sight-read code and spit out tokens and hope that you had done the right thing...

swyx: No chance.

Jonas: I'd be so bad. Obviously you need to run the code. And so that, I think, is probably not that contrarian of a take, but no one has done that yet.
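Samantha's agentic-judge experiment can be approximated as a best-of-N loop. This is a minimal sketch under loose assumptions: `generate_diff` and `judge_score` are hypothetical placeholders for model calls, not anything Cursor has published.

```python
# Minimal best-of-N sketch: several models each propose a diff, and a
# judge picks the strongest candidate. generate_diff and judge_score are
# hypothetical placeholders for real model calls.
from concurrent.futures import ThreadPoolExecutor

def generate_diff(model: str, task: str) -> str:
    # Placeholder: ask `model` for a patch that accomplishes `task`.
    return f"diff from {model} for {task!r}"

def judge_score(diff: str) -> float:
    # Placeholder: an LLM judge would rate correctness and style here.
    return float(len(diff))

def best_of_n(models: list[str], task: str) -> str:
    # Fan the task out to all models in parallel.
    with ThreadPoolExecutor() as pool:
        diffs = list(pool.map(lambda m: generate_diff(m, task), models))
    # Pick the top-scoring diff. The agentic judge described above goes
    # one step further: it can write a brand-new diff that combines the
    # learnings of the best candidates.
    return max(diffs, key=judge_score)
```

The agentic variant would add one more model call: feed the top candidates back in and ask for a merged, improved diff rather than just selecting one.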
And so giving the model the tools to onboard itself, and then use full computer use, end-to-end, pixels in and coordinates out, with a cloud computer that has different apps in it, is the big unlock we've seen internally in terms of usage of this. It's gone from "oh, we use it for little copy changes" to "no, we're really driving new features with this new kind of agentic workflow."

swyx: Alright, let's see it. Cool.

Live Demo Tour

Jonas: So this is what it looks like on cursor.com/agents. This is one I kicked off a while ago. On the left-hand side is the chat, a very classic sort of agentic thing. The big new thing here is that the agent will test its changes. You can see here it worked for half an hour. That is because it not only took time to write the tokens of code, it also took time to test them end to end. So it started dev servers and iterated when needed. That's one part of it: the model works for longer and doesn't come back with an "I tried some things" PR, but an "I tested it" PR that's ready for your review. One of the other intuition pumps we use there is: if a human gave you a PR, asked you to review it, and they hadn't tested it, you'd also be annoyed, because you'd be like, only ask me for a review once it's actually ready. So that's what we've done with testing.

Testing Defaults and Controls

swyx: A simple question I wanted to get out in front. [00:03:00] Some PRs are way smaller, like just a copy change. Does it always do the video, or is it sometimes?

Jonas: Sometimes.

swyx: Okay. So what's the judgment?

Jonas: The model does it. We do some default prompting about what types of changes to test. There's a slash command people can use called /no-test, where if you do that, the model will not test.

swyx: But the default is test.

Jonas: The default is to be calibrated. So we tell it: don't test very simple copy changes, but test more complex things.
And then users can also write their agents.md and specify things like: if you're editing this subpart of my monorepo, never test it, because that won't work, or whatever.

Videos and Remote Control

Jonas: So pillar one is the model actually testing. Pillar two is the model coming back with a video of what it did. We have found that in this new world, where agents can end-to-end write much more code, reviewing the code is one of these new bottlenecks that crop up. And reviewing a video is not a substitute for reviewing code, but it is an entry point that is much, much easier to start with than glancing at [00:04:00] some giant diff. So typically you kick one off, it's done, you come back, and the first thing you do is watch this video. So this is a video of it. In this case I wanted a tooltip over this button, and so it went and showed me what that looks like in this video. I think here it actually used a gallery. Sometimes it will build storybook-type galleries where you can see that component in action. So that's pillar two: these demo videos of what it built. And then pillar number three is that I have full remote-control access to this VM. I can go in here, I can hover things, I can type, I have full control. And same thing for the terminal, I have full access. That is also really useful, because sometimes the video is all you need to see. And oftentimes, by the way, the video's not perfect. The video will show you: is this worth merging immediately, or is this worth iterating with to get it to that final stage where I am ready to merge it in? So I can go through some other examples where the first video [00:05:00] wasn't perfect, but it gave me confidence that we were on the right track, and two or three follow-ups later it was good to go. And then I also have full access here, where some things you just wanna play around with.
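An agents.md test policy like the one Jonas mentions might look roughly like this. The exact wording and paths are illustrative; agents.md is free-form natural-language instruction for the agent, not a fixed schema.

```markdown
# agents.md (illustrative)

## Testing policy
- For changes under `apps/web/`, start the dev server and record a video
  of the affected page before opening a PR.
- For changes under `packages/legacy-billing/`, never run end-to-end
  tests; the sandbox has no access to the billing backend.
- Skip testing entirely for copy-only changes.
```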
You wanna get a feel for what this is, and there's no substitute for a live preview. And the VNC kind of VM remote access gives you that.

swyx: Amazing. What, sorry, what is VNC?

Jonas: Just the remote desktop.

swyx: Remote desktop, yeah. Sam, any other details that you always wanna call out?

Samantha: Yeah, for me the videos have been super helpful, especially in cases where a common problem for me with agents and cloud agents beforehand was almost under-specification in my requests. Our plan mode, and going really back and forth to get a detailed implementation spec, is a way to reduce the risk of under-specification. But then, similar to how human communication breaks down over time, you have this risk where it's like: okay, when I go to the trouble of pulling down and running this branch locally, I'm gonna see that, like, I said this should be a toggle and you have a checkbox, and why didn't you get that detail? Having the video up front [00:06:00] makes that alignment, like you said, a shared artifact with the agent, very clear, which has been just super helpful for me.

Jonas: I can quickly run through some other examples.

Meta Agents and More Demos

Jonas: So this is a very front-end-heavy one. One question I was...

swyx: I was gonna say, is this only for front end?

Jonas: Exactly, one question you might have is whether this is only for front end. So this is another example, where the thing I wanted it to implement was a better error message for saving secrets. The cloud agents support adding secrets; that's part of what they need to access certain systems, and part of onboarding is giving that access. So this is a cloud agent working on...

swyx: Cloud agents. Yes.

Jonas: So this is a fun thing.

Samantha: It can get super meta.

Jonas: It can get super meta. It can start its own cloud agents, it can talk to its own cloud agents. Sometimes it's hard to wrap your mind around that.
We have disabled its cloud agents starting more cloud agents. So we currently disallow that.

swyx: Someday you might.

Jonas: Someday we might. So this actually was mostly a backend change, in terms of the error handling here, where if the secret is far too large... oh, this is actually really cool.

swyx: Wow. That's the dev tools.

Jonas: That's the DevTools. So if the secret is far too large: we don't allow secrets above a certain size, we have a size limit on them, and the error message there was really bad. It was just some generic "failed to save" message. So I was like, hey, we want a better error message. First cool thing it did here, with zero prompting on how to test this: instead of typing out a character 5,000 times to hit the limit, it opens DevTools, writes JS to paste 5,000 characters of the letter "a" into the input, closes the DevTools, hits save, and gets the new error message. So that... it looks like the video actually cut off, but here you can see the screenshot of the error message. So that is a front-end plus backend, end-to-end feature.

swyx: Yeah. And you just need a full VM, a full computer, to run everything. Okay.

Jonas: Yeah. So we've had versions of this. This is one of the Autotab lessons, where we started that in 2022... no, in 2023. [00:08:00] And at the time it was browser use, the DOM, all these different things. And I think we ended up very sort of AGI-pilled, in the sense that: just give the model pixels, give it a box. A brain in a box is what you want, and you want to remove limitations around context and capabilities such that the bottleneck is the intelligence. And given how smart models are today, that's a very far-out bottleneck.
And so giving it its full VM, and having it be onboarded with a dev environment set up like a human would have, has just been, for us internally, a really big step change in capability.

swyx: Yeah. I would say, let's call it a year ago, the models weren't even good enough to do any of this stuff.

Samantha: Even six months ago. Yeah.

swyx: So yeah, what people have told me is that right around Sonnet 4.5 is when this started being good enough to just automate fully by pixel.

Jonas: Yeah, I think it's always a question of when is good enough. I think we found in particular with Opus 4.5 and 4.6, and Codex 5.3, that those were additional step [00:09:00] changes in the autonomy-grade capabilities of the model, to just go off and figure out the details and come back when it's done.

swyx: I wanna appreciate a couple details. One: TanStack Router, I see it. I'm a big fan. I have to name-drop TanStack.

Jonas: No.

swyx: Just some random lore; a buddy, Tanner. And then the other thing, if you switch back to the video: I wanna shout out this thing. Probably Sam did it, I don't know.

Jonas: The chapters.

swyx: What is this called? Chapters, yeah. It's like a Vimeo thing, I don't know, but it's so nice, the design details. And obviously a company called Cursor has to have a beautiful cursor.

Samantha: And it is the Cursor cursor.

swyx: You see it branded? It's the Cursor cursor, yeah. Okay, cool. And then I complained to Evan. I was like, okay, but you guys branded everything but the wallpaper. And he was like, no, that's a Cursor wallpaper. I was like, what?

Samantha: Yeah. Rio picked the wallpaper, I think. The video, that's probably Alexi; the chapters on the video, Matthew Frederico; and a few others on the team. There's been a lot of teamwork on this.
It's a huge effort.

swyx: I just like design details.

Samantha: Yeah.

swyx: And then when you download it, it adds a little Cursor, kind of TikTok-style clip, [00:10:00] so it's really obvious it's from Cursor.

Jonas: We did the TikTok branding at the end. This was actually in our launch video: Alexi demoed the cloud agent that built that feature. Which was funny, because that was an instance of one of the consequences of having these videos: we use best-of-N, where you run different models head to head on the same prompt, a lot more. One of the complications with doing that before was you'd run four models and they would come back with some giant diff, like 700 lines of code, times four. What are you gonna do, review all of that? Horrible. But if they come back with four 20-second videos? Yeah, I'll watch four 20-second videos. And then even if none of them is perfect, you can figure out which one you want to iterate with to get it over the line. And so that's been really fun.

Bug Repro Workflow

Jonas: Here's another example we found really cool, which we've actually since turned into a slash command as well: /repro. For bugs in particular, the model, having full access to its own VM, can first reproduce the bug, make a video of the bug reproducing, fix the bug, then make a video of the bug being fixed, doing the same workflow pattern with the bug obviously not reproducing. And that has been the single category that has gone from "these types of bugs are really hard to reproduce and take tons of time locally, and even if you try a cloud agent on it, are you confident it actually fixed it?" to: when this happens, you'll merge it in 90 seconds or something like that. So this is an example where, let me see if this is the broken one or... okay, this is the fixed one.
So we had a bug on cursor.com/agents where if you attached images, removed them, and then still submitted your prompt, they would actually still get attached to the prompt. And here you can see Cursor is using its full desktop, by the way. This is one of the cases where if you just do browser-use type stuff, you'll have a bad time, because now it needs to upload files. It just uses its native file viewer to do that. So you can see here it's uploading files, it's going to submit a prompt, and then it will go and open up... so this is the meta part: this is Cursor agent prompting Cursor agent inside its own environment. And you can see the bug here: there are five images attached, whereas when it submitted, it only had one image.

swyx: I see. Yeah. But you gotta enable that if you're gonna use Cursor agent inside Cursor agent.

Jonas: Exactly. And so this is then the after video, where it does the same thing: it attaches images, removes some of them, hits send. And you can see here, once this agent is up, only one of the images is left in the attachments.

swyx: Beautiful.

Jonas: Okay. So, easy merge.

swyx: So when does it choose to do this? Because this is an extra step.

Jonas: Yes. I think I've not done a great job yet of calibrating the model on when to reproduce these things. Sometimes it will do it of its own accord. We've been conservative, where we try to have it only do it when it's [00:13:00] quite sure, because it does add some amount of time to how long it takes to work on something. But we also have added things like the /repro command, where you can just say "fix this bug /repro," and then it will know that it should first make you a video of it actually finding the bug and making sure it can reproduce it.

swyx: Yeah. One sort of ML topic this ties into is reward hacking, where the model writes a test that only ever passes.
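One standard guard against that failure mode is the red/green discipline: confirm the test fails on the buggy code before trusting that it passes on the fix. A tiny self-contained illustration (the clamp function is a made-up example, not Cursor code):

```python
# Red/green guard against reward hacking: a test only counts if it FAILS
# against the buggy implementation before the fix is applied.
def buggy_clamp(x, lo, hi):
    return max(lo, x)           # bug: the upper bound is never applied

def fixed_clamp(x, lo, hi):
    return max(lo, min(x, hi))  # fix: clamp on both ends

def spec_fails(fn):
    """Return True if the spec test fails for fn (the 'red' check)."""
    try:
        assert fn(99, 0, 10) == 10
        return False
    except AssertionError:
        return True

# Red: the test fails on the buggy version, so it actually tests something.
# Green: the same test passes on the fix.
```

A reward-hacked test would pass on both versions, and the red check is exactly what catches that.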
So: first write the test, show me that it fails, then make the test pass. Which is a classic, like, red-green...

Jonas: Yep.

swyx: Like...

Jonas: ...a TDD...

swyx: ...thing. No, very cool. Was that the last demo? Is there anything I missed on the demos, or points that you think...?

Samantha: I think that covers it well. Yeah.

swyx: Cool. Before we stop the screen share, can you give me just a tour of the slash commands? What are the good ones?

Samantha: Yeah, we wanna increase discoverability around this too. I think that'll be a future thing we work on. But there's definitely a lot of good stuff now.

Jonas: We have a lot of internal ones that I think will not be that interesting. Here's an internal one that I've made; I don't know if anyone else at Cursor uses this one: fix-bb.

Samantha: I've never heard of it.

Jonas: Yeah. [00:14:00] Fix Bug Bot. So this is a thing that we want to integrate more tightly.

swyx: So you made it for yourself.

Jonas: I made this for myself. It's actually available to everyone on the team, but no one knows about it. There will be Bug Bot comments, and Bug Bot has a lot of cool things. We actually just launched Bug Bot Auto-Fix, where you can click a button, or change a setting, and it will automatically fix its own findings, and that works great in a bunch of cases. But there are some cases where having the context of the original agent that created the PR is really helpful for fixing the bugs, because it might be like: oh, the bug here is a regression, and actually you meant to do something more like that. So having the original prompt and all of the context of the agent that worked on it helps, and here I could just run fix-bb and it would do that. /no-test is another one that we've had. /repro is in here; we mentioned that one.

Samantha: One of my favorites is cloud agent diagnosis. This is one that makes heavy use of the Datadog MCP.
[00:15:00] I think Nick and David on our team wrote it. Basically, if there is a problem with a cloud agent, we'll spin up a bunch of subagents...

swyx: Like a single instance?

Samantha: Yeah. We'll take the agent's ID as an argument, and spin up a bunch of subagents using the Datadog MCP to explore the logs and find all of the problems that could have happened with it. It takes the debugging time... you can do quick stuff quickly with the Datadog UI, but it takes it down to, again, a single agent call, as opposed to trawling through the logs yourself.

Jonas: You should also talk about the stuff we've done with transcripts.

Samantha: Yes. We've also done some things internally, and there will be some versions of this that we ship publicly soon, where you can spin up an agent and give it access to another agent's transcript, to either debug something that happened...

swyx: So act as an external debugger.

Samantha: ...or continue the conversation, almost like forking it.

swyx: A transcript includes all the chain of thought for the 11 minutes here, 45 minutes there?

Samantha: Yeah, exactly. So basically acting as a secondary agent that debugs the first. We've started to push more on that.

swyx: And they're all the same [00:16:00] code? It's just different prompts, but the same...

Samantha: Yeah, basically the same cloud agent infrastructure, and the same harness. There's some extra infrastructure that goes into piping in an external transcript if we include it as an attachment. But for things like the cloud agent diagnosis, that's mostly just using the Datadog MCP.
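The fan-out shape Samantha describes, a parent agent spawning one subagent per service to search logs and then merging the findings, could be sketched like this. `query_logs` is a hypothetical stand-in for whatever tool call the Datadog MCP actually exposes, and the log lines are invented for illustration:

```python
# Parent "diagnosis" agent fans out one subagent per service, each of
# which queries logs, then merges the non-empty findings into a report.
from concurrent.futures import ThreadPoolExecutor

def query_logs(service: str, agent_id: str) -> list[str]:
    # Hypothetical stand-in for a subagent calling a logs MCP tool.
    fake_logs = {
        "vm-provisioner": [f"timeout starting VM for agent {agent_id}"],
        "git-sync": [],
        "secrets-store": [f"secret too large for agent {agent_id}"],
    }
    return fake_logs.get(service, [])

def diagnose(agent_id: str, services: list[str]) -> dict[str, list[str]]:
    # One subagent per service, run in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda s: (s, query_logs(s, agent_id)), services)
    # Keep only services that actually reported problems.
    return {svc: hits for svc, hits in results if hits}
```

The point of the pattern is that the parent sees only the merged report, not every raw log line, which is what keeps a single agent call cheap.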
Because along with this cloud agent launch, we also launched MCP support for cloud agents.

swyx: Oh, that wasn't announced.

Jonas: We'll be doing a bigger marketing moment for it next week, but you can now use MCPs.

swyx: People will listen to this afterwards, so they'll be ahead of it.

Samantha: They'll be ahead. And I actually don't know if the Datadog MCP is publicly available yet; I think we're beta testing it, but it's been one of my favorites to use.

swyx: I think that one's interesting for Datadog, because Datadog wants to own that side. Interesting with Bits. I don't know if you've tried Bits.

Samantha: I haven't tried Bits.

Jonas: That's their cloud agent product.

swyx: Yeah. They want to be like: we own your logs, so give us some part of the self-healing software that everyone wants. But obviously Cursor has a strong opinion on coding agents, which you're obviously going to pursue, and not every company's like Cursor. It's interesting: if you're Datadog, what do you do here? Do you expose your logs over MCP and let other people do it, or do you try to own it, because it's extra business for you? It's an interesting one.

Samantha: It's a good question. All I know is that I love the Datadog MCP.

Jonas: And yeah, it's gonna be no surprise that people will demand it, right?

Samantha: Yeah.

swyx: It's like any system-of-record company like this: how much do you give away? Cool. I think that's that for the cloud agents tour. When did Cursor launch cloud agents? Was it June...?

Jonas: Last year.

swyx: June last year. So it's been slowly developed. Michael did a post himself where he showed this chart of agents overtaking tab.
And I'm like, wow, this is the biggest transition in code [00:18:00] in, like, the last...

Jonas: Yeah. I think that kind of got drowned out.

swyx: Not at all. I think it's been highlighted by our friend Andrej Karpathy today.

Jonas: Okay.

swyx: Talk more about it. What does it mean? I just got given, like, the Cursor Tab key.

Jonas: Yes.

swyx: That's cool. But it's gonna be, like, put in a museum.

Jonas: It is.

Samantha: I have to say, I haven't used Tab much myself lately.

Jonas: Yeah. I think that what it looks like to code with AI, to generally create software, even if you want to go higher level, is changing very rapidly. Not a hot take, but from our vantage point at Cursor, one of the things that is probably underappreciated from the outside is that we are extremely self-aware about that fact. Cursor got its start in phase one, era one, of tab and autocomplete. That was really useful in its time. But a lot of people have stopped looking at text files and editing code. We call it hand coding now, when you type out the actual letters.

swyx: Oh, that's cute.

Jonas: Yeah. "You're so boomer." [00:19:00] And so that, I think, has been a slowly accelerating, and now in the last few months rapidly accelerating, shift. And we think that's going to happen again with the next thing, where some of the pains around tab are: it's great, but I actually just want to give more to the agent, and I don't want to do one tab at a time.
I want to just give it a task, and it goes off and does a larger unit of work, and I can lean back a little bit more and operate at that higher level of abstraction. That's going to happen again, where it goes from agents handing you back diffs, and you're in the weeds giving it 30-second to three-minute tasks, to you giving it three-minute to 30-minute to three-hour tasks, and getting back videos and trying out previews rather than immediately looking at diffs every single time.

swyx: Yeah. Anything to add?

Samantha: One other shift that I've noticed as our cloud agents have really taken off internally has been a shift from primarily individually driven development to almost a collaborative nature of development. For us, Slack is actually almost like a development [00:20:00] IDE, basically.

swyx: So, like, maybe don't even build a custom UI. Maybe that's just a debugging thing, but actually it's that.

Samantha: Yeah, I feel like there's still so much left to explore there. But basically, for us, Slack is where a lot of development happens. We will have these issue channels, or product discussion channels, where people are always @-mentioning Cursor, and that kicks off a cloud agent. And for us at least, we have team follow-ups enabled. So if Jonas kicks off a Cursor agent in a thread, I can follow up with it and add more context. So it turns into almost like a discussion surface where people can collaborate on it. Oftentimes I will kick off an investigation, and sometimes I even ask it to git blame and then tag the people who should be brought in, because it can tag people in Slack, and then other people will come in.

swyx: It can tag other people who are not involved in the conversation? It can just do @Jonas if, say, I was talking to it?

Samantha: Yeah.

swyx: That's cool. You guys should make a big deal out of that.

Samantha: I know.
I feel like there's a lot more to do with our Slack surface area to show people externally. But yeah, basically it [00:21:00] can bring other people in, and then other people can also contribute to that thread, and you can end up with a PR, again with the artifacts visible, and then people can be like: okay, cool, we can merge this. So for us, the IDE is almost moving into Slack in some ways as well.

swyx: I have the same experience, but it's not developers, it's me, a designer, salespeople. So me on technical marketing and vision, the designer on design, and then salespeople on "here's the legal source of what we agreed on." And then they all just collaborate and correct the agents.

Jonas: I think what we found with these threads is that the work that is left, what the humans are discussing in these threads, is the nugget of what is actually interesting and relevant. It's not the boring details of "where does this if statement go?" It's: do we wanna ship this? Is this the right UX? Is this the right form factor? How do we make this more obvious to the user? Those really interesting higher-order questions that are so easy to collaborate on, while leaving the implementation to the cloud agent.

Samantha: Totally. And no more discussion of "am I gonna do this? Are you [00:22:00] gonna do this?" Cursor's doing it. You just have to decide if you like it.

swyx: You guys probably figured this out already, but you need, like, a mute button. Like: Cursor, we're going to take this offline, but still online. We need to talk among the humans first. So it could stop responding to everything.

Jonas: Yeah. This is a design decision where currently Cursor won't chime in unless you explicitly @-mention it.
Samantha: So it's not always listening.

Jonas: It can see all the intermediate messages, though.

swyx: Have you done the recursive thing? Can Cursor add another Cursor, or spawn another Cursor? Because it can add humans.

Jonas: We've done some versions of this. Yes. One of the other things we've been working on, which is an implication of generating the code being so easy, is that getting it to production is still harder than it should be. Broadly, you solve one bottleneck and three new ones pop up. One of the new bottlenecks is getting into production, and we have a joke internally where you'll be talking about some feature and someone says, "I have a PR for that." It's so easy [00:23:00] to get to "I have a PR for that," but it's still relatively hard to get from "I have a PR for that" to "I'm confident and ready to merge this." And so over the coming weeks and months, a thing we think a lot about is how we scale up compute on that pipeline of getting things from a first draft an agent did.

swyx: Isn't that what... I know what Graphite's for, like...

Jonas: Graphite is a big part of that. The cloud agent testing...

swyx: Is it fully integrated, or still different companies working on it?

Jonas: I think we'll have more to share there in the future, but the goal is to have a great end-to-end experience, where Cursor doesn't just help you generate code tokens, it helps you create software end-to-end. And review is a big part of that. Especially as models have gotten much better at writing and generating code, we've felt that bottleneck crop up relatively more.

swyx: Sorry, this is completely unplanned, but I have people arguing, one, that you need AI to review AI, and then there is another school of thought where it's like, no, [00:24:00] reviews are dead, just show me the video.

Samantha: Yeah.
I feel like, again, for me, the video is often alignment, and then I often still wanna go through a code review process.

swyx: Like still look at the files and everything.

Samantha: Yeah. There's a spectrum, of course. The video, if it's really well done and it fully tests everything, you can feel pretty confident, but it's still helpful to look at the code. I pay a lot of attention to Bug Bot. Bug Bot has been really highly adopted internally. We tell people: don't leave Bug Bot comments unaddressed, because we have such high confidence in it. So people always address their Bug Bot comments.

Jonas: Once you've had two cases where you merged something, and then you went back later, and there was a bug in it, and you were like, "ah, Bug Bot had found that, I should have listened to Bug Bot"... once that happens two or three times, you learn to wait for Bug Bot.

Samantha: Yeah. So I think for us there's that code-level review, where it's looking at the actual code, and then there's the feature-level review, where you're looking at the features. There are a whole number of different areas. There'll probably eventually be things like performance-level review, security [00:25:00] review, things like that: more different aspects of how a feature might affect your code base that you want to leverage an agent to help with.

Jonas: And some of those, like Bug Bot, will be synchronous, and you'll typically want to wait on them before you merge. But another thing we're starting to see is that as cloud agents scale up this parallelism and how much code you generate, 10-person startups now need the developer-experience infrastructure and pipelines that a 10,000-person company used to need.
And that looks like a lot of the things I think 10,000-person companies invented in order to get that volume of software to production safely. So that's things like: release frequently or release slowly, have different stages where you release, have checkpoints, automated ways of detecting regressions. And so I think we're gonna need stacked diffs, merge queues.
swyx: Exactly. A lot of those things are going to be important going forward. I think the majority of people still don't know what stacked diffs are. I have many friends at Facebook and I'm pretty friendly with Graphite; I've just never needed it, 'cause I don't work on that larger team. It's the democratization of, here's what we've already worked out at very large scale, and here's how it benefits you too. To me, one of the beautiful things about GitHub is that it's actually useful to me as an individual solo developer, even though it's actually collaboration software. And I don't think a lot of dev tools have figured that out yet, that transition from large down to small.
Jonas: Yeah. Cursor is probably the inverse story.
swyx: This is small up to large.
Jonas: Yeah. Historically with Cursor, part of why we grew so quickly was anyone on the team could pick it up, and in fact people would pick it up on the weekend for their side project and then bring it into work, 'cause they loved using it so much. And a thing that we've started working on a lot more, not us specifically, but as a company, other folks at Cursor, is making it really great for teams, making it so that the 10th person that starts using Cursor on a team is immediately set up. We launched Marketplace recently, so other people can configure MCPs and skills, like plugins. So skills and MCPs, other people can configure that, so that my Cursor is ready to go and set up.
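The merge-queue idea that came up above can be sketched as a toy: each queued change is tested against the tentative state of main plus everything ahead of it, and only lands if that combined state stays green. The names and the CI rule below are illustrative, not how Graphite or Cursor actually implement it.

```python
from typing import Callable

def run_merge_queue(main: list[str], queue: list[str],
                    passes: Callable[[list[str]], bool]) -> list[str]:
    """Toy merge queue: a change lands only if the tentative state
    (main + changes ahead of it + itself) still passes CI."""
    landed = list(main)
    for change in queue:
        candidate = landed + [change]
        if passes(candidate):   # CI runs against the combined state
            landed = candidate  # green: the change lands
        # red: the change is kicked out; later entries retest without it
    return landed

# Hypothetical CI rule: the combined state may not contain conflicting flags.
ci = lambda state: not ({"enable-x", "disable-x"} <= set(state))
```

With this rule, a queue of ["enable-x", "disable-x", "feat-y"] lands the first and third change and rejects the conflicting second one, without blocking the entries behind it.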
Sam loves the Datadog MCP, and the Slack MCP you've also been using a lot.
Samantha: Also pre-launch, but I feel like it's so good.
Jonas: Yeah. My Cursor should be configured: if Sam feels strongly, that's just amazing and required.
swyx: Is it automatically shared, or do you have to go and...
Jonas: It depends on the MCP. Some are obviously auth'd per user, and so Sam can't auth my Cursor with my Slack MCP, but some are team auth, and those can be set up by admins.
swyx: Yeah, that's cool. I think we had Aman on the pod when Cursor was five people, and everyone was like, okay, what's the thing? And it's usually something teams and org and enterprise. But it's actually working. Usually at that stage, when you're five, when you're just a VS Code fork, it's like, how do you get there? Will people pay for this? People do pay for it.
Jonas: Yeah. And I think for cloud agents we expect [00:28:00] to have similar kinds of PLG things, where off the bat we've seen a lot of adoption with smaller teams, where the code bases are not quite as complex to set up. If you need some insane Docker layer caching thing for builds not to take two hours, that's going to take a little bit longer for us to be able to support. Whereas if you have a front end and back end, with one click agents can install everything that they need themselves.
swyx: This is a good chance for me to just ask some technical, check-the-box questions. Can I choose the size of the VM?
Jonas: Not yet. We are planning on adding that.
swyx: You obviously want L, XL, whatever, like the Amazon-style size menu.
Jonas: Yes, exactly. We'll add that.
swyx: Yeah. In some ways you basically have to become like an EC2, almost like you rent a box.
Jonas: You rent a box, yes. We talk a lot about a brain in a box. Cursor, we want to be a brain in a box.
swyx: But is the mental model different?
Is it more serverless? Is it more persistent? Is it something else?
Samantha: We want it to be persistent. The desktop should be [00:29:00] something you can return to even after some days. Maybe you go back and it's still thinking about a feature for some period of time.
swyx: The full, like, suspend the memory and bring it back and then keep going.
Samantha: Exactly.
swyx: That's an interesting one, because what I actually do want, from a Manus and OpenClaw, whatever, is I want to be able to log in with my credentials to the thing, but not actually store it in any secret store, whatever. 'Cause this is my most sensitive stuff. This is my email, whatever. And just have it persist to the image, I don't know how it works under the hood, to rehydrate and then just keep going from there. But I don't think a lot of infra works that way. A lot of it's stateless, where you save it to a Docker image, and then it's only whatever you can describe in a Dockerfile, and that's it. That's the only thing you can clone multiple times in parallel.
Jonas: Yeah. We have a bunch of different ways of setting them up. So there's a Dockerfile-based approach. The main default way is actually snapshotting,
swyx: like a Linux VM.
Jonas: Like a VM, right? You run a bunch of install commands and then you snapshot, more or less, the file system. And so that gets you set up with everything [00:30:00] you would want, to bring a new VM up from that template, basically.
swyx: Yeah.
Jonas: And that's a bit distinct from what Sam was talking about with the hibernating and rehydrating, where that is a full memory snapshot as well.
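The two snapshot flavors being contrasted can be modeled as a toy: a disk snapshot captures only the filesystem, so a VM booted from the template starts with empty memory, while hibernation captures memory too, so a resumed VM picks up mid-session. This is an illustrative sketch, not Cursor's actual infrastructure.

```python
from dataclasses import dataclass, field
from copy import deepcopy

@dataclass
class VM:
    fs: dict = field(default_factory=dict)      # installed tools, repo checkout
    memory: dict = field(default_factory=dict)  # open pages, running processes

def disk_snapshot(vm: VM) -> dict:
    """Template snapshot: filesystem only."""
    return deepcopy(vm.fs)

def boot_from_template(template: dict) -> VM:
    """A fresh boot gets the template's filesystem but empty memory."""
    return VM(fs=deepcopy(template), memory={})

def hibernate(vm: VM) -> tuple[dict, dict]:
    """Full snapshot: filesystem AND memory."""
    return deepcopy(vm.fs), deepcopy(vm.memory)

def resume(snapshot: tuple[dict, dict]) -> VM:
    """Resuming restores memory too, so the open page is still there."""
    fs, mem = snapshot
    return VM(fs=deepcopy(fs), memory=deepcopy(mem))
```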
So there, if I had the browser open to a specific page and we bring that back, that page will still be there.
swyx: Was there any discussion internally, in building this stuff, about, every time you shoot a video you show a little bit of the desktop and the browser, and it's not necessary? If you're just demoing a front-end application, why not just show the browser?
Samantha: We do have some panning and zooming. It can decide, when it's actually recording and cutting the video, to highlight different things. I think we've played around with different ways of segmenting it, and there's been some different revs on it, for sure.
Jonas: Yeah. One of the interesting things is the version that you see now on cursor.com is actually like half of what we had at peak, where we decided to unship quite a few things. So two of the interesting things to talk about: one is directly an answer to your [00:31:00] question, where we had a native browser that you would have locally. It was basically an iframe that, via port forwarding, could load the URL, could talk to localhost in the VM. So that gets you, basically,
swyx: in your machine's browser?
Jonas: In your local browser, yeah. You would go to localhost:4000 and that would get forwarded to localhost:4000 in the VM via port forward. We unshipped that,
swyx: like an ngrok.
Jonas: Like an ngrok, exactly. We unshipped that because we felt that the remote desktop was sufficiently low latency and more general purpose. So we built Cursor web, but we also built Cursor desktop. And so it's really useful to be able to have the full spectrum of things.
And even for Cursor Web, as you saw in one of the examples, the agent was uploading files, and I couldn't upload files or open the file viewer if I only had access to the browser. And we've thought a lot about this. It might seem funny coming from Cursor, where we started as this VS Code fork and I think inherited a lot of amazing things, but also a lot [00:32:00] of legacy UI from VS Code.
Minimal Web UI Surfaces
Jonas: And so with the web UI we wanted to be very intentional about keeping it very minimal, and exposing the right set of primitives, sort of "app surfaces" we call them, that are shared features of that cloud environment that you and the agent both use. The agent uses the desktop and controls it, I can use the desktop and control it; the agent runs terminal commands, I can run terminal commands. That's our philosophy around it. The other thing that is maybe interesting to talk about that we unshipped, and both of these things we may reship if we decide at some point in the future that we've changed our minds on the trade-offs or gotten it to a point where...
swyx: Put it out there. Let users tell you they want it.
Jonas: Exactly. Alright, fine.
Why No File Editor
Jonas: So one of the other things is actually a files app. At one point during the process of testing this internally, next to git, desktop, and terminal on the right-hand side of the tabs there earlier, we also had a files app where you could see and edit files. And we actually felt that, in some [00:33:00] ways, by restricting and limiting what you could do there, people would naturally leave more to the agent and fall into this new pattern of delegating, which we thought was really valuable. And there's currently no way in Cursor web to edit these files.
swyx: Yeah. Except you open up the PR and go into GitHub and do the thing.
Jonas: Yeah.
swyx: Which is annoying.
Jonas: Just tell the agent.
swyx: I have criticized OpenAI for this.
Because OpenAI's Codex app doesn't have a file editor; it has a file viewer, but not a file editor.
Jonas: Do you use the file viewer a lot?
swyx: No. I understand, but sometimes I want it. The one way to do it is, they have an "open in Cursor" button, or "open in Antigravity," or open in whatever. And people pointed at that. I was part of the early testers group, and people pointed at that and were like, this is a design smell. It's like, you actually want a VS Code fork that has all these things, but also a file editor. And they were like, no, just trust us.
Jonas: Yeah. I think we as Cursor will want to, as a product, offer the [00:34:00] whole spectrum, and so you want to be able to.
You don't have the versal, you don't have the, you whatever deploy infrastructure that, that you're gonna have, which gives you powers because anyone can use it. And any enterprise who, whatever you infra, I don't care. But then also gives you limitations as to how much you can actually fully debug end to end.I guess I'm just putting out there that like is there a future where there's like full stack cursor where like cursor apps.com where like I host my cursor site this, which is basically a verse clone, right? I don't know.Jonas: I think that's a interesting question to be asking, and I think like the logic that you laid out for how you would get there is logic that I largely agree with.swyx: Yeah. Yeah.Jonas: I think right now we're really focused on what we see as the next big bottleneck and because things like the Datadog MCP exist, yeah. I don't think that the best way we can help our customers ship more software. Is by building a hosting solution right now,swyx: by the way, these are things I've actually discussed with some of the companies I just named.Jonas: Yeah, for sure. Right now, just this big bottleneck is getting the code out there and also [00:36:00] unlike a lovable in the bolt, we focus much more on existing software. And the zero to one greenfield is just a very different problem. Imagine going to a Shopify and convincing them to deploy on your deployment solution.That's very different and I think will take much longer to see how that works. May never happen relative to, oh, it's like a zero to one app.swyx: I'll say. It's tempting because look like 50% of your apps are versal, superb base tailwind react it's the stack. It's what everyone does.So I it's kinda interesting.Jonas: Yeah.Model Choice and Auto Routingswyx: The other thing is the model select dying. Right now in cloud agents, it's stuck down, bottom left. Sure it's Codex High today, but do I care if it's suddenly switched to Opus? 
Probably not.
Samantha: We definitely wanna give people a choice across models, because the meta changes very frequently. I was a big Opus 4.5 maximalist, and when Codex 5.3 came out, I hard-switched. So that's all I use now.
swyx: Yeah, agreed. Basically, when I use it in Slack, [00:37:00] Cursor does a very good job of exposing, if people go use it, here's the model we're using, here's how you switch if you want. But otherwise it's abstracted away, which is beautiful, because then you actually... you should decide.
Jonas: Yeah, I think we want to be doing more with defaults.
swyx: Yeah.
Jonas: Where we can suggest things to people. A thing that we have in the editor, the desktop app, is Auto, which will route your request and do things there. So I think we will want to do something like that for cloud agents as well; we haven't done it yet. We have both people like Sam, who are very savvy and know exactly what model they want, and we also have people that want us to pick the best model for them, because we have amazing people like Sam and we are the experts. We have both the traffic and the internal taste and experience to know what we think is best.
swyx: Yeah. I have this ongoing thesis of agent lab versus model lab. And to me, Cursor and other companies are examples of an agent lab that is building a new playbook, different from a model lab, which is very GPU heavy. Cursor obviously has a research [00:38:00] team. And my thesis is that every agent lab is going to have a router, because you're going to be asked, what's what? I don't keep up every day. I'm not a Sam. Put me on Cursor Auto. Is it free? It's not free.
Jonas: Auto's not free, but there's different pricing tiers.
swyx: Put me on cruise.
You decide for me, based on all the other people; you know better than me. And I think every agent lab should basically end up doing this, because that actually gives you extra power: people stop caring about or having loyalty to one lab.
Jonas: Yeah.
Best Of N and Model Councils
Jonas: Two other maybe interesting things that I don't know how much are on your radar. One is the best-of-n thing we mentioned, where running different models head to head is actually quite interesting, because...
swyx: Which exists in Cursor.
Jonas: That exists in the Cursor IDE and on web. The problem is where you run them.
swyx: Okay.
Jonas: And so I can share my screen if that's interesting.
swyx: Yeah, interesting. Obviously parallel agents, very popular.
Jonas: Yes, exactly. Parallel agents.
swyx: In your mind, are they the same thing, best-of-n and parallel agents? I don't want to [00:39:00] put words in your mouth.
Jonas: Best-of-n is a subset of parallel agents where they're running on the same prompt. That would be my answer. So this is what that looks like. Here in this dropdown picker, I can just select multiple models.
swyx: Yeah.
Jonas: And now if I do a prompt, I'm going to do something silly: I am running these five models.
swyx: Okay. This is this fake clone, of course, the 2.0, yeah.
Jonas: Yes, exactly. But they're running... so in Cursor 2.0 you can do desktop or cloud. This is cloud specifically, where the benefit over worktrees is that they have their own VMs and can run commands, and won't try to kill ports that the other one is running on, which are some of the pains.
swyx: These are all called worktrees?
Jonas: No, these are all cloud agents with their own VMs.
swyx: Okay. But
Jonas: when you do it locally, sometimes people do worktrees, and that's been the main way that people have set up parallelism so far.
swyx: I've gotta say, that's so confusing for folks.
Jonas: Yeah.
swyx: No one knows what worktrees are.
Jonas: Exactly.
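The best-of-n fan-out being demoed (the same prompt sent to several models in parallel, one result kept) reduces to something like the sketch below; the stand-in models and the judge are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in "models": in reality these would be API calls to different providers.
MODELS = {
    "model-a": lambda prompt: prompt.upper(),
    "model-b": lambda prompt: prompt + "!",
    "model-c": lambda prompt: prompt * 2,
}

def best_of_n(prompt: str, score) -> tuple[str, str]:
    """Run the same prompt against every model in parallel, keep the best."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in MODELS.items()}
        results = {name: f.result() for name, f in futures.items()}
    winner = max(results, key=lambda name: score(results[name]))
    return winner, results[winner]

# Hypothetical judge: prefer the longest output.
winner, output = best_of_n("fix the bug", score=len)
```

The interesting part in practice is exactly what Jonas says: where the n rollouts run, since on one machine they fight over ports and state, while per-agent VMs keep them isolated.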
I think we're phasing out worktrees.
swyx: Really?
Jonas: Yeah.
swyx: Okay.
Samantha: But one other thing I would say on the multi-model choice: [00:40:00] this is another experiment that we ran last year and didn't decide to ship at that time, but may come back to. And there was an interesting learning that's relevant for these different model providers. It was something that would run a bunch of best-of-ns, but then synthesize, basically run a synthesizer layer of models. And that was other agents that would take... an LLM judge, but one that was also agentic and could write code. So it wasn't just picking, but also taking the learnings from the models it was looking at and writing a new diff. And what we found, at the time at least, was that there were strengths to using models from different model providers at the base level of this process. Basically, you could get an almost synergistic output that was better than having a very unified bottom model tier. So it was really interesting, 'cause even in the future, when you maybe have one model ahead of the others for a little bit, there could be some benefit from having multiple top-tier models involved in whatever [00:41:00] model swarm or agent swarm you're doing; they each have strengths and weaknesses.
Jonas: Andrej called this the council, right?
Samantha: Yeah, exactly. That's another internal command we have, that Ian wrote: /council.
swyx: Yes. This idea is in various forms everywhere. And for me, the productization of it... you guys have done it, and this is very flexible. But if I were to add another thing on here, it would be too much.
I what, let's say,Samantha: Ideally it's all, it's something that the user can just choose and it all happens under the hood in a way where like you just get the benefit of that process at the end and better output basically, but don't have to get too lost in the complexity of judging along the way.Jonas: Okay.Subagents for ContextJonas: Another thing on the many agents, on different parallel agents that's interesting is an idea that's been around for a while as well that has started working recently is subagents. And so this is one other way to get agents of the different prompts and different goals and different models, [00:42:00] different vintages to work together.Collaborate and delegate.swyx: Yeah. I'm very like I like one of my, I always looking for this is the year of the blah, right? Yeah. I think one of the things on the blahs is subs. I think this is of but I haven't used them in cursor. Are they fully formed or how do I honestly like an intro because do I form them from new every time?Do I have fixed subagents? How are they different for slash commands? There's all these like really basic questions that no one stops to answer for people because everyone's just like too busy launching. We have toSamantha: honestly, you could, you can see them in cursor now if you just say spin up like 50 subagents to, so cursor definesswyx: what Subagents.Yeah.Samantha: Yeah. So basically I think I shouldn't speak for the whole subagents team. This is like a different team that's been working on this, but our thesis or thing that we saw internally is that like they're great for context management for kind of long running threads, or if you're trying to just throw more compute at something.We have strongly used, almost like a generic task interface where then the main agent can define [00:43:00] like what goes into the subagent. 
So if I say, explore my code base, it might decide to spin up an explore subagent, or might decide to spin up five explore subagents.
swyx: But I don't get to set what those subagents are, right? It's all defined by the model.
Samantha: I actually would have to refresh myself on the subagent interface.
Jonas: There are some built-in ones; the explore subagent comes pre-built. But you can also instruct the model to use other subagents, and then it will. One other example of a built-in subagent: I actually just kicked one off in Cursor, and I can show you what that looks like.
swyx: Yes. Because I tried to do this in pure prompt space.
Jonas: So this is the desktop app.
swyx: And that's all you need to do, right?
Jonas: That's all you need to do. I said "use a subagent to explore," and I can even click in and see what the subagent is working on here. It ran some find command, and this is a Composer under the hood. Even though my main model is Opus, it does smart routing, because in this instance the explore sort of requires reading a ton of things, and a faster model is really useful to get an [00:44:00] answer quickly. But this is what subagents look like, and I think we want to do a lot more to expose hooks and ways for people to configure these. Another example of a built-in subagent is the computer-use subagent in the cloud agents, where we found that those trajectories can be long, involve a lot of images obviously, and execute testing and verification tasks, so we wanted to use models that are particularly good at that. So that's one reason to use subagents. And then the other reason is that we want context to be summarized, reduced down, at the subagent level.
That's a really neat boundary at which to compress that rollout and testing into a final message the agent writes, which then gets passed to the parent, rather than having to do some global compaction or something like that.
swyx: Awesome. Cool. While we're in the subagents conversation, I can't do a Cursor conversation and not talk about the Wilson stuff. He built a browser. He built an OS. And he [00:45:00] experimented with a lot of different architectures and basically ended up reinventing the software engineering org chart. This is all cool, but what's your take? Is there any behind-the-scenes story about that whole adventure?
Samantha: Some of those experiments have found their way into a feature that's available in cloud agents now, the long-running agent mode; internally, we call it grind mode. And I think there's some hint of grind mode accessible in the picker today, 'cause you can choose "grind until done." And so that was really the result of experiments that Wilson started in this vein. I think the Ralph Wiggum loop was floating around at the time, but it was something he also independently found and was experimenting with, and that was what led to this product surface.
swyx: And it's just the simple idea of, have criteria for completion, and do not stop until you complete?
Samantha: There's a bit more complexity in our implementation. You have to start out by aligning: there's a planning stage where it will work with you, and it will not start grind execution mode until it's decided that the [00:46:00] plan is amenable to both of you, basically.
swyx: "I refuse to work until you make me happy."
Jonas: We found that's really important, where people would give a very underspecified prompt and then expect it to come back with magic. And if it's gonna go off and work for three minutes, that's one thing.
When it's gonna go off and work for three days, you probably should spend a few hours upfront making sure that you have communicated what you actually want.
swyx: Yeah. And just to really drive home the point: we really mean three days, no
Jonas: human. Oh yeah, we've had three-day ones, no intervention whatsoever.
Samantha: I don't know what the record is, but there have been some long grinds.
Jonas: And so the thing that is available in Cursor, the long-running agent, if you wanna think about it very abstractly, is like one worker node. Whereas what built the browser is a society of workers and planners and different agents collaborating. Because we started building the browser with one worker node; at the time, that was just the agent. And it became one worker node when we realized that the throughput of the system was not where it needed to be [00:47:00] to get something as large in scale as the browser done.
swyx: Yeah.
Jonas: And so this has also become a really big mental model for us with cloud agents: the classic engineering latency-throughput trade-offs. The code is water flowing through a pipe. We think that over the coming months, the big unlock is not going to be one person with a model getting more done, the water flowing faster; we'll be making the pipe much wider and shipping more, whether that's swarms of agents or parallel agents. Both of those contribute to getting much more done in the same amount of time, but any one of those tasks doesn't necessarily need to get done that quickly. And throughput is this really big thing, where if you see a system of a hundred concurrent agents outputting thousands of tokens a second, you can't go back. You see a glimpse of the future, where obviously there are many caveats, like no one is using this browser IRL.
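The grind-mode shape described here (align on a plan first, then loop attempt-and-verify until the completion criteria pass or a budget runs out) can be sketched as a toy loop; the callables stand in for model calls and test runs:

```python
def grind(plan_approved, attempt, is_done, max_iters: int = 1000) -> int:
    """Refuse to start until the plan is aligned, then loop until done.
    Returns the number of iterations it took."""
    if not plan_approved():  # planning gate: clarify before executing
        raise ValueError("plan not aligned; keep clarifying with the user")
    for i in range(1, max_iters + 1):
        attempt()            # one unit of work (edit, run tests, ...)
        if is_done():        # completion criteria, e.g. CI goes green
            return i
    raise TimeoutError("budget exhausted before completion criteria passed")
```

The two guards are what distinguish this from a bare retry loop: the upfront alignment gate (so a three-day run isn't spent on the wrong goal) and an explicit, checkable definition of done.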
There's a bunch of things not quite right yet, but we are going to get to systems that produce real production [00:48:00] code at that scale much sooner than people think. And it forces you to think about what even happens to production systems. We've broken our GitHub Actions recently, because we have so many agents producing and pushing code that CI/CD is just overloaded. Cursor's growing very quickly anyway, but you effectively grow headcount 10x when people run 10x as many agents. And so a lot of these systems will need to adapt.
swyx: It also reminds me, the three of us live at the app layer, but if you talk to the researchers doing RL infrastructure, it's the same thing: all these parallel rollouts, scheduling them, making sure as much throughput as possible goes through them. It's the same thing.
Jonas: We were talking briefly before we started recording; you were mentioning memory chips and some of the shortages there. The other thing that's just hard to wrap your head around is the scale of the system that was building the browser, the concurrency there. If Sam and I both have a system like that running for us, [00:49:00] shipping our software, the amount of inference that we're going to need per developer is just really mind-boggling. And sometimes when I think about that, I think that even the most optimistic projections for what we're going to need in terms of buildout are underestimating the extent to which these swarm systems can churn at scale to produce code that is valuable to the economy.
swyx: Yeah. You can cut this if it's sensitive, but do you have estimates of how much your token consumption is?
Jonas: Like per developer?
swyx: Yeah, or yourself. I don't need a company average, I'm just curious.
Samantha: I feel like, for a while, I wasn't an admin on the usage dashboard, so I wasn't able to actually see, but...
swyx: Mine has gone up.
Samantha: Oh yeah. In terms of how much work I'm doing... I have no worries about developers losing their jobs, at least in the near term, 'cause I feel like that's a broader discussion.
swyx: Yeah. You went there; I wasn't going there. I was just asking how much more you are using.
Samantha: There's so much stuff to be built. And so I feel like I'm basically just [00:50:00] constantly... I have more ambitions than I did before. Personally, yes. So I can't speak to the broader thing, but for me, I'm busier than ever before, I'm using more tokens, and I am also doing more things.
Jonas: Yeah. I don't have the stats for myself, but broadly, a thing that we've seen, and that we expect to continue, is Jevons paradox.
swyx: You can't do a podcast without saying it.
Jonas: Exactly. We've done it, now we can wrap. We said the words. Phase one, tab autocomplete: people paid like 20 bucks a month, and that was great. Phase two, where you were iterating with these local models: today people pay hundreds of dollars a month. I think as we get to these highly parallel agents running off for a long time in their own VM systems, we are already at the point where people will be spending thousands of dollars a month per human, and I think potentially tens of thousands and beyond. It's not that we are greedy for capturing more money; what happens is individuals get that much more leverage. And if one person can do as much as 10 people,
that tool that allows them to do that is going to be tremendously valuable [00:51:00] and worth investing in, and taking the best thing that exists.
swyx: One more question on Cursor in general, and then it's open-ended for you guys to plug whatever you wanna plug. How is Cursor hiring these days?
Samantha: What do you mean by how?
swyx: So, obviously LeetCode is dead.
Samantha: Oh, okay.
swyx: Everyone says work trials. Different people have different levels of adoption of agents. Some people have really adopted them and can be much more productive, but other people, you just need to give them a little bit of time. Sometimes they've never lived in a token-rich place like Cursor, and once you live in a token-rich place, you just work differently, but you need to have done that. Anyway, it's open-ended: how has agentic engineering, agentic coding, changed your opinions on hiring? Are there any broad insights?
Jonas: You're basically asking this for other people, right?
swyx: Yeah, totally.
Jonas: Also to hear Sam's opinion; we haven't talked about this, the two of us. I think that we don't necessarily see being great at the latest AI coding thing as a prerequisite. I do think it's a sign that people are keeping up [00:52:00] and curious and willing to upskill themselves on what's happening, because, as we were talking about, in the last three months the game has completely changed. What I do all day is very different.
swyx: And it's my job, and I can't keep up.
Jonas: Yeah, totally. I do think that, as Sam was saying, the fundamentals remain important in the current age, and being able to go and double-click down.
And models today do still have weaknesses, where if you let them run for too long without cleaning up and refactoring, the code will get sloppy and there'll be bad abstractions. And so you still do need humans that have built systems before, know good patterns when they see them, and know where to steer things.
Samantha: I would agree with that. I would say Cursor operates very quickly, and leveraging agentic engineering is probably one reason why that's possible in this current moment. In the past it was just people coding quickly, and now there are people who use agents to move faster as well. So our process will always select for that ability to make good decisions quickly and move well in this environment. And I think being able to [00:53:00] figure out how to use agents to help you do that is an important part of it too.
swyx: Yeah. Okay, the fork in the road: either predictions for the end of the year, if you have any, or plugs.
Jonas: Predictions are not going to go well.
Samantha: I know, it's hard.
swyx: They're so hard. It's okay to get it wrong.
Jonas: One other plug that may be interesting, that I feel like we touched on but haven't talked a ton about: a thing that these new interfaces and this parallelism enable is the ability to hop back and forth between threads really quickly.
swyx: You wanna show something?
Jonas: Yeah, I can show something. A thing that we have felt with local agents is this pain around context switching. You have one agent that went off and did some work, and another agent that did something else. And so here, I just have three tabs open, say, but I can very quickly hop in here. This is an example I showed earlier, but the actual workflow here I think is really different in a way that may not be obvious, where I start t

Product-Led Podcast
How Netlify Became the Obvious Choice in their Market

Mar 6, 2026 · 57:49


Chris Bach, founder of Netlify, joins Wes Bush and Esben Friis-Jensen to break down how Netlify became a default choice in modern web development. Chris shares how Netlify started as a bet on a new web architecture that moved beyond monolithic applications, and why bottom-up adoption through developers was not optional, but the only viable go-to-market path. They dig into what many founders skip: building a clear worldview of how the market is evolving, then reverse-engineering what needs to exist for that future to become real. Chris explains how this approach shaped Netlify's early product decisions, its ecosystem strategy, and the narrative that helped attract users, partners, and investors. The conversation also tackles a common founder dilemma: product-led vs. sales-led. Chris offers a simple filter: if you cannot deliver a "magic moment" quickly for an individual user, PLG may be the wrong motion. He also argues that trying to do both sales-led and product-led at the same time often leads to doing neither well. Finally, Chris shares how his investing approach grew out of ecosystem-building, why learning requires asking "stupid" questions, and how he now thinks about the next wave: agents as the new "user," and the infrastructure required to support them.
Key Highlights
00:00 – Why Netlify Became the "Obvious Choice": Wes introduces Chris and tees up the core theme: building a compelling worldview and executing it until the market sees your product as the default.
00:00:59 – Netlify's Mission: Escape the Monolith: Chris explains Netlify's original bet on a new web architecture and why early enterprise use cases were limited without a supporting ecosystem.
00:03:34 – When PLG Works: Start With the "Magic Moment": A practical filter for founders: if an individual user cannot quickly experience value, PLG may be a mismatch.
00:07:31 – Pick a Motion First: Hybrid Comes Later: Chris warns against trying to do sales-led and product-led at the same time, especially with limited startup resources.
00:11:17 – The Worldview Advantage: Context Before Product: How Netlify spent serious time mapping where the web was headed, then reverse-engineered what they needed to build first.
00:15:41 – Storytelling That Wins: Small Story vs. Big Story: Why messaging must change depending on the audience, and how Netlify avoided being boxed in as "just hosting."
00:25:17 – Category Creation: Why Jamstack Mattered: Chris shares how coining "Jamstack" worked because it benefited the whole ecosystem, not just Netlify's marketing.
00:29:08 – Ecosystem Fuel: Directories, OSS, and Deploy Previews: Tactics that helped win developer mindshare, including community resources and making open source easy to deploy.
00:32:31 – The First 20: Targeting Influential Early Adopters: Netlify's early focus was literally a list of 20 key people, then expanding in concentric circles from there.
00:35:34 – The Next Shift: Agents, Dynamic Web, and AX: Chris outlines his view of an AI-generated, on-the-fly web and why "agent experience" becomes a critical product frontier.
Resources

DGMG Radio
How to Build a Product Marketing Motion That Works (with Jeff Hardison)

Mar 5, 2026 · 59:08


#335 | Jeff Hardison, now VP of Product Marketing at Sanity, joined Dave when he was running product marketing at Calendly to break down what the product marketing role should actually look like inside a B2B company. They get into how Jeff structured his team to serve both a PLG motion and an enterprise sales team at the same time, why he hires for specialization instead of making everyone a generalist, and how he thinks about measuring a function that touches almost every team in the company. Jeff also shares his take on positioning and messaging, how to run product launches that actually rally the company, and the two interview questions he uses to figure out if someone will be happy in a product marketing role.
Join 50,000 people who get Dave's Newsletter here: https://www.exitfive.com/newsletter
Learn more about Exit Five's private marketing community: https://www.exitfive.com/
***
Brought to you by:
AirOps - The content engineering platform that helps marketers create and maintain high-quality, on-brand content that wins AI search. Go to airops.com/exitfive to start creating content that reflects your expertise, stays true to your brand, and is engineered for performance across human and AI discovery.
Customer.io - An AI-powered customer engagement platform that helps marketers turn first-party data into engaging customer experiences across email, SMS, and push. Learn more at customer.io/exitfive.
Convertr - The enterprise lead data management platform that sits between your lead sources and your CRM, automatically validating, enriching, and standardizing every lead before it touches your systems. Check them out at convertr.io/exitfive.
Compound Growth Marketing - A full-funnel demand generation agency that helps high-growth cybersecurity, DevOps, and enterprise software companies drive more pipeline through AI SEO, paid media, and go-to-market engineering. Visit compoundgrowthmarketing.com and tell them Dave sent you.
***
Thanks to my friends at hatch.fm for producing this episode and handling all of the Exit Five podcast production. They give you unlimited podcast editing and strategy for your B2B podcast. Get unlimited podcast editing and on-demand strategy for one low monthly cost. Just upload your episode, and they take care of the rest. Visit hatch.fm to learn more

Revenue Marketing Realtalk
#115 PLG Case Breakdown: A Scalable Paid Social Engine for a €100M ARR Company

Mar 4, 2026 · 34:37


In this episode, Tim and Matthis analyze a B2B SaaS company that generates €100M ARR almost exclusively through PLG. To grow 20% in 2026, this company wants to develop paid social into a strategically important marketing channel. Tim and Matthis discuss which target audiences, markets, and concrete strategies that requires. Naturally, they also cover what a winning ad strategy looks like for this company. This episode is an absolute must for all B2B SaaS companies that want to tap into paid social or are struggling with the channel. Tune in now!

Product Guru's
How to scale a B2B product without becoming a feature factory

Mar 2, 2026 · 38:42


B2B product is not about selling to a company. It's about selling to multiple decision-makers, with real financial risk, pressure for ROI, and zero room for error. In this episode of the podcast, Paulo Chiodi talks with Ricardo Kremer, CPTO of Solides, an HR tech with more than 45,000 customers and accelerated growth, about the behind-the-scenes work of building, scaling, and leading B2B product in Brazil. You'll understand why "never customize your product" can be one of the most strategic decisions for anyone who wants to grow sustainably. Instead of customization, the path is parameterization, pattern identification, and a focus on real financial pain. We talk about the main differences between B2B and B2C, Product-Led Growth (PLG) in the B2B context, discovery oriented to business impact, a culture of experimentation with AI, and how to structure product teams prepared to scale. If you're a Product Manager, Founder, Head of Product, Tech Lead, or work with B2B SaaS, this episode will help you avoid classic mistakes that stall growth.
Ricardo's LinkedIn: https://www.linkedin.com/in/ricardo-kremer/
Main topics of the episode:
• Real differences between B2B and B2C product
• Why customization destroys scalability
• Parameterization as a product strategy
• ROI and turnover reduction as a value proposition
• PLG in B2B: educate before selling
• Discovery focused on where the customer loses money
• How to balance experimentation and quality in critical environments
• Engine A vs. Engine B: execution vs. innovation
• The ideal PM profile for B2B
• How to foster a culture of testing without compromising customers
Chapters:
00:00 Introduction and Solides context
01:40 Differences between B2B and B2C in practice
06:40 Stakeholder pressure and customer requests
07:30 Parameterization vs. customization
09:50 PLG in B2B and market education
13:40 Discovery focused on financial pain
17:45 How to structure and scale B2B product teams
21:38 Culture of experimentation in critical environments
25:40 Engine A vs. Engine B and portfolio management
30:10 Advice for PMs who want to grow in B2B
33:30 Wrap-up
Important announcements:

Category Visionaries
Why organic referrals drive 80% of Clockwise's growth after a decade of marketing experiments | Matt Martin

Feb 27, 2026 · 26:01


Clockwise is pioneering intelligent time management for knowledge workers, addressing the fundamental constraint that limits all knowledge work organizations: how teams allocate their most finite resource. Founded in 2016, the company has spent a decade solving the problem of calendar inefficiency and meeting overload that fragments productive time. In a recent episode of BUILDERS, we sat down with Matt Martin, Co-Founder & CEO of Clockwise, to learn about the company's journey from a three-year build cycle to serving major software organizations through a product-led growth motion, the strategic decisions behind targeting software engineers as their wedge market, and why the time management problem remains largely unsolved despite being obvious to anyone who's worked in a large organization.
Topics Discussed
Why time remains the primary economic constraint in knowledge work despite a decade of tooling evolution
The three-year pre-launch build period and deliberate four-year path to monetization
Targeting software engineers as the wedge: ROI clarity in heads-down time versus meeting-heavy roles
The graveyard of calendar productivity startups: UI-focused plays, consumer pivots, and buyer/user misalignment
Transitioning from pure PLG to blended motion with enterprise inbound and pilot programs
The stubborn reality of organic growth: why referrals dominate despite extensive channel experimentation
Building toward AI-powered personalized time agents that embrace individual complexity
//
Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
//
Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

Luźno Przy Kawie
#278 - The Belated iPhone Air

Feb 27, 2026 · 56:30


We recorded this episode back on February 11… and we're only publishing it now. Sometimes that's how it goes: too many things at once, too little room for editing. But the content doesn't age that fast, so we're catching up and diving into the topic.
The main star of the episode is the iPhone Air, or rather Jasiek's impressions of using it. Light, very capable, and, most importantly, entirely sufficient for everyday needs. Is it the ideal compromise between power and mobility? We break it down piece by piece.
At the start of the episode, a bit of automotive reality: a visit to the authorized service center. Replacing the oil, filters, and rear wiper can hurt financially… but at least the car came back clean. Something for something. Florence in Kraków: I managed to get tickets at a normal price! Remember how Maciek and I once said they sell out in seconds? This time it worked out.
And a small local note to finish: best wishes to Gdynia and its residents on the city's 100th anniversary. A beautiful jubilee and a good occasion for a moment of reflection.
Thanks for being with us, even if we sometimes publish late. See you in the next episode!
At kawawbiurze.pl: 20% off coffees with the code LPK2026!
Chapters:
00:00:00 – Intro
00:00:40 – Hello!
00:04:30 – Authorized service: what a luxury
00:13:51 – iPhone Air
00:44:43 – I've got tickets!
00:47:02 – Gdynia's 100th anniversary
00:50:39 – Wrapping up
00:56:01 – Outro
Best regards,
Adam and Jan
Instagram: @LuznoPrzyKawie
Twitter: @LuźnoPrzyKawie
Facebook: @LPKpodcast
Cover design: Adam Borodo
Visual identity: High5Studio.pl
Voices: Adam Borodo, Jan Urbanowicz

In Demand: How to Grow Your SaaS to $100K MRR
EP56: The busy founder's guide to activation

Feb 24, 2026 · 56:05


Activation is the most overlooked growth lever in SaaS, especially for PLG-focused companies. While founders obsess over acquisition, pricing, and retention, they often overlook low-hanging fruit with activation. In this episode of In Demand, Asia and Kim break down what activation actually is, why most teams misunderstand it, and how to improve it using a clear, repeatable process. Asia shares why pop-ups and walkthroughs are not a strategy, why survivor bias is distorting your view of product performance, and how as few as three to five UX interviews can unlock growth. If you have a free trial, self-serve motion, or product-led growth model, this episode walks through a practical framework to improve activation.
Got a question you'd like Asia to unpack on the podcast? Record a voicemail here.
Links:
DemandMaven
https://www.userinterviews.com/
Respondent.io
Amplitude
Mixpanel
Chapters
(00:01:00) - Why activation is often an overlooked growth lever in PLG SaaS.
(00:04:05) - What activation actually means and how it connects acquisition and retention.
(00:11:00) - Why pop-ups, overlays, and onboarding walkthroughs aren't working as well anymore.
(00:14:00) - What good trial-to-paid benchmarks look like and why most bootstrappers leave money on the table.
(00:19:45) - The process of improving activation, starting with step one, UX interviews with qualified strangers.
(00:28:05) - What to pay attention to when doing UX interviews.
(00:30:55) - The three levers to improve UX: cognitive overload, uncertainty, and limited attention.
(00:36:50) - What steps to take after making initial improvements.
(00:42:00) - How to think about later-stage activation.
(00:52:45) - Activation starting from your homepage.

Le Backlog
32. What is the role of a Product Manager in 2026? (Léa Brochard, CPTO @Shipup)

Feb 23, 2026 · 85:25


When a company merges CPO and CTO into a single role, it says something about the future of Product.

Renegade Thinkers Unite: #2 Podcast for CMOs & B2B Marketers

Feature-and-function decks aren't winning anymore. In this episode of Renegade Marketers Unite, Drew sits down with Bob Wright (Firebrick) to break down how B2B CMOs can use positioning to drive growth, shorten sales cycles, and stand out in crowded markets. They unpack why product-first stories fail, how to get to "one voice" across the company, and what it really means to own a key business problem that buyers care about.
In this episode:
The three biggest positioning mistakes: product-first thinking, misalignment, and no owned problem
Creating urgency when "do nothing" is the real competitor
Why "why you, why now" matters more than "how it works"
When and how to rethink positioning after PLG, acquisitions, or expansion
How to stand out in a world of AI sameness
Building positions that sales actually uses
If your messaging is drifting into "blah blah blah" territory, this episode will help you reset around problems, not products.
For full show notes and transcripts, visit https://renegademarketing.com/podcasts/
To learn more about CMO Huddles, visit https://cmohuddles.com/

Growthmates
Build an Audience, Then Build a Company | Michael Ridd (Dive.club, InFlight)

Feb 17, 2026 · 55:55


In this episode of Growthmates — The Creator's Path, Kate Syuma speaks with Michael Ridd — founder of Inflight, creator of Dive Club, and former creator of Figma Academy. Michael shares how he intentionally shifted from selling his time to building leverage through distribution.

小人物上籃
小人物上籃-霹靂鍵盤 #214: Monkeys Fans Flying to Okinawa Are True Love; Stone Going to Mongolia Is Truly Bad feat. Luphan 陸大 02/09/2026

Feb 11, 2026 · 128:36


Bored back home for Lunar New Year? Done chatting with relatives and left with nothing but scrolling your phone? Open SUGO and use the "Nearby" feature to find people around you to chat with and hang out! No swiping left or right, no waiting for a match: just go online and start chatting! Someone to keep you company over the holidays ✨ Tap the link to download

B2B SaaS Marketing Snacks
94 - How modern SaaS teams build scalable growth systems - With Alex Laventer

Feb 11, 2026 · 49:05


Are you actually growing your product, or just stacking signups that never turn into usage?
A lot of teams get stuck there. More registrations feel good, but it's not the same as real usage, paid adoption, and a pipeline you can trust. And now with AI in the mix, it's easy to create more activity without getting more signal.
In this episode of B2B SaaS Marketing Snacks, hosts Stijn Hendrikse and Brian Graf bring on their first guest, Alex Laventer. Alex has spent years in growth roles in B2B SaaS, including leading growth at DataStax and now leading go-to-market work on an AI agent product at IBM. The conversation gets practical fast: what "growth" really means, and how teams split (or combine) growth marketing and product growth. You'll walk away with a clearer way to measure growth, how to set up tracking you can rely on, and where AI can help (and where it tends to distract), including lead scoring and workflow automation.
In this episode, you'll learn:
Why signups mislead growth conversations
Where teams lose signal without tracking
How PQLs connect product and marketing
Perspective on sales assist with PLG
Example: AI-assisted lead scoring workflows
By the end, you'll know what to measure, what to ignore, and what to fix next so "growth" stops being a vague label and starts being a real operating system.
Resources shared in this episode:
BSMS 88 - Why founders overestimate PLG, and what VCs should check before investing
BSMS 23 - Product led growth vs. sales led growth
The Foundation of a Successful SaaS GTM (Go-to-Market) Strategy
T2D3 CMO Masterclass
Submit and vote on our podcast topics
ABOUT B2B SAAS MARKETING SNACKS
Since 2020, The B2B SaaS Marketing Snacks Podcast has offered software company founders, investors and leadership a fresh source of insights into building a complete and efficient engine for growth.
Meet our Marketing Snacks Podcast Hosts:
Stijn Hendrikse: Author of T2D3 Masterclass & Book, Founder of Kalungi. As a serial entrepreneur and marketing leader, Stijn has contributed to the success of 20+ startups as a C-level executive, including Chief Revenue Officer of Acumatica, CEO of MightyCall, a SaaS contact center solution, and leading the initial global Go-to-Market for Atera, a B2B SaaS Unicorn. Before focusing on startups, Stijn led global SMB Marketing and B2B Product Marketing for Microsoft's Office platform.
Brian Graf: CEO of Kalungi. As CEO of Kalungi, Brian provides high-level strategy, tactical execution, and business leadership expertise to drive long-term growth for B2B SaaS. Brian has successfully led clients in all aspects of marketing growth, from positioning and messaging to event support, product announcements, and channel-spend optimizations, generating qualified leads and brand awareness for clients while prioritizing ROI. Before Kalungi, Brian worked in television advertising, specializing in business intelligence and campaign optimization, and earned his MBA at the University of Washington's Foster School of Business with a focus in finance and marketing.
Visit Kalungi.com to learn more about growing your B2B SaaS company.

Category Visionaries
How AskElephant achieved 400% growth with zero marketing spend | Woody Klemetson

Feb 11, 2026 · 27:10


Woody Klemetson scaled sales from 100 people at Divi to 350 at Bill.com post-acquisition, then walked away to build something harder: infrastructure for hybrid AI-human revenue teams. At AskElephant, he's tackling the problem that every revenue leader faces but few can articulate—how to actually implement AI in revenue operations when your systems weren't built for it. With zero marketing spend, AskElephant hit 400% growth through pure referral motion and converts 85% of pilots to production (versus single digits industry-wide). Woody breaks down why most "AI-ready" companies aren't, how to structure pilots that actually ship, and what it takes to hire sellers who orchestrate agents instead of relying on armies of support staff.
Topics Discussed:
Post-acquisition culture collision: the cost of moving too fast versus too slow
Why "AI readiness" is usually one person at a company, not the organization
The 27-agent CRM system that delivers 5% forecast accuracy without human input
Revenue outcome systems as category evolution: solving for predictability across disconnected tools
Pilot-first GTM that converts at 85% by starting with one-minute-per-day wins
Partner-led distribution through consultants evolving from slideware to implementation
Hiring ops-minded sellers who code: over half of non-engineers using Cursor daily
The PLG expansion coming in 2025 and why traditional demand gen is getting tested alongside door-to-door GTM
Lessons For B2B Founders:
Culture integration requires explicit deceleration early: Woody's team assumed Bill.com wanted their aggressive startup velocity immediately post-acquisition. They didn't slow down to map cultural differences, causing "whiplash" across 350 people. The specific mistake: not creating space to understand Bill's processes before challenging them. Even when acquired for your approach, the first 90 days should be listening and mapping, not executing. Only after understanding their system can you effectively challenge and merge cultures. This applies whether you're acquiring or being acquired—the cultural work is non-negotiable and front-loaded.
Diagnose AI readiness by system documentation, not enthusiasm: Most companies think they're AI-ready because leadership wants AI. Reality check: if your teams haven't documented their systems and processes, AI has nothing to learn from. AskElephant starts some customers with basic dictation—not because it's revolutionary, but because it's the prerequisite to anything meaningful. The diagnostic question: "Walk us through your current customer journey." If the answer is "we have sales stages," you're not ready for automation. You need documented systems before AI can execute them. Start by having AI observe and document before it acts.
Build agents incrementally to compound context: AskElephant runs 27 different CRM agents that collectively deliver 5% forecast accuracy. This wasn't built in one sprint—it took 40 hours of training and context-building. Each agent handles a specific job: contact creation, data enrichment, ICP scoring, churn monitoring, stage updates. The misconception founders have: AI should work perfectly from the first prompt. The reality: you build agents brick by brick, each one learning from the previous context layer. This is why their forecasting works—because 27 agents watching different signals together create accuracy that one "smart" agent can't.
Pilot conversion at scale requires deliberately small scope: Single-digit pilot-to-production rates happen because teams scope too big. AskElephant's 85% conversion comes from "dream big, implement small." First pilot: automated CRM notes. Then: notes humans wish they'd written. Then: automated field updates. Each step saves minutes, builds trust, proves value. Woody's framework: if you're not saving one minute per person per day in your first pilot, you've scoped wrong. The goal isn't to wow with ambition—it's to ship something that works perfectly, then expand from proven trust. Their customers average 27 hours saved per week per person, but none started there.
Revenue outcome systems emerge from tool sprawl failure: Every revenue leader uses 15-20 disconnected tools trying to make revenue predictable. The category insight isn't "operating systems"—it's that companies care about outcomes, not operations. AskElephant's positioning: we focus on the outcome (predictable revenue), not just the operating infrastructure. This distinction matters because it shifts the conversation from technical plumbing to business results. When creating categories, find the frame that makes the buyer's problem visceral and your solution inevitable, even if you're solving similar problems as others in the space.
Partner-led GTM turns consultants into distribution: AskElephant's entire growth came through partners: Salesforce/HubSpot consultants becoming AI strategists, sales coaches extending from training to implementation. The unlock: these partners needed a way to deliver lasting value beyond slideware. Previously, a coach would train your team and leave. Now they implement AI systems that hold teams accountable to the training, creating longer engagements and better outcomes. For founders: identify services providers whose business model gets dramatically better by incorporating your product. They become your sales force because you make them more valuable to their clients.
Hire for orchestration capability, not pure sales skill: Over half of AskElephant's non-engineering team uses Cursor daily. Woody hires "ops-minded" and "tech-minded" sellers who can manage AI agents alongside human work. The old model: silver-tongued seller + solutions engineer + 27 support people. The new model: one seller orchestrating 27 AI agents. These reps don't build lists, don't create SOWs, don't write product scopes, don't need SEs for demos. But they still need human connection skills—listening, curiosity, presence. The hiring filter: can this person think in systems and implement technical solutions while maintaining high-touch relationships? If they can't code enough to orchestrate agents, they can't scale in this environment.
//
Sponsors:
Front Lines — We help B2B tech companies launch, manage, and grow podcasts that drive demand, awareness, and thought leadership. www.FrontLines.io
The Global Talent Co. — We help tech startups find, vet, hire, pay, and retain amazing marketing talent that costs 50-70% less than the US & Europe. www.GlobalTalent.co
//
Don't Miss: New Podcast Series — How I Hire
Senior GTM leaders share the tactical hiring frameworks they use to build winning revenue teams. Hosted by Andy Mowat, who scaled 4 unicorns from $10M to $100M+ ARR and launched Whispered to help executives find their next role. Subscribe here: https://open.spotify.com/show/53yCHlPfLSMFimtv0riPyM

Hunters and Unicorns
The AI Reality Check: Why Most Startups Won't Survive the Hype with Paul Klein

Feb 11, 2026 · 40:38


Today we tackle the noise surrounding the AI movement with Paul Klein, CEO and Founder of Browserbase. With a career spanning early-stage Twilio to raising $70 million in under two years for his own infrastructure startup, Paul brings much-needed critical thinking to the "AI bubble" debate. We explore the bridge between old-world sales principles and modern, developer-first GTM strategies. Paul breaks down why Product-Led Growth (PLG) should be viewed as a pipeline engine rather than just a revenue machine and explains the power of the "Logo Flywheel" in creating executive FOMO.

Remarkable Marketing
Dune: B2B Marketing Lessons on Finding Value in Unpopular Places with Madhav Bhandari, Head of Marketing at Storylane

Feb 3, 2026 · 46:26


Some of the most powerful ideas in marketing don't come from marketing at all. They come from stories that refuse to play it safe. That's the lesson of Dune, the sci-fi epic once considered unfilmable and now one of the most successful franchises of the decade. In this episode, we break down its marketing lessons with the help of our special guest Madhav Bhandari, Head of Marketing at Storylane. Together, we explore what B2B marketers can learn from world-building, pattern interruptions, and betting on emerging talent.
About our guest, Madhav Bhandari
Madhav Bhandari is the Head of Marketing at Storylane. He's a B2B marketer with 12+ years of experience helping startups grow from scrappy beginnings ($2M+ ARR) to category leadership ($20M+ ARR and beyond). Madhav built lean, high-performing marketing engines across both PLG and sales-led companies. His strength and philosophy is doing marketing that stands out: work that drives action and ties directly to pipeline. Madhav has helped many scale-ups grow beyond $10M ARR, either as a full-time leader or a hands-on advisor, and he loves taking on this challenge.
What B2B Companies Can Learn From Dune:
Show the product, don't narrate it. Madhav's first lesson from Dune is about restraint. The film works because it removes exposition and lets the audience experience the world firsthand. He draws a direct parallel to B2B marketing, saying, "You've seen the B2B website homepages that are just full of jargon. And I think now is the time to actually show the product." Too many B2B teams rely on jargon, stock imagery, and abstract claims, forcing buyers to imagine value. The takeaway is simple: remove the guesswork. Interactive demos, real visuals, and tangible experiences outperform explanations every time. If buyers have to imagine what your product does, you've already added friction.
Go where the work is unpopular but important. In Dune, the most valuable resource in the universe lives in the most unremarkable place. Madhav says, "Unpopular but important projects, that's where the largest customer growth lies." In marketing, that means resisting the pull of flashy homepage redesigns and brand exercises when the real leverage sits deeper: product pages, conversion paths, and the messy parts of the funnel no one wants to own. If everyone wants to work on it, it's probably already optimized. The real upside lives where attention is scarce.
Bet on emerging voices, not just famous ones. Dune didn't rely on a single A-list star to succeed, and Madhav has seen the same dynamic play out in B2B. His experience is clear: "Anytime I've gone with… a very popular influencer… that I interviewed, those episodes didn't really perform the way I thought they would. But what's funny is that the people that are relatively unpopular but have done incredible work are the episodes that did fantastic." Big names feel safe, but they're expensive and often underdeliver. Audiences respond more to sharp thinking and real experience than borrowed fame. In B2B, the fastest way to build trust is to help your audience discover someone worth listening to, before everyone else does.
Quote
"Today, in our world, sameness is risky… The worst that could happen… is it's gonna perform the same as if you would've not done that, and the best case scenario is it's just gonna do insanely well."
Time Stamps
[01:03] Meet Madhav Bhandari, Head of Marketing at Storylane
[01:08] Why Dune?
[01:51] Role of Head of Marketing at Storylane
[02:37] Breaking Down Dune
[10:53] B2B Marketing Takeaways from Dune
[25:18] Influencer Campaign Strategies
[28:28] The Power of Brand Awareness
[31:12] Storylane's Marketing Strategy
[35:08] Creative Marketing Examples
[38:37] Content Strategy and Founder Branding
[45:25] Final Thoughts and Takeaways
Links
Connect with Madhav on LinkedIn
Learn more about Storylane
About Remarkable!
Remarkable! is created by the team at Caspian Studios, the premier B2B Podcast-as-a-Service company.
Caspian creates both nonfiction and fiction series for B2B companies. If you want a fiction series check out our new offering - The Business Thriller - Hollywood style storytelling for B2B. Learn more at CaspianStudios.com. In today's episode, you heard from Ian Faison (CEO of Caspian Studios) and Meredith Gooderham (Head of Production). Remarkable was produced this week by Jess Avellino, mixed by Scott Goodrich, and our theme song is “Solomon” by FALAK. Create something remarkable. Rise above the noise. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Growthmates
Becoming a Creator and Pivoting Your Career | Ioana Teleanu (ex-Miro, UiPath)

Growthmates

Play Episode Listen Later Feb 3, 2026 62:46


In this episode of Growthmates — The Creator's Path, Kate Syuma sits down with Ioana Teleanu for a deeply honest conversation about identity, burnout, and creating in a world shaped by AI.

Ioana shares what happened after years working at UiPath and Miro, building award-winning AI products, teaching AI design, and speaking on global stages — when her professional identity suddenly stopped making sense.

小人物上籃
小人物上籃 – 霹靂鍵盤 #212: Things as Important as Alex Summiting Taipei 101, feat. broadcaster A-Zhe 01/26/2026

小人物上籃

Play Episode Listen Later Jan 28, 2026 157:20


The Marketing Movement | Ignite Your B2B Growth
How B2B Influencer Marketing Actually Works in 2026

The Marketing Movement | Ignite Your B2B Growth

Play Episode Listen Later Jan 27, 2026 59:21


Topics Covered:
- Influencer marketing as a modern demand lever in a “feeds are flooded” environment (credibility + distribution vs polish)
- Building an influencer program as a repeatable system (not one-off posts)
- Aligning influencer strategy to GTM motion: PLG + sales-led dual motion, fast sales cycle, and audience behavior on LinkedIn
- Talent sourcing: internal creators, power users, frontline thought leaders, executive narrative voices, and “entertainer/evangelism” creators
- Using influencer content as paid social creative (thought leadership ads) and deciding what to amplify
- Program mechanics: 3-month trials, post cadence, onboarding, briefs, review cycles, and relationship management
- Incentives tied to outcomes (PLG signup bonus, ARR percentage via UTM)
- Measurement options: cost per signup, CPM/efficient reach, ABM-style reach goals, qualitative signals, and attribution constraints
- Quality control: “smell test” for AI slop, engagement pods, and meaningful comment engagement
- Activation workflow: first-hour engagement, “let it cook” windows, reporting, UTM updates for paid vs organic, and distribution trade-offs

Questions This Video Helps Answer:
- How do you structure B2B influencer marketing so it drives demand (not just awareness) without becoming random acts of promotion?
- How should a B2B team align influencer strategy to GTM motion (PLG vs sales-led) and measurement constraints?
- What's the best place to start: internal creators, power users, or external influencers?
- How do you choose influencer “types” (executive narrative, frontline education, entertainment/evangelism) based on goals?
- What contract length and cadence reduces the risk of declaring influencer “doesn't work” too early?
- How do you turn influencer posts into paid social assets using thought leadership ads?
- What's a practical incentive structure for creators tied to signups and revenue (UTM-based)?
- How do you spot inflated performance from AI-generated engagement or engagement pods?
- When should you promote a post, and when should you leave it organic?
- How can you evaluate influencer impact using CPM, reach, signups, and qualitative sales signals?

Key Takeaways:
- If you want results, avoid one-off influencer posts; start with at least a 3-month trial so performance can compound and audience association can form.
- In crowded feeds, influencer works because it combines trust with distribution; paid amplification (thought leadership ads) can make “small” creators valuable when the story is strong.
- Start sourcing from internal creators and product power users first; they're cheaper, more credible on use cases, and their content can be promoted to the right audience.
- Make onboarding and relationships non-negotiable: demo the product, ideate together, and set a clear review cycle so feedback doesn't show up only as late-stage Google Doc edits.
- Tie incentives to business outcomes and effort: bonus for PLG signups over the contract window, percentage of ARR from UTM-driven revenue, and paid boosts for high-performing posts (which also benefits the creator's audience growth).
- Don't boost everything: let posts run organically first, then selectively promote what's likely to work in paid (not every organic winner is a paid winner).
- Quality control requires human judgment: scan comments and engagement patterns for meaningful conversation vs AI slop, pods, or gamed metrics.
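The outcome-tied incentive structure described above (a retainer, a per-signup bonus, and a share of UTM-attributed ARR) reduces to simple arithmetic. A minimal sketch in Python; all rates and figures below are hypothetical, not from the episode:

```python
def cost_per_signup(total_spend, signups):
    """Program efficiency: influencer spend divided by attributed signups."""
    return total_spend / signups

def creator_payout(flat_fee, signup_bonus, signups, arr_share, utm_arr):
    """Retainer plus per-signup bonus plus a cut of UTM-attributed ARR."""
    return flat_fee + signup_bonus * signups + arr_share * utm_arr

# Hypothetical 3-month trial: $6,000 retainer, $10 per PLG signup, 5% of UTM-tracked ARR
print(creator_payout(6_000, 10, 300, 0.05, 40_000))  # total owed to the creator
print(cost_per_signup(6_000, 300))                   # retainer-only cost per signup
```

Tracking both numbers over the trial window makes it easy to compare creators against a paid-social cost-per-signup baseline before renewing.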

CHURN.FM
E300 | Building Retention into Your DNA: Matthew Tharp on Churn Signals, ICP & Cold Email

CHURN.FM

Play Episode Listen Later Jan 23, 2026 38:58


Today on the show, we have Matthew Tharp, CEO of Hunter.io, the all-in-one email outreach platform used by over 4 million people to identify prospects and run cold email campaigns. Previously, Matthew was VP of Worldwide Retention at LogMeIn, where he owned NRR across nine products—giving him a rare masterclass in retention challenges at different stages and scales.

In this episode, we uncover why retention isn't a problem you solve when growth stalls—it's DNA you build from day one. Matthew shares the paradox of his career: building a company with 95%+ annual retention that got acquired, versus joining a high-growth PLG business with churn issues that needed solving before scaling further.

We explore why over-indexing on either growth or retention creates problems, how to identify the usage patterns that predict churn in the first three weeks, and why every company that tries to fix retention late struggles. The lesson: balance from the beginning beats transformation later.

We also discuss how Hunter achieved 3X growth this year by going back to basics—running a rigorous ICP analysis, choosing battles they could win instead of markets where competitors were spending $100M, and layering new customer segments without creating product bloat.

Finally, we dig into cold outreach data: why email lists under 100 people dramatically outperform larger ones, why shorter emails force the clarity that drives replies, and how constraints—not scale—are the real performance lever in outbound.

As always, I'd love to hear from you. You can email me directly at andrew@churn.fm, and don't forget to follow us on X.

Churn FM is sponsored by Vitally, the all-in-one Customer Success Platform.

The Product Market Fit Show
He got 100k signups in 30 days. They all churned. 2 years later, he hit $10M ARR. | Rich White, Founder of Fathom

The Product Market Fit Show

Play Episode Listen Later Jan 22, 2026 42:32 Transcription Available


In this episode, Rich breaks down the wild story of Fathom's launch. He reveals how they secured a prime spot on the Zoom Marketplace and generated 100,000 signups in 30 days—only to realize 99.9% of them were useless. He discusses the pivot to monetization when the market crashed, how to design a product for viral loops, and why staying in private beta for 10 months was the best decision he ever made.

Why You Should Listen:
- Why getting 100,000 signups in a single month nearly killed the company.
- How to use the "Iceberg Strategy" to build a defensible moat.
- Why you should attack the "800-pound gorilla" incumbent.
- How to hit $100k ARR by selling a roadmap that doesn't exist yet.
- The "Visible Feature" mechanic that drives zero-cost viral growth in B2B.

Keywords: startup podcast, startup podcast for founders, viral growth, product market fit, AI startup, freemium strategy, Zoom marketplace, PLG, B2B sales, Fathom

00:00:00 Intro
00:03:14 Why Sales Reps Hated Gong
00:07:54 Betting on Transcription Costs Going to Zero
00:11:52 The 10 Month Private Beta Strategy
00:17:46 The Zoom Marketplace Launch
00:19:52 100k Signups and Zero Growth
00:26:39 Selling a Roadmap to Hit 100k ARR
00:33:53 The Viral Loop of Visible Bots
00:36:12 Why Enterprise Sales Was a Trap
00:39:51 The Moment of True Product Market Fit

Send me a message to let me know what you think!

Ground Up
178: Activation Is Broken: Why Most SaaS Teams Get It Wrong (and How to Fix It)

Ground Up

Play Episode Listen Later Jan 21, 2026 25:39


Databox is an easy-to-use Analytics Platform for growing businesses. We make it easy to centralize and view your entire company's marketing, sales, revenue, and product data in one place, so you always know how you're performing.

Learn More About Databox
Subscribe to our newsletter for episode summaries, benchmark data, and more

Rodrigo Fernandez has helped 400+ SaaS companies drive over $1B in self-serve revenue, and he's seen one problem kill growth over and over again: no one truly owns activation.

In this episode, Rodrigo breaks down:
- Why “activation” is almost always misdefined (and who should actually define it)
- How teams confuse activity with value — and what to track instead
- The fatal flaws in bottom-up metrics and AI gimmicks
- What a real product activation journey looks like (solar system analogy and all)
- Why most PLG stacks are noisy, bloated, and doomed from the start

If you're stuck at $10M and can't see a path to $20M, this might be why.

Growthmates
Why Self-Made Generalists Will Shape the Next Big Thing | Nad Chishtie, Lovable

Growthmates

Play Episode Listen Later Jan 20, 2026 65:44


In the first episode of the new Growthmates season — The Creator's Path, Kate Syuma sits down with Nad Chishtie — Head of Design at Lovable, founding designer, and former game developer who has built products used by over 50 million people across Asia.

Nad shares his unconventional path — from dropping out of university during the early days of Gmail, to building games, products, and AI-powered tools that lower the barrier to creation. They discuss why rigid systems don't work for curious builders, how being a generalist became an advantage, and why perfectionism holds creators back.

This conversation explores AI as a creative playground, not just a productivity tool — and what the future of digital creation looks like when anyone can build without permission.

Listen now on Apple, Spotify, and YouTube: https://www.youtube.com/watch?v=SGntQ4Bz9QM&t=264s

Luźno Przy Kawie
#275 - Clicks Communicator

Luźno Przy Kawie

Play Episode Listen Later Jan 16, 2026 57:36


New year, new episode! Jasiek and I are back after the holiday break – with a bit of relaxed winter energy and fresh tech topics to kick off the season. In this episode:

SaaS Metrics School
Where is Your Cost of ARR Trending This Year?

SaaS Metrics School

Play Episode Listen Later Jan 8, 2026 5:15


In episode #342 of SaaS Metrics School, Ben breaks down the Cost of ARR metric and explains why it's one of the most practical and revealing go-to-market efficiency metrics for 2026 planning. He covers where the metric originated, how to calculate it correctly, and how to use it to sanity-check forecasts and budgets. Ben walks through the three variations of Cost of ARR (blended, new, and expansion), explains why bookings data—not revenue—is required, and shows how benchmarking by ACV provides far more insight than aggregate benchmarks.

Resources Mentioned:
- Benchmarkit.ai for SaaS metrics benchmarks
- Cost of ARR framework: https://www.thesaascfo.com/saas-cac-ratio/
- SaaS Metrics Course: https://www.thesaasacademy.com/the-saas-metrics-foundation

What You'll Learn:
- What the Cost of ARR metric is and why it matters for SaaS and AI companies
- The difference between blended, new, and expansion Cost of ARR
- Why Cost of ARR must be based on bookings, not revenue
- How improper CAC allocation distorts Cost of ARR results
- How to use Cost of ARR to validate 2026 forecasts and budgets
- Why benchmarking by ACV size is more accurate than company size
- What top-quartile Cost of ARR performance looks like across ACV ranges

Why It Matters:
- Cost of ARR quickly exposes unrealistic bookings forecasts
- It connects sales and marketing spend directly to ARR outcomes
- The metric helps right-size go-to-market investment for 2026
- ACV-based benchmarks prevent misleading efficiency comparisons
- Tracking trends over time highlights improving or degrading efficiency
- Cost of ARR works across PLG, sales-led, SaaS, and AI models
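The blended, new, and expansion variations the episode describes are the same ratio applied to different slices of spend and bookings: sales and marketing expense divided by ARR booked in the period. A minimal sketch; the quarter's figures and the split of S&M expense between motions are hypothetical, not from the episode:

```python
def cost_of_arr(sales_marketing_expense, arr_bookings):
    """Dollars of S&M spend per dollar of ARR booked (lower is more efficient)."""
    return sales_marketing_expense / arr_bookings

# Hypothetical quarter, with S&M expense allocated between new-logo and expansion motions
new_arr, expansion_arr = 800_000, 400_000
sm_new, sm_expansion = 1_000_000, 200_000

blended = cost_of_arr(sm_new + sm_expansion, new_arr + expansion_arr)
new = cost_of_arr(sm_new, new_arr)
expansion = cost_of_arr(sm_expansion, expansion_arr)

print(f"blended: {blended:.2f}, new: {new:.2f}, expansion: {expansion:.2f}")
# blended: 1.00, new: 1.25, expansion: 0.50
```

The split matters: a blended 1.00 here hides a new-logo motion that costs $1.25 per booked dollar, which is exactly the kind of distortion the episode warns about when CAC allocation is sloppy.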

Product-Led Podcast
Disrupting a Red Ocean: Clarify.ai's Strategy to Beat Salesforce and HubSpot

Product-Led Podcast

Play Episode Listen Later Jan 6, 2026 39:23


Most founders are terrified of "Red Oceans" or markets saturated with massive competitors. They think the only way to win is to find a completely untapped "Blue Ocean." In this episode of the ProductLed 100 series, Wes Bush sits down with Patrick Thompson (CEO of Clarify.ai) and Esben Friis-Jensen (Co-Founder of Userflow) to discuss why entering a crowded market is actually the smartest move a founder can make if you have the right strategy.

Patrick reveals how he spent six months interviewing potential customers before writing a single line of code for Clarify, an autonomous CRM designed to disrupt the industry giants. Together with Esben, they break down the exact framework for validating problems, the power of business model disruption through pricing wars, and why "feature parity" is not the goal. Whether you are building a new startup or trying to carve out space in a competitive category, this episode offers a masterclass in customer discovery, positioning, and Go-To-Market execution.

Key Highlights:
[02:15] Why Patrick spent 6 months on discovery before writing a line of code
[06:53] The "Red Ocean" Advantage: Why crowded markets are easier than Blue Oceans
[10:10] How to differentiate when features are commoditized
[12:34] Using price and ease of use as a wedge against incumbents
[18:31] The 3-Step Framework for building what people want: ICP, Channels, and Business Model
[23:12] Which acquisition channels actually work (Product Hunt vs. Founder-led Marketing)
[30:04] Why complex products still need human onboarding, even in PLG
[36:49] How to operationalize customer feedback for engineering teams

Resources:

EUVC
E676 | Poone Mokari, ewake.ai & Pietro Bezza, Connect Ventures: Building the AI Teammate for Software Reliability

EUVC

Play Episode Listen Later Jan 6, 2026 41:14


Welcome back to the EUVC Podcast, where we dive deep into the craft of building and backing venture-scale companies in Europe.

Modern software doesn't fail quietly. It fails on Black Friday. It fails while the CFO is in a board meeting. It fails when your biggest customer is mid-way through a critical workflow. And when it does, there's one brutal reality: the data is there, but nobody has time to interpret it.

Today we're exploring one of the most under-discussed yet mission-critical parts of building modern software: reliability in production.

Joining Andreas are:

Topline
He Built a SaaS Monster with $1 Million (And Refused to Raise More)

Topline

Play Episode Listen Later Jan 4, 2026 38:53


Wade Foster (CEO) built Zapier into a profitable powerhouse without traditional VC funding—just $1M post-YC, then profitable ever since. On this episode, the co-founder and CEO shares how that capital discipline shaped their ability to pivot hard when AI hit.

Wade also dishes on:
- The GPT-4 moment that shifted Zapier's roadmap
- A tested formula for AI agents that actually work
- How to incentivize internal AI adoption

Thanks for tuning in! Catch new episodes every Sunday. Subscribe to Topline Newsletter. Tune into Topline Podcast, the #1 podcast for founders, operators, and investors in B2B tech. Join the free Topline Slack channel to connect with 600+ revenue leaders to keep the conversation going beyond the podcast!

Chapters:
00:00 Introduction: Wade Foster and the Age of Agents
02:38 Zapier's Origin: Solving the SaaS Integration Problem
04:14 From Zaps to Agents: The Evolution of Automation
05:07 How GPT-4 Changed Zapier's Internal Strategy
06:43 Unstructured Data and the Rise of Vibe Building
09:56 Why Long-Term Product Roadmaps Are Now Obsolete
13:00 Transitioning from PLG to Enterprise Amidst Competition
17:58 What Actually Works: Defining Successful Agentic Workflows
20:59 Building an AI-Literate Company Culture
26:18 Future Outlook: AI Bubbles vs. Product Reality
27:38 Navigating Board Expectations During Technology Shifts
30:23 Zapier's Capital Efficiency and Fundraising History
33:58 Founder Advice: Prioritizing Long-Term Thinking

The RevOps Review
First Principles GTM: Scaling PLG, Sales-Assist, and the AI-Powered Revenue Machine with Gaurav Agarwal

The RevOps Review

Play Episode Listen Later Jan 2, 2026 24:45


Jeff sits down with Gaurav Agarwal to unpack how first principles thinking helps leaders build repeatable growth without falling back on stale playbooks. They dig into the mechanics of a revenue machine (generate demand, close demand, grow customers) and how ClickUp has evolved from pure PLG to sales-assist and into true sales-led growth.

Gaurav also shares a sharp POV on AI agents: where they drive real productivity, why “more output” can create misalignment and “slop,” and what operators must do to keep teams (and agents) pulling in the same direction. If you're navigating GTM strategy, annual planning, or the AI era of execution, this one's packed with frameworks you'll actually use.

Marketing Trends
The CMO Who Never Becomes Obsolete

Marketing Trends

Play Episode Listen Later Dec 17, 2025 53:20


The most future-ready marketing leaders aren't the ones chasing trends… they're the ones who can reinvent themselves every time the industry changes.

Michelle Huff, Chief Marketing Officer at Alteryx, joins Marketing Trends to break down the mindset that kept her relevant through every major tech revolution, from Web1 to cloud, SaaS, PLG, and now AI. She explains how to balance curiosity with focus, why AI is really about automating judgment (not just tasks), and how she's redesigning her marketing org around agents, automation, and new workflows.

Michelle also shares early results from Alteryx's AI experiments, how she's rebuilding a 700,000-person community, and why great leaders still start with the end user even as their buyer audiences expand.

Key Moments:
00:00 – How to Stay Relevant Through Every Tech Shift
03:42 – A Career Spanning Web1, Cloud, SaaS, and AI
06:58 – Curiosity Is the Ultimate Career Advantage
10:12 – When Leaders Should Tinker and When to Delegate
13:28 – Building a Marketing Culture That Experiments
16:41 – Why AI Is About Judgment, Not Just Automation
20:07 – Inside an AI-Powered SDR Outbound Workflow
23:34 – Do AI Agents Replace People or Elevate Them
26:58 – Upskilling Teams in an AI-Driven Organization
30:17 – Why Most AI Content Fails to Break Through
33:36 – How to Stand Out in a Noisy B2B Market
36:52 – Why Enterprise Brands Lose Touch With End Users
39:48 – How Alteryx Built a 700,000-Person Community
43:06 – Turning Community Into Competition and Learning
46:32 – Early AI Wins That Drive Real Pipeline Impact

This episode is brought to you by Lightricks. LTX is the all-in-one creative suite for AI-driven video production; built by Lightricks to take you from idea to final 4K render in one streamlined workspace. Powered by LTX-2, our next-generation creative engine, LTX lets you move faster, collaborate seamlessly, and deliver studio-quality results without compromise.
Try it today at ltx.studio. Mission.org is a media studio producing content alongside world-class clients. Learn more at mission.org.

Grow Your B2B SaaS
S7E21 - How AI Will Rewrite SaaS GTM in 2026: Pricing, Efficiency & Sales Automation with Jacco van der Kooij

Grow Your B2B SaaS

Play Episode Listen Later Dec 16, 2025 20:25


In this episode of the Grow Your B2B SaaS podcast, host Joran welcomes back Jacco van der Kooij, founder of Winning by Design, to unpack how AI-native SaaS companies are changing the rules of growth, pricing, and go-to-market in 2026. The conversation covers why real-time user-level data is becoming the defining competitive advantage, the pitfalls and promise of usage-based pricing for AI products, the existential challenge of inference costs for freemium models, and the enduring importance of subscriptions with smart hybrid elements. It also dives into how AI will replace the majority of sales tasks, the 30 percent of human expertise that remains essential, and why advocacy and community-driven growth loops will shape pipeline generation. From early-stage foundations to scaling to $10 million ARR, Jacco breaks down what founders need to get right now to thrive in the years ahead.

Key Timecodes:
(0:00) - B2B SaaS podcast intro, AI native SaaS, pricing, GTM strategy 2026
(1:01) - Jacco van der Kooij intro, Winning by Design
(1:14) - 2026 success factors: real-time data, PLG, cohort analytics
(2:31) - AI native buyer journey, user-led growth, usage patterns
(3:48) - SaaS pricing: usage-based vs subscription, outcome-based pricing
(4:23) - AI inference costs, freemium risk, monetization challenges
(5:05) - Freemium in AI tools, limits, value gating
(5:23) - Consumption-based pricing vs subscription, hybrid pricing
(6:12) - Hybrid pricing example, membership + per-resolution fees
(7:03) - Efficient growth, GTM efficiency, LTV:CAC, retention, outcomes
(8:36) - AI for customer insights, demand gen, lookalike users
(9:36) - Ad: B2B SaaS affiliate referral platform, AI-powered recruitment
(9:47) - AI and jobs: replace vs enable, workforce impact
(11:19) - GTM with AI: 70% sales tasks automated, CRM, scheduling, summaries
(12:56) - Trust, human expertise, advocacy, risk mitigation
(13:59) - Rebuilding GTM 2026: automation, expert touchpoints, events
(15:00) - Growth loop: usage patterns, word of mouth, advocacy pipeline
(16:26) - Community-led growth: user conferences, LinkedIn sharing, Clay example
(17:02) - SDR strategy: activate users, customer success advocacy
(17:11) - Early-stage advice: real-time data system, analytics
(17:25) - Data stack recommendation: Snowflake, realtime data lake
(17:32) - Scaling to $10M ARR: team alignment, closed-loop GTM
(18:04) - Shared system understanding: recurring revenue, training
(19:01) - Growth Institute by Winning by Design: courses, community, case studies
(19:39) - Where to find: winningbydesign.com, Growth Institute
(19:45) - Closing thoughts, optimism, AI era
(19:54) - Outro: like, subscribe, sponsor, guest/topic requests
(20:17) - Reditus mention, B2B SaaS affiliate program
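The hybrid pricing example Jacco mentions (a membership fee plus per-resolution charges) can be sketched as a simple billing function. The prices and allowance below are hypothetical, chosen only to illustrate the mechanic:

```python
def hybrid_bill(base_fee, included_units, unit_price, units_used):
    """Monthly bill: flat membership plus a per-unit fee beyond the included allowance."""
    overage = max(0, units_used - included_units)
    return base_fee + overage * unit_price

# Hypothetical AI support product: $500/month membership, 100 resolutions included, $2 per extra
print(hybrid_bill(500, 100, 2.0, 80))   # light month: just the membership fee
print(hybrid_bill(500, 100, 2.0, 350))  # heavy month: membership plus 250 overage resolutions
```

The base fee keeps revenue predictable (the subscription element), while the overage term ties spend to value delivered, which is the balance the episode argues for against pure consumption pricing.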

EUVC
Matthew Wilson (Jack & Jill) & Peter Specht (Creandum): AI Recruiting Agents, a $20M Seed & the New GTM Playbook

EUVC

Play Episode Listen Later Dec 16, 2025 49:52


This week on the EUVC Podcast, Andreas Munk Holm sits down with Matthew Wilson, co-founder of Jack & Jill, and Peter Specht, General Partner at Creandum. Fresh off a $20M seed to take their AI recruiting agents global, they dig into how conviction is built in Europe, from founding insight to investor belief, and what it now takes to scale an agent-native company with speed, precision, and craft.

Jack helps candidates find and optimize their careers. Jill helps companies hire brilliantly. Together, the two agents form a high-signal, two-sided network that aims to become the world's most networked AI-powered recruitment agency — without the classical incentive conflicts of human middlemen.

Here's what's covered:
02:35 | Why Creandum leaned in: conviction on voice-based interfaces and why recruiting is a massive, broken vertical for agent AI
03:38 | The founding moment: leaving Omnipresent, 18 months in the wilderness, and the February insight that agents make talent marketplaces finally viable
07:07 | Recruiting is broken (and AI made it worse): why first-principles thinking is needed to avoid “more noise, not more signal”
09:15 | Investor conviction: founder/market fit, why this moment is different, and the defensibility of a two-sided agentic marketplace
12:22 | The user experience: the “coffee chat” with an AI recruiter: deep voice conversation → matching, prep, coaching, introductions
16:30 | Solving the incentives trap: why Jack works 100% for candidates and Jill works 100% for companies (fixing agency conflicts)
19:10 | Coaching as core: how AI unlocks career guidance, interview prep, and hands-on support that humans rarely get today
22:47 | Building fast in the AI era: talent density, global expansion, and why a 20M seed makes sense for a dual-product marketplace
26:35 | Two companies in one: scaling Jack (consumer) + Jill (B2B) simultaneously, across markets, with AI leverage
34:02 | The GTM playbook: engineering-led marketing, AI-driven creative testing, instant value, and rethinking B2B buying entirely
37:47 | The new AI go-to-market: speed, PLG dominance, virality-by-design, and why distribution now matters more than ever
43:52 | Two GTM worlds: viral AI products vs. slow, enterprise-heavy AI deployments (and why both will coexist)
47:15 | The “productization” of marketing — why engineering now powers growth, not headcount-heavy marketing orgs
50:29 | Final advice (VC POV) — start with a unique insight, not a trend; think in 5–10 year arcs, not quick ARR bumps

The Effortless Podcast
The Structured vs. Unstructured Debate in Business Software - Episode 20: The Effortless Podcast

The Effortless Podcast

Play Episode Listen Later Dec 15, 2025 82:29


In this episode of The Effortless Podcast, Amit Prakash and Dheeraj Pandey dive deep into one of the most important shifts happening in AI today: the convergence of structured and unstructured data, interfaces, and systems.

Together, they unpack how conversations—not CRM fields—hold the real ground truth; why schemas still matter in an AI-driven world; and how agents can evolve into true managers, coaches, and chiefs of staff for revenue teams. They explore the cognitive science behind visual vs conversational UI, the future of dynamically generated interfaces, and the product depth required to build enduring AI-native software.

Amit and Dheeraj break down the tension between deterministic and probabilistic systems, the limits of prompt-driven workflows, and why the future of enterprise AI is “both-and” rather than “either-or.” It's a masterclass in modern product, data design, and the psychology of building intelligent tools.

Key Topics & Timestamps:
00:00 – Introduction
02:00 – Why conversations—not CRM fields—hold real ground truth
05:00 – Reps as labelers and the parallels with AI training pipelines
08:00 – Business logic vs world models: defining meaning inside enterprises
11:00 – Prompts flatten nuance; schemas restore structure
14:00 – SQL schemas as the true model of a business
17:00 – CRM overload and the friction of rigid data entry
20:00 – AI agents that debrief and infer fields dynamically
23:00 – Capturing qualitative signals: champions, pain, intent
26:00 – Multi-source context: transcripts, email threads, Slack
29:00 – Why structure is required for math, aggregation, forecasting
32:00 – Aggregating unstructured data to reveal organizational issues
35:00 – Labels, classification, and the limits of LLM-only workflows
38:00 – Deterministic (SQL/Python) vs probabilistic (LLMs) systems
41:00 – Transitional workflows: humans + AI field entry
44:00 – Trust issues and the confusion of the early AI market
47:00 – Avoiding “Clippy moments” in agent design
50:00 – Latency, voice UX, and expectations for responsiveness
53:00 – Human-machine interface for SDRs vs senior reps
56:00 – Structured vs unstructured UI: cognitive science insights
59:00 – Charts vs paragraphs: parallel vs sequential processing
1:02:00 – The “Indian thali” dashboard problem and dynamic UI
1:05:00 – Exploration modes, drill-downs, and empty prompts
1:08:00 – Dynamic leaves, static trunk: designing hierarchy
1:11:00 – Both-and thinking: voice + visual, structured + unstructured
1:14:00 – Why “good enough” AI fails without deep product
1:17:00 – PLG, SLG, data access, and trust barriers
1:20:00 – Closing reflections and the future of AI-native software

Hosts:
Amit Prakash – CEO and Founder at AmpUp, former engineer at Google AdSense and Microsoft Bing, with extensive expertise in distributed systems and machine learning
Dheeraj Pandey – Co-founder and CEO at DevRev, former Co-founder & CEO of Nutanix. A tech visionary with a deep interest in AI, systems, and the future of work.

Follow the Hosts:
Amit Prakash: LinkedIn – Amit Prakash | LinkedIn; Twitter/X – https://x.com/amitp42
Dheeraj Pandey: LinkedIn – Dheeraj Pandey | LinkedIn; Twitter/X – https://x.com/dheeraj

Share your thoughts: Have questions, comments, or ideas for future episodes? Email us at EffortlessPodcastHQ@gmail.com

Don't forget to Like, Comment, and Subscribe for more conversations at the intersection of AI, technology, and innovation.

Revenue Builders
Comp Plans for Consumption-based Businesses

Revenue Builders

Play Episode Listen Later Dec 14, 2025 10:39


In this short segment of the Revenue Builders Podcast, we revisit the discussion with Jose Fernandez — former Head of Global Sales Development at Google and now CEO of Easy Comp — who breaks down how compensation must evolve when companies shift from traditional SaaS licensing to consumption-based models. Drawing from his experience at Google Ads, one of the most successful consumption engines in business history, Jose lays out the structural advantages of consumption models and how GTM, onboarding, forecasting, and comp plans must align to unlock growth.

John McMahon and John Kaplan then expand on how consumption changes seller behavior, deal sizing, renewal dynamics, forecast accuracy, and quota mechanics. This is a must-listen for revenue leaders, sellers, and anyone navigating the industry-wide shift toward usage-based pricing.

KEY TAKEAWAYS
[00:00:46] Companies transitioning to consumption models often copy SaaS licensing structures instead of designing comp that amplifies consumption-driven advantages.
[00:01:34] Three core advantages of consumption models: lower barrier to entry, value-aligned spend increases, and product-led expansion.
[00:03:07] Aligning GTM roles — new business, onboarding, and account management — enables scale and fairness in comp.
[00:03:57] Forecasting in consumption models becomes an analytical discipline, requiring predictive models rather than rep intuition.
[00:05:00] High-quality customer fit at acquisition can result in massive upside — one rep earned huge commission from a $15M three-month advertiser.
[00:07:02] In consumption, churn can happen in a week — sellers must ensure rapid value realization, not just contract signing.
[00:08:00] Sellers often intentionally downsize initial deals to ensure burn-down and protect compensation.
[00:08:59] PLG and sales-assisted models blend; comp must account for small initial usage that grows rapidly.
[00:09:48] Companies balance advance payments to reps with clawbacks to protect against churn.
[00:10:10] Smart sellers can land small, prove value, and convert usage to multi-year, high-value commitments.

QUOTES
[00:01:10] “Companies take too much inspiration from the old model instead of designing comp that amplifies the advantages of consumption.”
[00:01:56] “Customer spend is directly proportional to the value they get — and their understanding of that value.”
[00:02:19] “If you have an amazing product, some of that growth is going to be product-led, regardless of the sales team.”
[00:03:57] “Forecasting in a consumption model is an analytical exercise — not something you ask an account executive to guess.”
[00:07:54] “In consumption, a customer can use it for a week, turn it off like a light switch, and move on.”
[00:08:38] “PLG might start with $500 on a credit card and scale into a major enterprise deal.”
[00:09:28] “Sometimes comp gives future credit for usage trajectory — but companies will claw it back if churn happens.”
[00:10:33] “There's a lot of gold in this full episode — make sure you check it out.”

Listen to the full conversation through the link below.
https://revenue-builders.simplecast.com/episodes/driving-sales-behavior-with-effective-compensation-plans-with-jose-fernandez

Enjoying the podcast? Sign up to receive new episodes straight to your inbox: https://hubs.li/Q02R10xN0

Check out John McMahon's book here:
Amazon Link: https://a.co/d/1K7DDC4

Check out Force Management's Ascender platform here: https://my.ascender.co/Ascender/
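The advance-plus-clawback mechanic from takeaway [00:09:48] is easy to express directly. A minimal sketch with a hypothetical 10% commission rate; real plans layer accelerators, thresholds, and review cadences on top:

```python
def commission_with_clawback(rate, booked_arr, realized_arr):
    """Advance commission on booked ARR, then an adjustment once actual usage is known.

    Returns (advance paid up front, adjustment at review); a negative
    adjustment is a clawback for booked ARR that never burned down into usage.
    """
    advance = rate * booked_arr
    earned = rate * min(booked_arr, realized_arr)
    return advance, earned - advance

# Hypothetical: 10% rate on a $200k consumption commitment that burns down to only $120k
paid, adjustment = commission_with_clawback(0.10, 200_000, 120_000)
print(paid, adjustment)  # advance up front, then the clawback at review
```

Capping `earned` at the booked amount means upside from over-consumption would be handled separately, which mirrors why sellers in the episode prefer to land small and let usage growth drive the next commitment.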

The aSaaSins Podcast
From PLG to Enterprise: Tyler Will on Building Modern GTM at Intercom

The aSaaSins Podcast

Play Episode Listen Later Dec 12, 2025 25:09


In this episode of the Thread Podcast, Justin talks with Tyler Will, VP of GTM Strategy & Ops at Intercom, about how modern revenue organizations are evolving in an era defined by AI, PLG-to-enterprise transitions, and go-to-market speed.

Tyler shares his journey from economic consulting and Bain, to GTM leadership at LinkedIn, to now scaling RevOps at Intercom. He breaks down the key differences between operating at a 20,000-person giant and a high-velocity SaaS company, why balancing PLG and enterprise sales motions requires intentional system and process design, and how Intercom rebuilt its routing, sales assist, and pricing guardrails to accelerate ACVs and bring clarity back to the customer journey.

The conversation digs into how AI is reshaping selling—not by replacing reps, but by giving them time back. From auto-generating QBR decks to enriching data behind the scenes, Tyler explains why AI actually makes sales more human, not less. He also shares why the next generation of RevOps talent will shift from narrow specialists to curious generalists who leverage AI, understand the full GTM workflow, and act as true co-owners of the business.

This is a high-signal episode for anyone thinking about PLG evolution, GTM design, AI-powered sales, and how RevOps must evolve to meet the moment.

Chapters
00:00 — Intro + Tyler's Background: Justin sets up the episode; Tyler shares his path from consulting and Bain to LinkedIn to Intercom.
02:00 — Early Career Lessons: From Consulting to GTM: How economic consulting and strategy work shaped Tyler's analytical and leadership approach.
03:30 — Operating at Scale: LinkedIn vs. Intercom: Why large enterprise GTM is committee-driven, and how smaller SaaS companies require speed, adaptability, and influence without authority.
06:00 — PLG, Sales-Led, and the Middle Ground: How Intercom balances self-serve PLG customers with enterprise sales—and why a “Sales Assist” motion has become critical.
08:30 — Redesigning Routing, Guardrails & ACV Growth: How simplifying and separating motions helped Intercom lift sales-led logos and drive higher ACVs.
10:45 — AI as an Amplifier, Not a Replacement: Why AI frees reps from low-value tasks (QBR decks, data cleanup) and makes room for more human selling.
13:20 — The Real Risk: Overvaluing Human Busywork: Why reps aren't losing points for doing things manually—and why AI should elevate the conversation, not eliminate the human.
15:00 — The Future of RevOps Careers: Why RevOps is shifting from specialists to generalists who use AI, understand systems, and act like business owners.
18:00 — What RevOps Leaders Should Learn Next: Tyler's advice to aspiring operators—how to become more valuable by being curious across the entire GTM ecosystem.
19:30 — Closing Thoughts + Intercom Hiring: Tyler encourages RevOps pros to embrace the field and shape the future; Justin wraps the conversation.

The Product Market Fit Show
He killed a viral app with 50k users. 2 years later, he hit $10M ARR and raised $30M from Sequoia. | David Paffenholz (Juicebox)

The Product Market Fit Show

Play Episode Listen Later Dec 11, 2025 51:58 Transcription Available


David had a consumer app with 50,000 users and viral traction—and he shut it down. The retention metrics weren't as good as what he'd seen at Snapchat. That difficult decision cleared the path for Juicebox, AI for recruiting that grew to $10M ARR in 2 years.

In this episode, David reveals how he pivoted to AI recruiting, generated millions of views with a simple LinkedIn demo, and ground through months of brutal churn to unlock 10x growth. If you want to know how to execute a flawless PLG strategy, run a hyper-lean team, and secure a $30M Series A from Sequoia, this is the blueprint.

Why You Should Listen
- Why you should kill some products even if they're going viral.
- How to launch a B2B product with zero budget.
- The "manual" playbook for fixing high churn.
- Why you should keep your team under 25 people even after raising millions.
- How to land an inbound term sheet from Sequoia.

Keywords
startup podcast, startup podcast for founders, product market fit, finding pmf, PLG strategy, viral marketing, pivoting, AI recruiting, Series A fundraising, Sequoia Capital

00:00:00 Intro
00:03:15 Learning Growth at Snap
00:13:01 Killing a Viral App with 50k Users
00:20:34 The 90 Second LinkedIn Video That Launched Juicebox
00:26:21 Fixing High Churn with Manual Work
00:33:04 Why B2B Products Only Need to be Marginally Better
00:42:27 Scaling to $10M ARR with Founder Led Sales
00:47:40 Raising a $30M Series A from Sequoia
00:50:12 The Moment of True Product Market Fit

Send me a message to let me know what you think!

小人物上籃
小人物上籃-霹靂鍵盤#205 雲豹贏不停,超有梗小編該加薪了嗎?! feat.雲豹社群編輯Kevin 12/08/2025

小人物上籃

Play Episode Listen Later Dec 10, 2025 168:36


First "Corporate Ocean Sustainability Contribution Award" honors ocean guardians. [Businesses Supporting the Ocean] To channel more corporate resources into marine conservation and open a path to ocean ESG, the Ocean Affairs Council has established the first Corporate Ocean Sustainability Contribution Award, recognizing companies on the front line of marine protection. Ocean Affairs Council official Facebook: https://fstry.pse.is/8ecdmf (The above is an advertisement from the Ocean Affairs Council.) (The above is a Firstory Podcast advertisement.)

The Product Market Fit Show
Her VCs said she killed the company. 6 years later, it's worth $1.3B. | Jennifer Smith, Founder of Scribe

The Product Market Fit Show

Play Episode Listen Later Dec 8, 2025 65:09 Transcription Available


Jennifer went from VC to founder and immediately broke every rule in the book. When she pivoted Scribe from an automation tool to a documentation platform, her investors told her she had just killed the company. She ignored them. Instead of polishing her product, she launched a "janky" offline MVP on Product Hunt to test for real market pull. Scribe is now used by 95% of the Fortune 500.

In this episode, Jennifer reveals the brutal truth about ignoring "smart" money, why you should run PLG and Enterprise sales simultaneously from Day 1, and how to tell the difference between pushing a boulder up a hill and chasing one down it.

Why You Should Listen
- Why you sometimes need to ignore your investors to save your startup.
- The "Boulder Test": The definitive gut check for knowing if you have true Product-Market Fit.
- How to validate a massive opportunity with zero marketing budget.
- Why the conventional wisdom about choosing between PLG and Enterprise Sales is wrong.
- How to turn executive hiring interviews into free mentorship sessions.

Keywords
startup podcast, startup podcast for founders, product market fit, PLG strategies, MVP testing, enterprise sales, go to market strategy, early stage growth, finding pmf, founder stories

00:00:00 Intro
00:02:21 1,200 Customer Interviews as a VC
00:22:07 How to Hire for Excellence
00:30:18 The Pivot from Automation to Documentation
00:39:17 Launching a "Janky" MVP on Product Hunt
00:49:09 The Boulder Test for Product-Market Fit
00:52:50 Doing PLG and Enterprise Sales Simultaneously
01:03:12 Ignoring Investors to Save the Company

Send me a message to let me know what you think!

The Marketing Millennials
The Best of Both Worlds: How PLG and SLG Win Together with Gaurav Agarwal and Kyle Coleman of ClickUp | Ep. 371

The Marketing Millennials

Play Episode Listen Later Dec 3, 2025 28:52


How do Product-Led Growth (PLG) and Sales-Led Growth (SLG) actually work together, instead of competing against each other? In this Marketingland 2025 session, ClickUp's COO Gaurav Agarwal and Global VP of Marketing Kyle Coleman break down why the "PLG vs. SLG" debate is a false dichotomy, and how the most successful companies blend both to drive real revenue impact.

From navigating budget decisions to building demand, delivering intuitive product experiences, and integrating AI in ways that actually help (instead of over-promising), they dig into the mechanics of modern growth engines. And should incremental ROI really be your north star? If you're building, optimizing, or scaling a modern GTM engine, this conversation is for YOU.

Optimizely helps thousands of brands create, personalize, and optimize exceptional digital experiences. See how Optimizely Opal, our AI agent orchestration platform, automates real marketing work and helps teams scale their impact at https://www.optimizely.com/ai/?utm_campaign=PS-GL-11-2025-MARKETING-MILLENNIALS-PODCAST&utm_medium=cpc&utm_source=marketingmillennials&utm_content=opal-agent-orchestration

Follow Gaurav: LinkedIn: https://www.linkedin.com/in/gauravragarwal/
Follow Kyle: LinkedIn: https://www.linkedin.com/in/kyletcoleman/
Sign up for The Marketing Millennials newsletter: https://themarketingmillennials.com/
Daniel is a Workweek friend, working to produce amazing podcasts. To find out more, visit: https://workweek.com/

InvestTalk
The "6-Figure HSA" Retirement Strategy

InvestTalk

Play Episode Listen Later Dec 2, 2025 45:40


A growing trend is emerging where a Health Savings Account (HSA) is treated not as spending money, but instead as a "Super IRA" for retirement. Could this be the right call for you?

Today's Stocks & Topics: Vertiv Holdings Co (VRT), Market Wrap, Platinum Group Metals Ltd. (PLG), "The 6-Figure HSA Retirement Strategy", Liquidity, Leidos Holdings, Inc. (LDOS), The Auto Industry, Emerging Markets Bonds.

Our Sponsors:
* Check out Incogni: https://incogni.com/investtalk
* Check out Invest529: https://www.invest529.com
* Check out NordProtect: https://nordprotect.com/investalk
* Check out Progressive: https://www.progressive.com
* Check out Quince: https://quince.com/INVEST
* Check out TruDiagnostic and use my code INVEST for a great deal: https://www.trudiagnostic.com

Advertising Inquiries: https://redcircle.com/brands
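The "6-figure HSA" idea rests on plain compound growth: invest the contributions instead of spending them on current medical bills. A minimal sketch of that arithmetic, with illustrative assumptions that are mine rather than the episode's (roughly the 2025 self-only contribution limit and a 7% nominal annual return):

```python
def hsa_future_value(annual_contribution: float, annual_return: float, years: int) -> float:
    """Future value of an HSA when the contribution is invested at the start of each year."""
    balance = 0.0
    for _ in range(years):
        # Contribute, then let the whole balance grow for the year.
        balance = (balance + annual_contribution) * (1.0 + annual_return)
    return balance

# Illustrative inputs only: $4,300/year for 25 years at 7% lands well into six figures.
print(round(hsa_future_value(4300, 0.07, 25)))
```

Under these assumptions the balance reaches roughly $290K; the actual outcome depends on contribution limits, returns, and fees, which the episode discusses in detail.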

Grow Your B2B SaaS
S7E17 - How PLG Will Change in 2026: AI Agents, Onboarding & Hybrid GTM With Roelof Otten

Grow Your B2B SaaS

Play Episode Listen Later Dec 2, 2025 17:49


In this special live episode from SaaS Summit Benelux in Amsterdam, Joran sits down with Roelof Otten, founder of SaaSmeister, to explore How PLG Will Change in 2026: AI Agents, Onboarding & Hybrid GTM. Together, they break down the biggest shifts coming to B2B SaaS go-to-market—from the rise of hybrid motions and the evolution of sales roles to the transformative impact of AI-powered demos, agents, and conversational interfaces.

Roelof shares actionable, stage-specific insights for founders at every level. You'll hear why PLG is becoming a company-wide strategy instead of a product feature, how onboarding is expanding beyond the UI, why freemium is harder for AI-native products, and what it really takes to build data tracking that supports growth instead of slowing it down.

Whether you're moving from sales-led to product-led, building a hybrid GTM, or preparing your SaaS product for an AI-first future, this episode offers a clear roadmap for navigating the changes ahead and meeting buyers where they want to be in 2026.

Tune in to learn how to implement PLG effectively, empower your sales team in a consultative model, integrate AI responsibly, and build growth loops that compound over time.

Key Timecodes
(0:00) – B2B SaaS, PLG, AI onboarding, AI demos, product-qualified pipeline, GTM 2026, SaaS Summit
(0:52) – B2B SaaS podcast
(0:58) – Roelof Otten, SaaSmeister, PLG
(1:07) – GTM 2026, PLG trends
(1:42) – Hybrid GTM, PLG, sales-led
(2:36) – AI GTM, AI agents, AI demos
(3:12) – Interactive demos, AI sales assistant
(3:50) – Buyer enablement, AI demo
(4:20) – In-product AI, trial support
(4:36) – PLG transformation, sales alignment
(5:21) – Consultative sales, upsell, PQLs
(5:43) – PLG funnel, activation, expansion
(6:00) – Conversational UI, AI UX
(6:52) – UX transition
(7:25) – AI platform, data layer, models
(7:37) – MCP, AI integrations, ChatGPT, Claude
(8:10) – AI privacy, security, compliance
(8:46) – Build vs buy AI, LLMs
(9:22) – PLG first, SaaS trial
(9:38) – Reditus, SaaS affiliate
(10:22) – AI costs, freemium
(10:35) – Freemium strategy, CAC, churn
(11:39) – Referrals, partnerships, affiliate growth
(12:33) – In-app referrals, incentives
(13:06) – Onboarding, nurture, reactivation
(13:57) – Signup friction, JTBD, ICP
(14:57) – Personalized onboarding
(15:14) – Founder-led sales, JTBD, messaging
(15:45) – ICP focus, activation metrics
(16:39) – Product analytics, event tracking
(17:01) – Roelof Otten, SaaSmeister
(17:15) – Podcast outro, sponsor, Reditus

SaaS Metrics School
Should Expansion Revenue Be Included or Excluded From LTV

SaaS Metrics School

Play Episode Listen Later Dec 2, 2025 3:34


In episode #333, Ben answers a foundational SaaS metrics question: should expansion revenue be included in your Lifetime Value (LTV) calculation? Ben walks through the correct LTV formula and highlights how misalignment between LTV and CAC can distort your LTV:CAC ratio. He also covers when expansion should be included. The episode provides a practical framework for SaaS founders, CFOs, and operators to ensure they calculate LTV accurately, compare it properly to CAC, and model unit economics using consistent, reliable inputs.

Key Topics Covered
- The correct LTV formula using average new-customer MRR × subscription gross margin
- Why the churn input should align with dollar-based metrics using 1 – Gross Revenue Retention (GRR)
- Why expansion revenue is deliberately excluded from LTV in most SaaS models
- How including expansion artificially inflates the LTV:CAC ratio
- The cost mismatch between acquiring new customers (CAC) and generating expansion revenue
- When PLG motions justify including limited, time-bound expansion revenue in LTV
- How organic upgrades differ from sales-assisted expansion
- How SaaS+ businesses must adjust their LTV formula to account for usage revenue
- The role of gross margin in determining true unit economics
- The importance of aligning metric definitions when evaluating customer profitability

Why This Matters
This episode is essential for:
- SaaS founders calculating LTV for budgeting, pricing, and forecasting
- CFOs, controllers, and FP&A leaders managing unit economics and CAC payback
- Finance teams modelling customer profitability and revenue expansion
- Operators working in PLG environments assessing organic expansion patterns
- Investors reviewing LTV:CAC ratios in diligence and portfolio monitoring
- Anyone building SaaS+ (subscription + usage) revenue models

Resources Mentioned
Ben's deep dive on SaaS+ LTV: https://www.thesaascfo.com/how-to-calculate-ltv-with-variable-revenue/
SaaS Metrics course: https://www.thesaasacademy.com/the-saas-metrics-foundation
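The formula Ben describes can be sketched in a few lines. This is a hedged illustration of the episode's definitions (the function names and sample inputs are mine, not from the show): gross profit per new customer divided by a dollar-based churn rate of 1 − GRR, with expansion revenue deliberately left out.

```python
def customer_ltv(new_customer_mrr: float, gross_margin: float, grr: float) -> float:
    """LTV = annualized gross profit per new customer / dollar-based churn.
    Churn is aligned with dollar metrics as 1 - GRR; expansion revenue is excluded."""
    annual_gross_profit = new_customer_mrr * 12 * gross_margin
    dollar_churn = 1.0 - grr
    return annual_gross_profit / dollar_churn

def ltv_to_cac(ltv: float, cac: float) -> float:
    """Compare LTV against the fully loaded cost of acquiring a new customer."""
    return ltv / cac

# Illustrative inputs: $500 new-customer MRR, 80% gross margin, 90% GRR, $12,000 CAC.
ltv = customer_ltv(500, 0.80, 0.90)   # about $48,000
ratio = ltv_to_cac(ltv, 12_000)       # about 4.0
```

Including expansion in the numerator while CAC in the denominator only covers new-customer acquisition cost is exactly the mismatch the episode warns inflates the ratio.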

Run The Numbers
Driving revenue without selling | Greg Henry of 1Password

Run The Numbers

Play Episode Listen Later Dec 1, 2025 62:22


In this episode of Run the Numbers, CJ sits down with Greg Henry, CFO of 1Password and one of the most commercially minded finance leaders in tech, to break down why he left the public-company grind at Couchbase for a PLG-driven security business and what he's relearning in the private sphere. Greg explains how forecasting changes when the product does the selling, how to think about comp and pricing in a usage-led world, and the early tells that a model is quietly over- or under-performing. He shares why CFOs should meet far more customers than they do, how finance can drive revenue without stepping on sales, and what it actually takes for a company to plan with clarity instead of reacting. Greg also recounts the near-derailing of the Couchbase IPO, reflects on the "back nine" of his career, and offers grounded advice for aspiring first-time CFOs.

SPONSORS:
RightRev automates the revenue recognition process from end to end, gives you real-time insights, and ensures ASC 606 / IFRS 15 compliance—all while closing books faster. For RevRec that auditors actually trust, visit https://www.rightrev.com and schedule a demo.
Tipalti automates the entire payables process—from onboarding suppliers to executing global payouts—helping finance teams save time, eliminate costly errors, and scale confidently across 200+ countries and 120 currencies. More than 5,000 businesses already trust Tipalti to manage payments with built-in security and tax compliance. Visit https://www.tipalti.com/runthenumbers to learn more.
Aleph automates 90% of manual, error-prone busywork, so you can focus on the strategic work you were hired to do. Minimize busywork and maximize impact with the power of a web app, the flexibility of spreadsheets, and the magic of AI. Get a personalised demo at https://www.getaleph.com/run
Fidelity Private Shares is the all-in-one equity management platform that keeps your cap table clean, your data room organized, and your equity story clear—so you never risk losing a fundraising round over messy records. Schedule a demo at https://www.fidelityprivateshares.com and mention Mostly Metrics to get 20% off.
Metronome is real-time billing built for modern software companies. Metronome turns raw usage events into accurate invoices, gives customers bills they actually understand, and keeps finance, product, and engineering perfectly in sync. That's why category-defining companies like OpenAI and Anthropic trust Metronome to power usage-based pricing and enterprise contracts at scale. Focus on your product — not your billing. Learn more and get started at https://www.metronome.com
Mercury is business banking built for builders, giving founders and finance pros a financial stack that actually works together. From sending wires to tracking balances and approving payments, Mercury makes it simple to scale without friction. Join the 200,000+ entrepreneurs who trust Mercury and apply online in minutes at https://www.mercury.com

LINKS:
Greg on LinkedIn: https://www.linkedin.com/in/greghenry23/
1Password: https://1password.com/
CJ on LinkedIn: https://www.linkedin.com/in/cj-gustafson-13140948/
Mostly metrics: https://www.mostlymetrics.com

RELATED EPISODES:
Behind the Earnings Calls: Couchbase CFO Greg Henry on Consumption Models & Analyst Relations
https://youtu.be/o_pDfz5a-Hw

TIMESTAMPS:
00:00:00 Preview and Intro
00:02:57 Sponsors – RightRev | Tipalti | Aleph
00:07:03 Back in the Private Sphere: Why Greg Joined 1Password
00:07:49 Greg's Four-Part Framework for a Great Role
00:10:12 Thinking About the “Back Nine” & Legacy
00:13:16 Transitioning to PLG & SLG at 1Password
00:15:12 Blending PLG Efficiency with Enterprise Sales
00:17:12 Sponsors – Fidelity Private Shares | Metronome | Mercury
00:20:03 B2C vs. B2B ARPU Contrast
00:22:41 Forecasting in PLG vs. Sales-Led Models
00:24:18 Building Toward Chunky Enterprise Upside
00:25:39 Comp Plans: Complexity, Pitfalls & the Alexander Group
00:27:35 Keep Comp Plans Simple & Focused on ARR
00:29:10 Why Mid-Year Comp Plan Changes Are Dangerous
00:31:04 Governance & Guardrails for SPIFFs
00:33:19 Using the CFO Network to Drive Revenue
00:34:52 Why CFOs Must Meet Customers Directly
00:36:19 Wallet Share & Being a Buyer AND a Seller
00:38:08 Why He Avoids 3-Year+ Commitments
00:40:20 How Much Discount Is a “Year” Worth?
00:42:31 Greg's Structured Annual Planning Framework
00:43:50 3–5% Upside/Downside Menu
00:44:57 Comp Plans Must Go Out Early
00:47:23 January Compensation & System Cutover Challenges
00:48:31 Why Roadmap Alignment Must Kick Off Planning
00:50:21 Sustain / Differentiate / Durable Growth / World-Class Teams Framework
00:52:29 Couchbase IPO Almost Going Sideways
00:54:59 How to Actually Become a CFO
01:00:11 Legacy Greg Wants to Leave at 1Password

#RunTheNumbersPodcast #CFOInsights #SaaSLeadership #PLGvsSLG #FinanceStrategy

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit cjgustafson.substack.com

Lenny's Podcast: Product | Growth | Career
The future of AI-powered sales with Vercel COO, Jeanne DeWitt

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Nov 30, 2025 86:02


Jeanne DeWitt Grosser built world-class GTM teams at Stripe, Google, and, most recently, Vercel, where she serves as COO and oversees marketing, sales, customer success, revenue operations, and field engineering. She transformed Stripe's early sales organization from the ground up and advises founders on GTM strategy.

We discuss:
1. Why GTM is becoming more strategically important in the AI era
2. The rise of the GTM engineer
3. A primer on segmentation
4. How to build a sales org that engineers and product teams respect
5. The changing calculus of build vs. buy for go-to-market tools in the AI era
6. Why most customers buy to avoid pain rather than to gain upside

Brought to you by:
Datadog—Now home to Eppo, the leading experimentation and feature flagging platform: https://www.datadoghq.com/lenny
Lovable—Build apps by simply chatting with AI: https://lovable.dev/
Stripe—Helping companies of all sizes grow revenue: https://stripe.com/

Transcript: https://www.lennysnewsletter.com/p/what-the-best-gtm-teams-do-differently
My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/179503137/my-biggest-takeaways-from-this-conversation

Where to find Jeanne DeWitt Grosser:
• X: https://x.com/jdewitt29
• LinkedIn: https://www.linkedin.com/in/jeannedewitt

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Jeanne DeWitt Grosser
(05:26) Defining go-to-market
(08:43) The evolution of go-to-market roles
(11:23) The rise of the go-to-market engineer
(14:21) Implementing AI in sales processes
(15:28) Optimizing sales with AI agents
(23:47) Defining sales roles: SDRs and AEs
(26:04) When to hire a GTM engineer
(29:04) Hiring and scaling sales teams
(30:50) The ideal go-to-market engineer
(34:24) The go-to-market tool stack
(40:39) Advice on building a great sales bot
(44:34) Vercel's unfair advantage
(46:37) Go-to-market as a product
(47:04) Innovative sales tactics at Stripe
(52:38) Effective go-to-market tactics
(01:00:37) Segmentation strategies
(01:09:31) Building a sales org that engineers love
(01:14:00) Thoughts on PLG and pricing
(01:16:44) Sales compensation and hiring
(01:19:24) Lightning round and final thoughts

Referenced:
• Vercel: https://vercel.com
• Stripe: https://stripe.com
• Rosalind Franklin: https://en.wikipedia.org/wiki/Rosalind_Franklin
• Ben Salzman on LinkedIn: https://www.linkedin.com/in/bensalzman
• SDK: https://ai-sdk.dev/docs/introduction
• Gong: https://www.gong.io
• Lyft: https://www.lyft.com
• Instacart: https://www.instacart.com
• DoorDash: https://www.doordash.com
• “Sell the alpha, not the feature”: The enterprise sales playbook for $1M to $10M ARR | Jen Abel: https://www.lennysnewsletter.com/p/the-enterprise-sales-playbook-1m-to-10m-arr
• A step-by-step guide to crafting a sales pitch that wins | April Dunford (author of Obviously Awesome and Sales Pitch): https://www.lennysnewsletter.com/p/a-step-by-step-guide-to-crafting
• Kate Jensen on LinkedIn: https://www.linkedin.com/in/kateearle
• Lessons from scaling Stripe | Claire Hughes Johnson (former COO of Stripe): https://www.lennysnewsletter.com/p/lessons-from-scaling-stripe-tactics
• Atlassian: atlassian.com

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20Sales: John McMahon on How to Hire, Train & Retain the Best Sales Reps | How Sales Changes in a World of AI | Sales Lessons from Snowflake and MongoDB | How to Create and Drive a Sales Process with Urgency

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Nov 28, 2025 68:30


John McMahon is widely regarded as one of the greatest enterprise-software sales leaders of all time. He's the only person to have served as Chief Revenue Officer at five public software companies: PTC, GeoTel, Ariba, BladeLogic and BMC Software. He helped scale BladeLogic from a startup into a public company — ultimately leading to its ~$880M sale to BMC — and drove GeoTel into a multi-billion dollar acquisition. Today he sits on the boards of top names such as Snowflake and MongoDB, while also mentoring and influencing a who's-who of modern SaaS sales leaders.

AGENDA:
03:33 The Art and Science of Sales: Insights from a Veteran
04:29 Adapting Sales Strategies in the Age of AI and PLG
07:47 The Ultimate Framework to do Deal Qualification
14:13 How to Drive Urgency and Maintain Sales Process
20:06 How to Hire the Best Sales Reps
25:11 Step-by-Step Guide to Training Sales Reps
45:22 The Mindset of the Best Sales Reps
54:55 Single Most Important Skill to Win in Sales

Grow Your B2B SaaS
S7E16 - SaaS GTM in 2026: AI, Hybrid Sales & High-Performance Revenue Engines  with Richard Schenzel

Grow Your B2B SaaS

Play Episode Listen Later Nov 27, 2025 18:00


In this episode of the Grow Your B2B SaaS podcast, recorded live at the SaaS Summit Benelux in Amsterdam, host Joran sat down with Richard Schenzel from AtScale. Richard and his team act as operating partners for B2B SaaS companies, helping them build, structure, and scale sales operations with a strong focus on improving performance.

The conversation centered on how go-to-market (GTM) strategy is changing in 2026. From the rise of blended motions and the evolving role of ACV across PLG and sales-led setups, to how AI will reshape the entire funnel—Richard shared a pragmatic view into what will separate the SaaS companies that scale successfully from those that fall behind. He also explained why now is the time for deep introspection, how to audit your GTM machine, and why roles like SDR/BDR must be rethought in an AI-driven world.

Key Timestamps
(0:00) – The 2026 B2B SaaS GTM Shakeup: AI, PLG vs Sales-Led & ACV Truths
(0:00) – Meet Richard Schenzel: The B2B SaaS Sales Ops Performance Architect
(0:01) – GTM in 2026: AI-Driven Plays, Blended Motions & ACV Strategy
(0:02) – Why 2026 Demands a Full GTM Audit: Blended Motions + ACV Reality
(0:02) – PLG vs Sales-Led: How ACV Decides Your Entire GTM Motion
(0:03) – The New Era of Efficient SaaS Growth: AI, Margin & Sales Efficiency
(0:04) – Bow-Tie Model Power: Where AI Creates Massive GTM ROI
(0:04) – Automate Your Sales Engine: AI Intent, Scoring, SDR Workflows & CS
(0:05) – The 2026 SDR: Human Connection Beats Sequencing Automation
(0:06) – 2026 Headcount Reset: New SDR/BDR, AE & RevOps Roles
(0:07) – Train the Machines: Why People Still Win in AI-Driven GTM
(0:07) – Ad Break: Reditus – The AI Affiliate Engine for B2B SaaS
(0:08) – What Will Make SaaS Winners in 2026: Adapt Fast or Fall Behind
(0:09) – The 2026 Mindset Shift: Stop Fixing Yesterday, Pivot Faster
(0:09) – The GTM Implementation Blueprint: Mission → Strategy → Tech → People
(0:11) – The “If It Ain't Broke” GTM Trap: How to Spot Hidden Failures
(0:11) – The Ultimate SaaS GTM Audit: 1–5 Scoring Across Every Function
(0:13) – Bow-Tie Data Mastery: Fix GTM Bottlenecks Faster With AI
(0:14) – From 0 → 10K MRR: ICP, Feedback Loops & Avoiding Enterprise Traps
(0:16) – Scaling to $10M ARR: ICP Alignment, Feature Pruning & $100M Roadmap
(0:17) – Evolving Your ICP: Stay True to Your Customer & Your Mission
(0:17) – Connect With Richard Schenzel on LinkedIn

The Product Podcast
Lovable Head of Growth on The New AI-Native Growth Playbook | Elena Verna | E279

The Product Podcast

Play Episode Listen Later Nov 26, 2025 43:21 Transcription Available


In this episode, Carlos Gonzalez de Villaumbrosia interviews Elena Verna, Head of Growth at Lovable—the fastest-growing AI startup to ever surpass $100M in ARR, hitting the milestone in just eight months. With a proven track record leading growth at Miro, Amplitude, Superhuman, and Dropbox, Elena brings unparalleled expertise in driving sustainable, product-led growth across both hyper-growth and turnaround environments.

Elena shares how building in the fast-moving "vibe coding" category requires a radical shift in how we define product-market fit, structure growth teams, and measure success. From product-led monetization loops to redefining brand as a product responsibility, Elena outlines a bold vision for what growth looks like in the age of AI-native products.

What you'll learn:
- How Lovable ships at record speed, with daily product updates and a 3-tier launch model.
- How AI-native products redefine activation, retention, and monetization.
- Why product teams must now own brand experience—not just features.
- How Elena designs feedback, education, and referral loops that turn users into growth engines.
- The evolving role of activation, retention, and monetization in AI-native PLG.

Key Takeaways

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20Sales: Why You Need a CRO Pre-Product | Why Remote Sales Teams Do Not Work | How Snowflake Built a Sales Machine with Chad Peets

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Nov 7, 2025 69:13


Chad Peets is one of the greatest sales leaders and recruiters of the last 25 years. From 2018 to 2023, Chad was a Managing Director at Sutter Hill Ventures. Chad has worked with the world's best CEOs and CROs to build world-class go-to-market organizations. Chad is currently a member of the Board of Directors for Lacework and Luminary Cloud and on the boards of Clumio and Sigma Computing. He previously served as a board member for Astronomer, Transposit, and others. He was an early-stage investor at Snowflake, Sigma, Observe, Lacework, and Clumio.

In Today's Discussion with Chad Peets, We Discuss:

1. You Need a CRO Pre-Product: Why does Chad believe that SaaS companies need a CRO pre-product? Should the founder not be the right person to create the sales playbook? What should the founder look for in their first CRO hire? Does any great CRO really want to go back to an early startup and do it again?

2. What Everyone Gets Wrong in Building Sales Teams: Why are most sales reps not performing? How long does it take for sales teams to ramp? How does this change with PLG and enterprise? What are the benchmarks of good vs great for average sales reps? How do founders and VCs most often hurt their sales teams and performance?

3. How to Build a Hiring Machine: What are the single biggest mistakes people make when hiring sales reps and teams? Are sales people money motivated? How to create comp plans that incentivise and align? Why does Chad believe that any sales rep that does not want to be in the office is not putting their career and development first? Why is it harder than ever to recruit great sales leaders today?

4. Lessons from Scaling Sales at Snowflake: What are the single biggest lessons of what worked from scaling Snowflake's sales team? What did not work? What would he do differently with the team again? What did Snowflake teach Chad about success and culture and how they interplay together?