After two pivots and nearly running out of runway, Merrill Lutsky found insane growth—scaling Graphite to tens of thousands of daily users and millions in ARR. He reveals exactly how Graphite landed its first massive enterprise customer, doubled revenue overnight by changing pricing, and turned user feedback into momentum. Merrill shares hard-earned lessons on recognizing true product-market fit, leveraging internal tools as a market wedge, and surviving the chaos every early-stage founder faces. If you're building, pivoting, or scaling your startup, this is a must-listen episode.

Why You Should Listen
- How to Know When It's Time to Pivot—or Keep Going
- Landing Your First Enterprise Deal (and Getting to $1M ARR in 10 Months)
- Why Daily User Feedback is a Secret Weapon
- Finding the Real “Must-Have” Product
- Why Your First Pricing Model Probably Won't Work (and How to Fix It)

Keywords: product market fit, developer tools, pivot, enterprise sales, startup growth, code review, pricing strategy, software startups, founder lessons, Graphite dev tool

Timestamps:
(00:00:00) Raising a seed round on Airbnb Wi‑Fi
(00:10:40) Lessons from the first failed DevTool
(00:16:00) Why daily‑use products win
(00:24:30) Pivoting from mobile rollbacks to code review
(00:31:00) Ex‑Meta engineers spark a viral wait‑list
(00:40:00) Switching on pricing and racing to $1M ARR
(00:48:30) The pricing tweak that doubled revenue
(00:50:00) Surviving outages, bank collapses and chaos
(00:52:30) Co‑founder trust makes or breaks startups

Send me a message to let me know what you think!
App Masters - App Marketing & App Store Optimization with Steve P. Young
In this episode, we're joined by Hamza Alsamraee, CEO of NewForm — a top-tier NYC-based app marketing agency producing over 2,500 ads every month.

Hamza's journey started at just 16 when he built the largest math page on Instagram, eventually driving 100M+ organic impressions and over $1M in book sales. With a strong background in math and a knack for creative growth, Hamza has led performance-driven campaigns across consumer apps, including scaling Faves to $1M ARR in just months.

We'll dive into how AI-generated content (AIGC) is changing the marketing game — from TikTok-first strategies to AI agents in UGC creation, and why many AI creatives fail to convert.

Whether you're just launching or managing multi-million-dollar ad budgets, this episode delivers the math-backed truth on what's working (and what's not) in 2025.

You will discover:
In episode #276, I'm tackling a question from a subscriber who is just starting their SaaS journey. There are definitely some building blocks to put into place so you have no regrets later.
- Selecting your accounting software
- Who does the accounting?
- Building your accounting foundation
- Cash or accrual accounting
- Tracking your bookings
- What metrics to track?

Be in the know! Join our SaaS community: https://www.thesaasacademy.com/offers/ivNjwYDx/checkout
Subscribe to Ben's SaaS newsletter: https://mailchi.mp/df1db6bf8bca/the-saas-cfo-sign-up-landing-page
Today's show: Jason and Alex unpack a wild week in tech and the markets — including a fake tweet that moved $4 trillion and Trump's latest tariff talk. Lon jumps in for another round of Founder Fridays Pitch Madness, where four early-stage startups compete for a spot in the Elite Eight. Plus, Alex chats with Sunny Khamkar, CEO of MenuData, about how AI is revolutionizing the food industry. They also dive into how founders should rethink their runway, what Shopify's bold AI memo means for the future, and whether VCs are placing earlier bets to chase that $1M ARR milestone. Don't miss the founder pitches and a fresh take on AI-powered food trend tracking!

Timestamps:
(0:00) Jason kicks off the show!
(2:11) Market chaos, tariffs, and their effects
(5:59) Economic uncertainty: Advice for founders
(8:00) Tariffs' potential impact on tech sectors
(9:58) Fidelity Private Shares℠ - Visit https://www.fidelityprivateshares.com! Mention our podcast and receive 20% off your first-year paid subscription.
(13:27) SignalFire's billion-dollar fund
(17:32) Achieving significant startup revenue and its importance
(20:20) Coda - Empower your startup with Coda's Team Plan for free—get 6 months at https://www.Coda.io/twist
(21:54) Krea's $87 million raise and subscription model analysis
(27:33) Big tech's influence on niche subscription services
(30:02) Gusto - Get three months free when you run your first payroll at http://gusto.com/twist
(31:09) AI advancements: Midjourney v7 and voice activation
(36:43) Shopify's AI Manifesto
(41:00) Founder Friday bracket updates
(57:16) Founder Q&A: Lessons for first-time founders
(1:05:00) MenuData's CEO Sunny Khamkar interview
(1:16:10) MenuData's pricing, customer base, and growth plans
(1:23:38) AI's vertical impact and MenuData's strategy

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp

Links from the show:
Check out MenuData: https://www.menudata.ai/
Check out Krea: https://www.krea.ai/
Check out Tobi Lutke's “AI Manifesto”: https://x.com/tobi/status/1909251946235437514
Check out Pitchfire: https://www.pitchfire.com/
Check out CREATANT: https://www.creatant.com/
Check out Vidala Labs: https://vidalalabs.com/
Check out Kippy: https://kippy.ai/

Follow Sunny Khamkar:
LinkedIn: https://www.linkedin.com/in/sunny-khamkar-84b905a

Follow Alex:
X: https://x.com/alex
LinkedIn: https://www.linkedin.com/in/alexwilhelm

Follow Lon:
X: https://x.com/Lons
LinkedIn: https://www.linkedin.com/in/lonharris

Follow Jason:
X: https://twitter.com/Jason
LinkedIn: https://www.linkedin.com/in/jasoncalacanis

Thank you to our partners:
(9:58) Fidelity Private Shares℠ - Visit https://www.fidelityprivateshares.com! Mention our podcast and receive 20% off your first-year paid subscription.
(20:20) Coda - Empower your startup with Coda's Team Plan for free—get 6 months at https://www.Coda.io/twist
(30:02) Gusto - Get three months free when you run your first payroll at http://gusto.com/twist

Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland

Check out Jason's suite of newsletters: https://substack.com/@calacanis

Follow TWiST:
Twitter: https://twitter.com/TWiStartups
YouTube: https://www.youtube.com/thisweekin
Instagram: https://www.instagram.com/thisweekinstartups
TikTok: https://www.tiktok.com/@thisweekinstartups
Substack: https://twistartups.substack.com

Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
Z47 invested in WizCommerce in 2022. Since then, Divyaanshu Makkar and team have scaled to $1M in revenue in the B2B commerce SaaS mid-market. In this episode of #ZeroToInfinity, Divyaanshu Makkar, co-founder and CEO of B2B commerce SaaS startup WizCommerce, joins Sudipto Sannigrahi and Ashwin Pandian to unpack:
Today's show: Jason and Alex dive into the startup world's latest chaos — from Nasdaq's slide and Trump's tariffs to Klarna's IPO pause and Rippling's $16B target. They break down what founders really need to know: tightening CIO budgets, longer sales cycles, but easier hiring. Plus, TikTok's ban gets another delay (with Bezos possibly lurking?). In Office Hours, they chat with two sharp founders: Luke Belbina from PodEngine, who's building AI for podcasters and doubling revenue, and David Moscatelli from Abacus, who's bringing secure, on-prem AI to banks and just hit $1M ARR. Don't miss it!
In this episode I sit down with Wade Foster, the founder and CEO of Zapier. We explore Zapier's unique path, from building its first integrations on nights and weekends in Missouri to scaling with minimal funding through Y Combinator. Wade shares how “don't hire till it hurts” became a guiding mantra, why Zapier embraced remote work early on, and how the rise of AI is unlocking new automation possibilities through Zapier Agents. Whether you're curious about seedstrapping, enterprise workflows, or the frontier of AI benchmarks, this conversation delivers practical insights for building a lean, innovative company.

00:00:00 - “Don't Hire Till It Hurts” Mantra
00:01:24 - The First Zap & Early Users
00:03:42 - Joining YC & Minimal Funding
00:05:09 - Profitability & Lean Hiring
00:06:57 - SEO for Marketplace Growth
00:08:10 - AI Agents & Unstructured Data
00:10:28 - Enterprise Playbook & GTM
00:13:14 - ARC Prize & Future of AGI
00:17:29 - Remote Culture at Zapier
00:20:29 - Seedstrapping Advice in AI
00:25:09 - Reimagining Processes w/ AI
00:28:54 - Final Insights & Future
Noah Greenberg grew a content-distribution product from zero to $1M ARR in just one year (and to $4M in 2 years) by focusing on a single channel most founders underrate: LinkedIn. He posted insights daily, highlighted key players in his industry, and made it impossible for prospects not to notice him.

In this episode, Noah reveals the exact step-by-step playbook, including how to structure 3-month pilots for fast feedback, craft DMs that actually get replies, and pick the right content “watering holes” so your future customers come to you eager to sign. If you're a founder trying to figure out your go-to-market approach, you need to see what Noah did.

Why You Should Listen
1. Turning LinkedIn into a Free PR Engine – Noah shows how daily micro-posts drive high-value leads without needing to go viral.
2. Finding Your First 10 Customers with 3-Month Pilots – Short trials = instant feedback on who'll stay and who'll churn.
3. Never Stop Triangulating – How 50 customer conversations per month reveal the right product, price, and packaging.
4. Selling without Selling – The “this isn't a pitch” call that makes prospects lean in and ask, “Wait, how do we buy?”
5. Earning Credibility at Scale – Noah's “watering hole” posts spark real engagement from decision-makers (and reel in 5-figure deals).

Keywords: B2B Sales, LinkedIn Strategy, Early-Stage Growth, Founders' Playbook, Bootstrapped Startup, Content Distribution, Sales Prospecting, Pilot Contracts, Outbound Leads, Product-Market Fit

Timestamps
(00:00:00) Intro
(00:02:35) Stacker's Origin Story
(00:06:00) How to generate warm leads
(00:15:53) How to use LinkedIn for lead gen
(00:21:42) No One Wants To Be Pitched
(00:26:06) How to get feedback on pricing
(00:38:08) LinkedIn Go-To-Market Strategy
(00:42:20) Breaking Above The Noise

Send me a message to let me know what you think!
It took eWebinar 18 months to go from idea to product launch, then another 36 months to hit $1M ARR; this is the core theme of this season. That's a total of 54 months from incorporation date.

You might have read about companies that get to $1M in a year, much less now with AI startups and the world riding that wave. Those are outliers, not the norm. 36 months to $1M is actually the median, even for funded companies. “Normal” is not newsworthy and doesn't get the media spotlight it deserves, so you might feel like you're moving too slow even when you're in the right place.

On this season finale episode, Melissa and Todd dive into the 12 most shocking surprises on their way to $1M ARR after everything they tried, learned, and failed at, which formed the foundation on which they continued to iterate on the business and product.

Thank you for tuning into Season 2 of ProfitLed! Your support is deeply appreciated. If you enjoy this podcast, please consider leaving us a review, as that will help us continue to make episodes for you.

Takeaways:
• Things they thought would have a huge impact, but didn't
• Assumptions that made them think they'd get to $1M faster
• Things that took a lot more effort and time than anticipated
• Why you don't need fancy metrics to prioritize roadmap
• Why $1M is not the magic number even though it seems like it is

Contact ProfitLed
Tweet us at @profitledfm. Find show notes of each episode on ProfitLed.fm.

Connect with our host
Follow Melissa Kwan on LinkedIn where she shares stories & lessons from her founder journey weekly. Subscribe to 'your founder next door', Melissa's bi-monthly newsletter on how to build a company without an abundance of resources and friends in high places. Follow @themelissakwan on Instagram and YouTube where she shares short videos of business advice and other truth-bomb sound bites.

This podcast was brought to you by eWebinar.
Learn more at ewebinar.com and find out how you can turn pre-recorded videos into automated webinars that perform better than a live webinar.

Thanks for listening!
Meg Lister is a leading voice in Web3 funding, transforming capital allocation as a Managing Director at Gitcoin Labs to drive financial returns for ecosystems and builders while ensuring long-term, sustainable growth. She specializes in designing multi-layered funding strategies that support every stage of a project's lifecycle—quadratic funding for onboarding, retroactive funding for retention, and direct grants for scaling—empowering Ethereum ecosystem participants to achieve more.

Gitcoin (May 2023 – Present): Leads Grants Lab, the core business unit powering Gitcoin's multi-mechanism funding infrastructure. She initially joined to oversee Grants Stack, one of Gitcoin's flagship products, and was quickly promoted to lead its consolidation and strategy.

Flipside Crypto: As VP of Product & Ops, she launched a flagship analytics product that secured a $50M Series A, making on-chain data more accessible and actionable for ecosystem builders.

WordStream: Scaled the company post-Series A, earning five promotions, launching a $1M ARR business unit, and contributing to its acquisition by Gannett.

Meg is reshaping Web3 funding models by creating efficient capital distribution mechanisms that support ecosystem-wide goals, ensuring Ethereum builders and enterprises have the resources needed to innovate, scale, and succeed.

Why VC Funding Model Fails Web3 Projects

Experience driving Web3 capital efficiency:

Media mentions:
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Anton Osika is the Co-Founder and CEO @ Lovable, the fastest growing startup in Europe. With Lovable, you can turn your idea into an app in seconds with just a prompt. After just 3 months, the company has scaled to $17.5M in ARR. They are adding $2M in net new revenue every single week. Even better, Lovable has an 85% Day 30 retention rate, making it more retentive than ChatGPT.

In Today's Episode We Discuss:
03:41 How a Side Project Turned into a $200M Company
05:39 Why Talent is 10x More Valuable Than Experience
08:57 How to Use a Waitlist Pre-Launch to 10x Growth
12:29 How to Master a Public Launch: $0 - $1M ARR in a Week
18:02 Why Raise a Large Seed Round
22:22 How Sustainable is Lovable and AI Revenue
25:22 What are Lovable's Biggest Threats: Incumbents or Open Source
27:00 Raising Series A: Should You Always Take the Money
27:46 How to Compete in the US from Europe
28:25 Is Europe as F****** as the World Thinks
29:02 Building in Europe vs. Silicon Valley
31:20 The Future of Foundation Models: Who Wins
33:47 Grok vs OpenAI vs Anthropic: Buy and Short
41:37 Quickfire Round: Insights and Reflections
In 2005 most people didn't even have cellphones yet. Those who did used flip phones. That's when Noah started Olo, a web app to let people pre-order coffee from nearby shops. Users had to log in on the web, add a credit card, create pre-made orders, and then send a text to a preset number when they wanted to pre-order. It was way, way ahead of its time. It took Noah and his team 7 years to hit $1M in ARR. In the meantime, they raised a round with 50% dilution the week before the financial crisis, went on live TV to an audience of 6M viewers, and had to pivot from a marketplace to B2B SaaS.

But over time smartphone penetration increased, on-demand ordering became a trend, and then, one day, Starbucks launched their app. All of a sudden, every single restaurant in the world wanted a way to let their users pre-order. And there was Noah and his team at Olo.

Today, Olo is a public company worth over $1B and generating nearly $300M in sales. Here's the story of how it happened.

Why you should listen:
- How to use guerrilla marketing tactics to get early growth.
- Why PR can move the needle, but not in ways you expect.
- How to pivot from a marketplace to B2B SaaS.
- Why it often takes much longer than you might hope to hit an inflection point.
- Why fundraising was so hard, even though Olo became a $1B+ public company.
- Why Noah thinks founders should embrace challenges and adversity.

Keywords: Olo, Noah Glass, entrepreneurship, product-market fit, restaurant technology, mobile ordering, startup journey, business challenges, marketing strategies, innovation

Timestamps:
(00:00:00) Intro
(00:02:20) Building an app in 2005
(00:13:20) The Burn the Boats Moment
(00:16:31) Building A Network Business
(00:26:08) The Cold Start Problem
(00:30:33) A Happy Accident
(00:36:55) Going through the 2008 Financial Crisis
(00:51:20) Finding Product Market Fit
(00:57:20) Blueprint of Values
(01:05:11) Best Piece of Advice
(01:06:08) A Big Milestone

Send me a message to let me know what you think!
TheAssistant: https://bit.ly/theassistant-cadeau

Hey, let me take a minute to introduce my new partner that I absolutely love: TheAssistant.com.

Short version: six months ago, I was overwhelmed. I spent my days juggling invoices, LinkedIn DMs, and train bookings. In short, everything except making progress on my real projects. Then I discovered TheAssistant, and I promise you, my life changed.

Tell me if you recognize yourself in this:
→ You want to do everything yourself because you're afraid to delegate.
→ You feel like you're wasting your time on low-value tasks.
→ And you keep asking yourself: "How can I manage it all?"

Then stay with me, because I have the solution for you. TheAssistant.com is like your super right-hand. They handle all the time-consuming tasks for you: your invoices, client follow-ups, and pre-accounting; your community management: scheduling, replying to DMs, and prospect research; even emergencies: plane tickets, finding service providers, or strategic emails.

You save hours, but above all, you gain peace of mind. And believe me, once you've tried it, you'll never go back. So try it now with 30% off your first month. You have very little to lose, and everything to gain.

Thanks to TheAssistant for supporting La Chapelle Radio, and happy listening, everyone!
Alon was a hacker for the Israeli Defence Forces' cyber department. There he saw the most advanced methods used in cyber warfare. So when he left, he started IntSights, a company that helped enterprises defend themselves from cyber attacks. He was a first-time founder who didn't even know the word 'unicorn'. He made all the mistakes you could make. But he had real, undeniable pull. He grew to $1M ARR in a year and to $4M a year later. By the end of it, he was doing $30M in revenue and exited for $335M.

But when I asked him what it felt like to sell for hundreds of millions, he said it 'felt like emptiness'. Alon is a builder—that's all he wants to do. So he quit post-acquisition and left millions of dollars on the table. Money he was guaranteed to make if he just stayed in his role.

Instead, he started PointFive to help enterprises reduce wasted spend on cloud infrastructure. He knew nothing about the space. One year in, he's raised $36M and grown to millions in ARR—it's even faster than IntSights.

Here's the story.

Why you should listen:
- Why Alon felt empty after exiting for $335M.
- Why he left millions of dollars on the table to start a new startup.
- Why the mistakes he made in his first startup helped his second one grow way faster.
- Why he raised $36M in under a year.
- Why true customer pull comes from solving top-of-mind problems.

Keywords: LinkedIn, video content, cloud cost optimization, engineering responsibilities, startup funding, second-time founders, product market fit, customer traction, entrepreneurship

Send me a message to let me know what you think!
Introducing the Top 100 most influential recruiters on LinkedIn...

This week, I am re-joined by Jordan Shlosberg, founder of Atlas, for an unmissable episode of The RAG Podcast. Since we last spoke in September, Atlas – the world's first AI-powered CRM for recruiters – has skyrocketed to nearly £1M ARR in just 9 months! But this episode isn't just about AI: it's about how recruiters can leverage technology, brand building, and social selling to drive real revenue.

Here's what we cover:
- How AI is reshaping recruitment CRMs & why voice tech is the future.
- The Atlas x Hoxo Top 100 – ranking the most socially engaged recruitment leaders.
- Why personal branding & social selling aren't just buzzwords – they drive real business growth.
- How the most successful recruiters are using content to generate inbound leads.
- The power of consistency – why audience building is a long-term game with massive payoffs.

If you're in recruitment and want to understand how AI, branding, and content can transform your business, this one's for you!

Check out the ELITE TOP 100 list by Atlas & Hoxo discussed in this podcast here: https://hubs.ly/Q035mrPg0

Chapters
00:00 Introduction to the RAG Podcast and Atlas
01:55 Atlas's growth and AI integration
10:36 Customer base and SaaS metrics
12:02 Balancing work and personal life
16:20 The Atlas and Hoxo partnership
20:37 The importance of social selling
25:23 The role of opinion in recruitment
29:03 The future of recruitment and AI
33:37 Navigating recruitment challenges in 2025
34:58 The power of building an audience
36:53 Personal growth and brand building
39:14 The importance of consistency in recruitment
41:32 Understanding market dynamics
43:54 Purpose beyond placements
46:21 The role of authenticity in content creation
49:47 Crafting engaging content and hooks
53:39 The importance of authenticity in communication
56:36 AI's role in recruitment and sales
01:01:28 Delayed gratification in building relationships
01:05:41 The future of recruitment and brand building

Episode Sponsor: Atlas

Your memory isn't perfect. So Atlas remembers everything for you. Atlas is an end-to-end recruitment platform built for the AI generation. It automates your admin so you can focus on the business tasks that matter.

How many conversations do you have every day? With clients. Candidates. Your team. Service providers. Now how many of those conversations can you recall with 100% accuracy? How many hours a week do you spend making notes to try and retain as much as possible? And how much is still getting lost along the way?

Traditional CRM systems weren't built for the type of recruitment business you're running right now. They were built to rely on the structured, tagged, categorised, and formal data you could feed them. Manual processes that needed you to input specific information, based on specific questions and answers. But what about all the other conversations you're having every single day?

Atlas isn't an ATS or a CRM. It's an Intelligent Business Platform that helps you perform 10X better than you could on your own. How? By removing all your low-value tasks, acting as your perfect memory, and providing highly relevant recommendations to impact your performance.

Learn more about the power of Atlas – and take advantage of the exclusive offer for...
Today, we're talking with Adam Schoenfeld, CEO at Keyplay, a SaaS solution to help B2B marketers make GTM more efficient. In this episode, Adam talks about:
- How he built 3 startups to over $1M ARR
- His process that's allowed him to reliably reach Product Market Fit again and again
- And the critical mistakes founders always make... and how you can avoid them

Links
LinkedIn: https://www.linkedin.com/in/adamschoenfeld/

Chapters
00:00 Intro
01:19 Adam Schoenfeld's Journey to Keyplay
01:59 How to Build A Startup
02:33 Building a Product Audience and Discovering Problems
02:50 From PeerSignal to Keyplay
03:17 Monetizing the Discovery Process
04:05 Finding Product Market Fit
07:11 The Role of Design Partners
13:36 Early Product Traction and Validation
17:49 Challenges in Scaling
19:12 Challenges of Founding a SaaS Company
21:34 Adapting Sales Processes and Proving Value
23:29 Understanding Market Changes and Effective Marketing
28:59 Lessons from Past Ventures and Scaling Challenges
34:19 Insights on Product Market Fit and Distribution
36:57 Navigating Acquisitions and Future Outlook

Follow LaunchPod on YouTube
We have a new YouTube page (https://www.youtube.com/@LaunchPod.byLogRocket)! Watch full episodes of our interviews with PM leaders and subscribe!

What does LogRocket do?
LogRocket combines frontend monitoring, product analytics, and session replay to help software teams deliver the ideal product experience. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)

Special Guest: Adam Schoenfeld.
In the Pit with Cody Schneider | Marketing | Growth | Startups
Sub to my email newsletter for growth tactics: https://investorupdate.beehiiv.com/subscribe

Brought to you by...
Talent Fiber: Hire global talent for your growth marketing positions - https://talentfiber.com/

In this podcast episode, I discussed the challenges early-stage founders face in enterprise sales and how traditional marketing strategies often fall short. My guest, Colin, shared a unique solution: hosting intimate dinners with target customers to build relationships and generate leads. By spending just $1,500, founders can create a valuable networking opportunity that can lead to significant revenue growth.

We also explored the importance of partnerships with agencies to expand reach without the need for a dedicated sales team. Colin emphasized that these partnerships should focus on educating the agencies about the product, ensuring they can effectively promote it to their clients. This approach not only helps in generating leads but also builds a strong community around the brand, which is essential for long-term success in the health and wellness industry.

Timestamps:
(0:00) - Intro
(0:21) - Hosting Dinners for Target Customers
(3:07) - Partnership Arbitrage in E-commerce
(5:56) - Impact and Outcomes of Hosting Events
(10:09) - Logistics and City Selection for Events
(19:03) - Cold Outreach Strategies for Event Invitations
(29:03) - Leveraging Agency Partnerships for Distribution

Host Links
Personal email newsletter: https://investorupdate.beehiiv.com/subscribe
https://twitter.com/codyschneiderxx
https://www.linkedin.com/in/codyxschneider/
https://codyschneider.com/
https://inthepitpodcast.com/

Guest Links
https://www.instagram.com/colindougherty_/
https://www.camskns.com/
A remarkable growth story unfolds in this episode of the Product-Led Podcast as David, CEO and founder of Submagic, shares how his company scaled from $0 to $1 million ARR in just three months. Learn the strategies behind their success, including timing with the AI boom, a unique affiliate program, and an unwavering focus on customer feedback. Discover the practical steps and tools used by Submagic to build a sustainable business, and how their lean team of 15 now serves over 2 million users globally.

Show Notes:
[00:09] Introduction: From $0 to $1 Million ARR in 3 Months
[00:40] The Perfect Timing: Leveraging the AI Boom
[01:16] Submagic's Vision and MVP Process
[02:03] Building the Affiliate Strategy with Small TikTok Creators
[05:13] Gamification and Incentives for Affiliates
[07:00] Scaling Beyond the Affiliate Model
[09:12] Focus on Three Pillars: Product, Distribution, and Customer Experience
[12:10] Structuring a Lean Team for Growth
[13:14] Insights into Customer Engagement and Feedback Loops
[16:04] Sustaining 10% Monthly Growth and Competing in the Market
[18:06] Upcoming Features: Saving Time with Magic Clips
[20:09] Habits for Product-Led Growth: Listening and Prioritizing
[21:27] Final Takeaways for Founders

About David
David is the CEO and founder of Submagic, a bootstrapped SaaS company that focuses on empowering businesses with short-form content creation tools. His leadership blends creative expertise and a relentless focus on customer-centric growth.

About Submagic
Submagic revolutionizes short-form content creation for businesses with AI-powered tools designed for simplicity, efficiency, and scalability.

Connect with David
LinkedIn
Submagic
Timestamps:
10:22 - Defining and testing the Ideal Customer Profile
13:11 - Does ICP vary from B2B to B2C?
20:40 - Metrics that prove you've hit PMF
30:29 - When is it time to start scaling?

About Laurent Decrue & Boris Manhart:
Laurent Decrue is the co-founder of the moving company MOVU and the software company Holycode, and the former CEO at Bexio. Currently he is active as CFO and co-CEO at Holycode. He holds an MBA from the University of Basel and previously worked at DeinDeal. Boris Manhart is a serial entrepreneur and the founder of Growth Unltd, a company that helps scale your business. He was previously involved with CodeCheck and Numbrs, and studied for a couple of years at the Universität Zürich.

During their chat with Silvan, they discussed how to achieve the all-coveted product-market fit. Naturally, this journey starts with defining and testing your ICP, i.e. your Ideal Customer Profile:
- At the beginning, you'll create an ICP in your head based on assumptions. That's normal. You might even be more or less right. But you can't rely on your intuition alone.
- First of all, niche down. Go for a smaller market than the one you had in mind.
- Then conduct interviews to find out if it's the right market. This is how you can actually test all your customer development assumptions.

Once you've more or less figured out your ICP based on real customer data, you can start thinking about metrics. Which ones can tell you if you're really selling your product/service to the right market? Boris and Laurent don't like the common overreliance on figures like Customer Lifetime Value (CLV) or the $1M ARR benchmark. They both prefer the Sean Ellis test, which consists of asking people how disappointed they'd be if they could never use your product again. But don't get too hung up on the “40% were very disappointed” metric one sometimes sees floating around: each company is its own unique case.
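As a rough illustration of the Sean Ellis test described above, here's a minimal sketch that tallies hypothetical survey answers and compares the "very disappointed" share against the commonly cited 40% benchmark. All response counts are invented for the example; the episode's point stands that the benchmark is a heuristic, not a rule.

```python
from collections import Counter

# Hypothetical answers to the Sean Ellis question:
# "How disappointed would you be if you could never use this product again?"
responses = (
    ["very disappointed"] * 42
    + ["somewhat disappointed"] * 38
    + ["not disappointed"] * 20
)

counts = Counter(responses)
share_very = counts["very disappointed"] / len(responses)

# 40% "very disappointed" is the often-quoted threshold for PMF.
passes_benchmark = share_very >= 0.40
print(f"{share_very:.0%} very disappointed; benchmark hit: {passes_benchmark}")
```

With these made-up numbers the script reports a 42% share, just over the threshold; in practice you'd also segment by persona, since the test is most informative within your target ICP.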
It's often considered that once PMF is reached (or, at least, once you reach it for the first of many times), that's when you should start scaling your company. But you might want to think carefully about how to go about it: during his chat with us, Laurent shared the story of how Holycode made the mistake of hiring two new salespeople in Switzerland and Germany to move from founder-led sales to employee-led sales, when in fact they should have hired a growth marketing team to help generate more leads: without more leads, what use are salespeople to convert them? Check out the conversation for more valuable learnings on how to (continuously) reach PMF. The cover portrait was edited by www.smartportrait.io Don't forget to give us a follow on Twitter, Instagram, Facebook and Linkedin, so you can always stay up to date with our latest initiatives. That way, there's no excuse for missing out on live shows, weekly giveaways or founders' dinners.
What's a great ARR per FTE, and is it even realistic to get to $1M? And if you could, why should you? This and more we discuss in today's episode. (00:00) - Introduction (02:32) - Why $1 million, like really? (06:46) - The Mindset Shift (13:00) - What can we automate first? (18:31) - Average quality of talent (25:48) - Investing in Assets for Growth This episode is brought to you by Everstage - the highest-rated Sales Commissions Platform on G2, Gartner Peer Insights, and TrustRadius, with over 2,000 customer reviews. Some of their customers include leading brands like Diligent, Wiley, Trimble, Postman, and Chargebee. You can go to https://www.everstage.com/revenue-formula to check out Everstage and mention Revenue Formula to unlock a personalized Sales Compensation Strategy Session with Everstage's RevOps experts, crafted for enterprise teams to maximize performance. Never miss a new episode: join our newsletter on https://revenueformula.substack.com
Jen Abel is the co-founder of JJELLYFISH, where she and her team have worked with over 300 early-stage founders to learn how to sell, do early customer discovery, and set up a repeatable sales motion on the way to their first $1M ARR. In our conversation, Jen shares:• Why founder-led sales is so crucial early on• The sales process, step by step• How to craft effective outreach messages• Where to find leads• What three channels work best for outreach• What to say on your first call• How to maintain momentum• Strategies for navigating procurement and closing deals• Common pitfalls in the sales process and how to avoid them—Brought to you by:• Brave Search—A smarter way to search• Vanta—Automate compliance. Simplify security• Paragon—Ship every SaaS integration your customers want—Find the transcript at: https://www.lennysnewsletter.com/p/master-founder-led-sales-jen-abel—Where to find Jen Abel:• X: https://x.com/jjen_abel• LinkedIn: https://www.linkedin.com/in/earlystagesale—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Jen's background(02:20) The importance of founder-led sales(08:24) The steps of a sales cycle(12:01) Tactics for effective cold outreach(16:47) Conversion rate vs. win rate(20:20) The time it takes to find product-market fit(23:06) Identifying and engaging prospects(30:58) Nailing the first phone call(34:14) Buying vs. 
selling(38:08) Testing the questions to ask(41:57) Avoiding common sales questions and securing the second call(43:08) Co-authoring with customers(45:06) Time-boxing service contracts(49:20) Why you should avoid demos on the first call(51:05) Dealing with procurement(54:22) The power of enterprise sales(58:14) Getting a signature(01:00:15) Choosing a focus and overcoming sales challenges(01:02:19) General timelines(01:04:27) Final thoughts and advice(01:13:32) Working with Jen—Referenced:• Wiz: https://www.wiz.io/• JJELLYFISH: https://www.jjellyfish.com/• Clay: https://www.clay.com/• A guide for finding product-market fit in B2B: https://www.lennysnewsletter.com/p/finding-product-market-fit• Airtable: https://www.airtable.com/• Figma: https://www.figma.com/• GitHub: https://github.com/• Vanta: https://www.vanta.com/• Christina Cacioppo on LinkedIn: https://www.linkedin.com/in/ccacioppo/• Glengarry Glen Ross: https://www.imdb.com/title/tt0104348/• A step-by-step guide to crafting a sales pitch that wins | April Dunford (author of Obviously Awesome and Sales Pitch): https://www.lennysnewsletter.com/p/a-step-by-step-guide-to-crafting• Sales Pitch: How to Craft a Story to Stand Out and Win: https://www.amazon.com/Sales-Pitch-Craft-Story-Stand-ebook/dp/B0CHY6BNDN• Sprig: https://sprig.com/• Zip: https://zip.co/—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
David's startup failed. But he had everything going for him: a solid thesis, $16M in funding across 3 rounds, $1.5M in ARR. At a high level, it seemed like everything was going the right way. And yet, it didn't work out. This is what happens to 95% of startups. On this show, we mainly speak with the top 5%, the ones where things went right and everything worked out. But you tend to learn more from failures than successes. On this episode, we go deep with David to see what building Tandym was like, why it ultimately didn't work, and what he would do differently the second time around. Why you should listen: Why you should always start with the model that requires the least capital. Why you need to be a number one priority for your customers. Why even hitting $1M ARR doesn't mean you will succeed. Why you need to pivot quickly as soon as things are clearly not working. Keywords: product-market fit, startup journey, fundraising, fintech, brand partnerships, business model, sales challenges, urgency in sales, Tandym, lessons learned, startup, product strategy, compliance, revenue growth, entrepreneurship, business pivot, mid-market brands, capital management. Timestamps: (00:00:00) Intro (00:03:30) The Origin of Tandym (00:09:26) Taking the Leap (00:11:37) The Business Model (00:17:22) Developing the Product (00:21:05) Struggling to Create Urgency (00:26:50) Raising Rounds & Shifting (00:35:11) First Signs of Problems (00:39:01) The Product that we should've launched (00:42:12) How it All Ended (00:49:56) Final Thoughts & Advice. Send me a message to let me know what you think!
Spending money on marketing is often not an option for bootstrapped startups, at least not enough to make a meaningful impact. Without an abundance of resources, there's no choice but to get creative. While founder-led sales can get you off the ground, your network can be exhausted pretty quickly, leaving you with the challenge of figuring out new ways to generate demand. On this episode, Melissa and Todd share the 12 most effective zero-dollar marketing strategies that got eWebinar to $1M ARR. Takeaways: • Why paid marketing should be your last resort • How to create evergreen content that lasts forever • 12 $0 strategies you can steal to grow your own business. Contact ProfitLed: Tweet us at @profitledfm. Find show notes of each episode on ProfitLed.fm. Connect with our host: Follow Melissa Kwan on LinkedIn where she shares stories & lessons from her founder journey weekly. Follow @themelissakwan on Instagram, TikTok, Twitter and YouTube where she shares short videos of business advice and other truth-bomb sound bites. This podcast was brought to you by eWebinar. Learn more at ewebinar.com and find out how you can turn pre-recorded videos into automated webinars that perform better than a live webinar. Thanks for listening!
In this episode of the SaaS Revolution Show our host Alex Theuma is joined live from the SaaStock Studio at SaaStock Europe 2024 by Amos Haggiag, Co-Founder & CEO at Optibus, who shares his biggest learnings from building Optibus into a global unicorn. "We sell a mission critical product. When a customer chooses Optibus, if Optibus doesn't work, the public transportation stops working - busses don't arrive on time or don't arrive at all, passengers get the wrong information, like, it's a disaster. So the risk is so high that in order to anyone to choose Optibus, it's not enough to just bring like a really good product, you need to also convince [them] that you're not introducing risk, that you know how to work in complexity, you know how to work in different regions, it's very local. And because of that, at some point, we decided to change the organisation into more focus on the local region kind of management." Amos shares: • Optibus' funding journey to a $100M Series D funding round and a $1.3B valuation • The moment they realised they had surpassed $1M ARR (and why he didn't think they had product market fit at the time!) • Scaling their sales team to accommodate their global customer base • From localised to centralised: the painful process of restructuring • Challenges with delegating tough decisions as CEO • The importance of having a long term focus and more! Check out the other ways SaaStock is serving SaaS founders
Jaspar graduated YC & closed an $11.5M seed round this week. He launched Artisan just 8 months ago. And this is the first venture-backed startup he's ever run. He started with product-led growth but struggled. In May, he moved to a sales-first go-to-market & scaled from $200K ARR to $1.3M ARR by September. We go deep and tactical to figure out exactly what he did to grow so fast: from hiring a Chief of Staff as one of his first hires, to living in the same house as his employees, to meeting 50-100 Account Executives for each AE he hires. He shares specific numbers, specific tactics, and specific mistakes he made along the way. Why you should listen: - How solving a #1 priority problem is the single biggest reason for fast growth. - How to use LinkedIn, Reddit, and SEO to generate so many leads your AEs are drowning. (Jaspar's AEs do 20 demo calls per day.) - How to clone yourself by hiring a Chief of Staff early on. - How to find 10/10 Account Executives who close $300K ARR in their first month. Keywords: startup growth, sales strategy, marketing tactics, customer success, hiring AEs, product-led growth, sales-led growth, founder insights, business scaling, ARR, product-market fit. Timestamps: (00:00:00) Intro (00:03:44) Getting leads from Reddit (00:06:00) Launching V1 quickly (00:12:28) Getting leads from blogs & SEO (00:15:33) Getting leads from LinkedIn (00:17:10) Why you should hire a Chief of Staff Early (00:21:30) How to hire the best Account Executives (00:24:46) How to get to $1M ARR (00:26:10) Making Customers Successful (00:29:43) One Piece of Advice. Send me a message to let me know what you think!
In this episode of the SaaS Revolution Show our host Alex Theuma is joined by Job van der Voort, Co-Founder & CEO at Remote, who shares their hypergrowth journey and how they scaled Remote to hundreds of millions in ARR. "I would always say this is a great problem to have. If you grow really, really fast and it's a luxury problem, and if you just embrace it like that, then everything becomes much easier. Just embrace it in the most honest possible way, which is that, wow, we get so much interest that we have to solve these really difficult problems right now. It's for sake of, you know, our success, or at least the direction of success. To me, that was never difficult. What is difficult is how it affects the rest of your life." Job shares: • From $0 to $1M ARR in six months, and $1M ARR to hundreds of millions in three years: the operational and human problems Remote faced during rapid growth • His attitude towards increased responsibility and fighting fires - 'it's a great luxury to have problems to solve' • Why too many people get distracted by 'building for the sake of building' and deprioritise talking to customers and actively marketing the product • How to stay clear-headed during a period of hypergrowth • 'The sun will shine tomorrow', how he stays motivated when times get tough and more!Check out the other ways SaaStock is serving SaaS founders
App Masters - App Marketing & App Store Optimization with Steve P. Young
Discover the ad creative strategy that helped ReciMe go from $500 to $20,000 a DAY on ad spend with a positive ROAS. Ivy Nguyen is the Co-Founder and Head of Growth at ReciMe – the ultimate app for organizing your recipes, which recently achieved $1M ARR. Ivy brings a wealth of experience in growth strategies and paid advertising. Ivy will share invaluable insights on creating a “unicorn” ad, how ReciMe became the #1 grossing food app in Australia (and #3 in the US), and how she rapidly tests ad creatives with a small team. You will discover: -How to create a unicorn ad -Game-changing ad creative ideas -How they went from 8% to 22% free trial start rate Learn More: www.recime.app https://www.linkedin.com/in/ivy-nguyen-66b04b1a2/ App Audits: App 1: iOS Link: https://apps.apple.com/us/app/dandelion-habit-tracker/id6448731173 GP Link: https://play.google.com/store/apps/details?id=me.aravi.dandelion App 2: iOS Link: https://apps.apple.com/us/app/unscripted-photography-poses/id1438843099?platform=iphone GP Link: https://play.google.com/store/apps/details?id=com.unscripted.posing.app&hl=en You can also watch the video: https://youtube.com/live/H3NbHEucY8k Book a consultation call: https://calendly.com/appmasters-alok/30min Work with us to grow your apps faster & cheaper: http://www.appmasters.com/ ********************************************* SPONSORS Tired of App Store limitations? FunnelFox empowers mobile apps with web funnels that offer 100% attribution, low payment commissions, and higher subscription retention. Build and optimize your web funnel with a no-code builder, seamless payments, and A/B testing. Ready to boost your app's revenue? Visit https://funnelfox.io/ to get started. ********************************************* We all have developer horror stories. From language barriers to bad code to delivering on time. That's why I recommend using B7Dev.com. They are affordable, fast and, more importantly, trustworthy. Go to B7Dev.com. 
********************************************* Follow us: YouTube: AppMasters.com/YouTube Instagram: @App Masters Twitter: @App Masters TikTok: @stevepyoung Facebook: App Masters *********************************************
Don co-founded Vena & was CEO for 8 years. Last month, Vena became a Centaur, crossing $100M in ARR. Don & his team launched Vena in a crowded market where everyone else was trying to replace Excel. So they built a budgeting and planning (FP&A) solution that instead leveraged Excel. As Don says, they "disrupted the disruptors". Don knew the problem, he knew the space, he knew the customers: that's why he grew from 0 to $1M in ARR in 18 months. Here's exactly how he did it. Why you should listen: How to close the first few customers. How to close 6-figure deals without any proof points. Having knowledge and a network in the industry you're entering makes it easier to find success as a founder. In crowded markets, a differentiated approach and the ability to solve pain points for customers are key to standing out. Why there's no feeling like the satisfaction of taking an idea and product to market, working with great people, and seeing others succeed. Keywords: startup, scaling, Vena Solutions, network, crowded markets, budgeting and planning, differentiation, pain points, revenue intelligence, SaaS, founders, startups, entrepreneurship, entrepreneur, venture capital. Timestamps: (00:00:00) Intro (00:02:36) The Beginning of Vena Solutions (00:06:06) Step 1 of Vena (00:09:35) First Conversations (00:11:09) The Market in Planning (00:17:50) The Market in Workflow (00:20:02) Wells Fargo (00:22:23) Getting to a Million (00:26:05) The Biggest ROI (00:29:03) Una (00:34:33) Not Retiring (00:35:55) Finding Product Market Fit (00:38:29) One Piece of Advice. Send me a message to let me know what you think!
In this episode of the SaaS Revolution Show our host Alex Theuma is joined by Mikita Mikado, Co-Founder & CEO at Unicorn SaaS PandaDoc, who shares his lessons in scaling past $100M ARR. "It takes a lot of patience to run a business at scale. I'm not sure I have it, I feel like I need to build a lot of it. And this tendency of, like, getting something done instead of having conversations around the topic so that people get an insight or bring their perspective on, like, how something needs to be done. It's a really tough transition and I still struggle with it." Mikita shares: - Getting to $1M ARR and the tedious work that went into building a functioning customer flywheel - From hiring generalists to specialising teammates - How PandaDoc's tenure-defying CROs helped make the company what it is today - The important transition to optionality (and why he struggles with it) - A sneak peek into his SaaStock Europe 2024 speaking session and more.Check out the other ways SaaStock is serving SaaS founders
Ashutosh is one of those rare founders who founded not just one, but two unicorns. He worked at Google for 4 years, left and started Bloomreach, which was last valued at $2.2B. Halfway through that journey, he left to do it all over again. He started Eightfold AI which is the one we're talking about today. In 2021, he raised $220M from Softbank at a $2.1B valuation. When he left Bloomreach, he didn't even have a clear idea of what he was going to build. He just knew he wanted to have more impact and go from 0 to 1 again.It took him about 2 years to figure it out, but once he did, he found true product market fit and he scaled from $1M ARR to $3M ARR the year after & $12M ARR the year after that. Why you should listenWhy founders need to validate ideas with an open mind to not have tunnel vision.Why even unicorn founders don't get it right and often need to pivot to success. How to address problems that are not just today problems, but likely to be problems for a long time. Keywordsunicorn founder, Bloomreach, Eightfold, product-market fit, pivot, HR space, digital marketers, talent, hiring, market need, scalingTimestamps:(00:00:00) Intro(00:01:41) His first unicorn - Bloomreach(00:05:22) Starting Eightfold AI(00:08:50) Not Marrying Yourself to One Idea(00:11:55) Finding real customer problems(00:14:46) Solving Today's Problems vs Future Problems(00:16:47) Hiring From Already Rejected Candidates(00:22:50) Fundraising(00:24:40) Building a V1(00:27:00) Pivoting and then Reverting Back(00:30:43) First Customers(00:32:59) Finding Product Market Fit(00:35:04) One Piece of AdviceSend me a message to let me know what you think!
Pierce launched a consulting business that grew to millions in revenue and dozens of employees. But he noticed his customers kept asking for the same solution. So he launched Knak, an email marketing and landing page builder that integrates with Marketo. Unlike most founders, he didn't go all-in. He kept working on his consulting business full-time until Knak hit $1M in ARR. After he transitioned over to Knak, he bootstrapped it to $5M ARR and then raised a $25M Series A. Here's the story of how Knak found product-market fit. Why you should listen: How to transition from a consulting/services business to a scalable tech startup, and why it might be more painful than you think. What you need to do to shift from SMB to midmarket and from midmarket to enterprise. How to use enterprise customers to scale to $10M+ without needing thousands of customers. How to balance urgency and patience to build a successful startup. Keywords: patience, learning, building a company, failures, product-market fit, consulting business, marketing automation platforms, template product, creation platform, raising funding, urgency. Timestamps: (00:00:00) Intro (00:08:50) The Story of Knak (00:11:04) Going from Zero to a Million (00:16:16) Pivoting from SMB to midmarket (00:19:42) Adding "obvious" features to get closer to PMF (00:24:07) How to position against incumbents (00:27:35) Turning Into an Enterprise Platform (00:30:30) Finding True Product Market Fit (00:33:34) Bootstrapping to Series A (00:40:02) One Piece of Advice. Send me a message to let me know what you think!
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Saam Motamedi is a General Partner at Greylock, where he has led investments in Abnormal Security (incubated at Greylock), Apiiro Security and Opal Security, as well as AI companies like Adept, Braintrust, Cresta, Predibase, Snorkel, and more. Before Greylock, Saam founded Guru Labs, a machine learning-driven fintech startup, and worked in product management at RelateIQ, one of the first applied AI software companies. In Today's Conversation We Discuss: 1. Seed Today is Frothier than 2021: How does Saam evaluate the seed market today? With seed pricing being so high, how does he reflect on his own price sensitivity? When does he say too much and does not do it? Despite seed pricing being higher than ever before, why does Saam believe it is rational? How has the competition at seed changed in the last few years? 2. Series B and Growth are not a Viable Asset Class Today: Why does Saam believe that you cannot make money at Series B today? Why has pricing gone through the roof? Who is the new competition? When does it make sense to "play the game on the field" vs say this is BS and do something else? What would need to happen in the public markets for Series B to be a viable asset class again? 3. Markets vs Founders: The Billion Dollar Mistake and Lessons: How does Saam prioritise between founder vs market? What have been Saam's biggest lessons when it comes to market sizing and timing? What is Saam's biggest miss? How did it change his approach and company evaluation? Which other VC would Saam most like to swap portfolios with? Why them? 4. Saam Motamedi: AMA: What does Saam know now that he wishes he had known when he got into VC? Saam has had a meteoric rise in Greylock, what advice does Saam have for those younger investors looking to really scale within a firm? Sourcing, selecting and servicing: Where is he best? Where is he worst? Why does Saam believe that most VCs do not add value? 
20VC: Why We Are in a Bubble & Now is Frothier Than 2021 | Why $1M ARR is a BS Milestone for Series A | Why Seed Pricing is Rational & Large Seed Rounds Have Less Risk | Why Many AI Apps Have BS Revenue & Are Not Sustainable with Saam Motamedi @ Greylock
Welcome to Season 2 of the Dorm Room Disruptors Podcast. In this season we want to dial in on one topic that often draws lots of curiosity… Y Combinator. Y Combinator is a startup bootcamp that runs multiple times a year and is highly exclusive. Airbnb, Reddit, and DoorDash are all alumni from the program. This episode is sponsored by Jetson, the #1 entrepreneurship app. If you are thinking of starting a business, try out Jetson Pro here. Lucious is the co-founder and Chief Eating Officer at BiteSight. Before starting his company, Lucious built his first $1M ARR product while moonlighting at a YC company at 19 years old. Lucious also built a treasury management company focused on nonprofits and was an analyst at General Atlantic covering restaurant tech. He is now building BiteSight with the help of his co-founder. BiteSight is a video-based food delivery app that learns what you like and shows you exactly what you are craving. Imagine TikTok meets DoorDash. Since launching the beta in February, BiteSight has onboarded 274 restaurants in SF, and users are growing more than 50% each week, all organically. Chapter 1: Pivoting from QR Code Menus to Food Delivery Chapter 2: Getting Accepted into Y Combinator with a New Idea Chapter 3: Launching BiteSight and Onboarding Restaurants Chapter 4: Growing the Business and Competing with Uber Eats Chapter 5: Expanding to New York and Focusing on Recommendations
Going from zero to $1M in annual recurring revenue is worth celebrating, but it's no guarantee that your startup is on a path to success. How long did it take to reach this milestone? How much did you spend to acquire each customer, and how many have renewed or extended their contracts? When it comes to sales, repeatability equals success. To learn more about the tactics startups use to reach $1 million ARR (and beyond), I interviewed Rehan Jalil, CEO of Securiti. Drawing from his experience as a three-time founder, we talked about setting up an initial sales motion, understanding and validating the problem you're solving, focusing on a specific audience to hone your value proposition, and building trust with early customers. Episode breakdown: “Before we even get to the revenue, it actually is important to understand what problem you're trying to solve.” [2:10] “The only metric is repeatability.” [4:55] “So we coined the term ‘privacy ops'… we wrote a book on it.” [7:35] “Teams have to be very much in sync, which means information has to flow.” [10:39] “By the time you get to MVP, you better have refined this thing.” [12:40] “Before you bring in sales teams, it's important that you actually have the content for them.” [15:34] “The bar is very high within the enterprise, and it takes much longer.” [18:29] “If you don't start with conviction, don't do it.” [21:41] “Find people who have a common interest on the problem that you're trying to solve.” [26:08] “If they feel like you helped them… they can actually be advocates.” [28:14] “Just putting things freemium out there doesn't mean people are going to adopt it.” [30:41] “Prioritize finding a viable use case and viable product strategy vs. rushing into something.” [32:36] Thanks for listening! Follow Fund/Build/Scale on LinkedIn and Substack.
In this episode, you will learn 3 key things: Is leadership right for you? The transition from being an IC to a leadership role Tactical tips and insights from KD on leadership --- KD, Kevin Dorsey, known as the Father of Modern Leadership. Here's what KD has done as a sales leader: At Bench, he led the team to their first $1M ARR month, tripling their customer base. At PatientPop, he expanded the team to over 170 members, doubled conversion rates, and tripled the sales pipeline. At ServiceTitan, he successfully grew the team from 20 to more than 70 in just one year. Connect with KD on LinkedIn https://www.linkedin.com/in/kddorsey3/ Resources mentioned in the episode: KD's course: Sales Leadership Accelerator https://www.salesleadershipaccelerator.com/ Books: Radical Candor by Kim Scott Sales Manager Survival Guide by David Brock Cracking the Sales Management Code by Jason Jordan The Connector Manager by Jaime Roca, and Sari Wilde ---
Maggie, Linus, Geoffrey, and the LS crew are reuniting for our second annual AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair, for early birds who join before keynote announcements! It's become fashionable for many AI startups to project themselves as “the next Google”: while the search engine is so 2000s, both Perplexity and Exa referred to themselves as a “research engine” or “answer engine” in our NeurIPS pod. However, these searches tend to be relatively shallow, and it is challenging to zoom up and down the ladders of abstraction to garner insights. For serious researchers, this level of simple one-off search will not cut it. We've commented in our Jan 2024 Recap that Flow Engineering (simply: multi-turn processes over many-shot single prompts) seems to offer far more performance, control, and reliability for a given cost budget. Our experiments with Devin and our understanding of what the new Elicit Notebooks offer give a glimpse into the potential for very deep, open-ended, thoughtful human-AI collaboration at scale. It starts with prompts. When ChatGPT exploded in popularity in November 2022, everyone was turned into a prompt engineer. While generative models were good at “vibe-based” outcomes (tell me a joke, write a poem, etc.) with basic prompts, they struggled with more complex questions, especially in symbolic fields like math and logic. Two of the most important “tricks” that people picked up on were:* The Chain-of-Thought prompting strategy proposed by Wei et al. in the “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” paper. 
Rather than doing traditional few-shot prompting with just questions and answers, adding the thinking process that led to the answer resulted in much better outcomes.* Adding "Let's think step by step" to the prompt as a way to boost zero-shot reasoning, which was popularized by Kojima et al. in the “Large Language Models are Zero-Shot Reasoners” paper from NeurIPS 2022. This bumped accuracy from 17% to 79% compared to zero-shot. Nowadays, prompts include everything from promises of monetary rewards to… whatever the Nous folks are doing to turn a model into a world simulator. At the end of the day, the goal of prompt engineering is increasing accuracy, structure, and repeatability in the generation of a model. From prompts to agents. As prompt engineering got more and more popular, agents (see “The Anatomy of Autonomy”) took over Twitter with cool demos, and AutoGPT became the fastest-growing repo in GitHub history. The thing about AutoGPT that fascinated people was the ability to simply put in an objective without worrying about explaining HOW to achieve it, or having to write very sophisticated prompts. The system would create an execution plan on its own, and then loop through each task. The problem with open-ended agents like AutoGPT is that 1) it's hard to replicate the same workflow over and over again, and 2) there isn't a way to hard-code specific steps that the agent should take without actually coding them yourself, which isn't what most people want from a product. From agents to products. Prompt engineering and open-ended agents were great in the experimentation phase, but this year more and more of these workflows are starting to become polished products. Today's guests are Andreas Stuhlmüller and Jungwon Byun of Elicit (previously Ought), an AI research assistant that they think of as “the best place to understand what is known”. Ought was a non-profit, but last September, Elicit spun off into a PBC with a $9m seed round. 
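The two prompting "tricks" above amount to plain string construction. A minimal, illustrative sketch (no model is called; the exemplar is an abbreviated version of the well-known grade-school-math one from the chain-of-thought literature, not a prompt from Elicit):

```python
# Illustrative sketch only: how the two prompting tricks assemble a prompt.
# No model is called; the exemplar below is a hypothetical, abbreviated one.

def few_shot_cot_prompt(examples, question):
    """Few-shot chain-of-thought (Wei et al.): each exemplar carries the
    reasoning that led to its answer, not just a question/answer pair."""
    parts = [
        f"Q: {ex['question']}\nA: {ex['reasoning']} The answer is {ex['answer']}."
        for ex in examples
    ]
    parts.append(f"Q: {question}\nA:")  # leave the final answer for the model
    return "\n\n".join(parts)

def zero_shot_cot_prompt(question):
    """Zero-shot chain-of-thought (Kojima et al.): no exemplars, just the
    trigger phrase appended to elicit step-by-step reasoning."""
    return f"Q: {question}\nA: Let's think step by step."

demo_examples = [{
    "question": "Roger has 5 balls and buys 2 cans of 3 balls each. "
                "How many balls does he have now?",
    "reasoning": "Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
                 "5 + 6 = 11.",
    "answer": "11",
}]

few_shot = few_shot_cot_prompt(demo_examples, "A train travels 60 km in 1.5 hours. What is its speed?")
zero_shot = zero_shot_cot_prompt("A train travels 60 km in 1.5 hours. What is its speed?")
```

The key difference is where the reasoning comes from: the few-shot variant demonstrates it in the exemplars, while the zero-shot variant merely invites the model to produce it.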
It is hard to quantify how much a workflow can be improved, but Elicit boasts some impressive numbers for research assistants: just four months after launch, Elicit crossed $1M ARR, which shows how much interest there is in AI products that just work. One of the main takeaways we had from the episode is how teams should focus on supervising the process, not the output. Their philosophy at Elicit isn't to train general models, but to train models that are extremely good at focused processes. This allows them to have pre-created steps that the user can add to their workflow (like classifying certain features that are specific to their research field) without having to write a prompt for it. And for Hamel Husain's happiness, they always show you the underlying prompt. Elicit recently announced notebooks as a new interface to interact with their products (fun fact: they tried to implement this 4 times before they landed on the right UX! We discuss this at ~33:00 in the podcast). The reasons why they picked notebooks as a UX all tie back to process:* They are systematic; once you have an instruction/prompt that works on a paper, you can run hundreds of papers through the same workflow by creating a column. Notebooks can also be edited and exported at any point during the flow.* They are transparent - Many papers include an opaque literature review as perfunctory context before getting to their novel contribution. But PDFs are “dead” and it is difficult to follow the thought process and exact research flow of the authors. Sharing “living” Elicit Notebooks opens up this process.* They are unbounded - Research is an endless stream of rabbit holes. So it must be easy to dive deeper and follow up with extra steps, without losing the ability to surface for air. We had a lot of fun recording this, and hope you have as much fun listening! AI UX in SF: Long-time Latent Spacenauts might remember our first AI UX meetup with Linus Lee, Geoffrey Litt, and Maggie Appleton last year. 
Well, Maggie has since joined Elicit, and they are all returning at the end of this month! Sign up here: https://lu.ma/aiux

And submit demos here! https://forms.gle/iSwiesgBkn8oo4SS8

We expect the 200 seats to “sell out” fast. Attendees with demos will be prioritized.

Show Notes
* Elicit
* Ought (their previous non-profit)
* “Pivoting” with GPT-4
* Elicit notebooks launch
* Charlie
* Andreas' Blog

Timestamps
* [00:00:00] Introductions
* [00:07:45] How Jungwon and Andreas Joined Forces to Create Elicit
* [00:10:26] Why Products > Research
* [00:15:49] The Evolution of Elicit's Product
* [00:19:44] Automating Literature Review Workflow
* [00:22:48] How GPT-3 to GPT-4 Changed Things
* [00:25:37] Managing LLM Pricing and Performance
* [00:31:07] Open vs. Closed: Elicit's Approach to Model Selection
* [00:31:56] Moving to Notebooks
* [00:39:11] Elicit's Budget for Model Queries and Evaluations
* [00:41:44] Impact of Long Context Windows
* [00:47:19] Underrated Features and Surprising Applications
* [00:51:35] Driving Systematic and Efficient Research
* [00:53:00] Elicit's Team Growth and Transition to a Public Benefit Corporation
* [00:55:22] Building AI for Good

Full Interview on YouTube

As always, a plug for our YouTube version for the 80% of communication that is nonverbal:

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:15]: Hey, and today we are back in the studio with Andreas and Jungwon from Elicit. Welcome.Jungwon [00:00:20]: Thanks guys.Andreas [00:00:21]: It's great to be here.Swyx [00:00:22]: Yeah. So I'll introduce you separately, but also, you know, we'd love to learn a little bit more about you personally. So Andreas, it looks like you started Elicit first, Jungwon joined later.Andreas [00:00:32]: That's right.
For all intents and purposes, the Elicit and also the Ought that existed before then were very different from what I started. So I think it's like fair to say that you co-founded it.Swyx [00:00:43]: Got it. And Jungwon, you're a co-founder and COO of Elicit now.Jungwon [00:00:46]: Yeah, that's right.Swyx [00:00:47]: So there's a little bit of a history to this. I'm not super aware of like the sort of journey. I was aware of Ought and Elicit as sort of a nonprofit type situation. And recently you turned into like a B Corp, Public Benefit Corporation. So yeah, maybe if you want, you could take us through that journey of finding the problem. You know, obviously you're working together now. So like, how do you get together to decide to leave your startup career to join him?Andreas [00:01:10]: Yeah, it's truly a very long journey. I guess truly, it kind of started in Germany when I was born. So even as a kid, I was always interested in AI, like I kind of went to the library. There were books about how to write programs in QBasic and like some of them talked about how to implement chatbots.Jungwon [00:01:27]: To be clear, he grew up in like a tiny village on the outskirts of Munich called Dinkelscherben, where it's like a very, very idyllic German village.Andreas [00:01:36]: Yeah, important to the story. So basically, the main thing is I've kind of always been thinking about AI my entire life and been thinking about, well, at some point, this is going to be a huge deal. It's going to be transformative. How can I work on it? And was thinking about it from when I was a teenager, after high school did a year where I started a startup with the intention to become rich. And then once I'm rich, I can affect the trajectory of AI. Did not become rich, decided to go back to college and study cognitive science there, which was like the closest thing I could find at the time to AI.
In the last year of college, moved to the US to do a PhD at MIT, working on broadly kind of new programming languages for AI because it kind of seemed like the existing languages were not great at expressing world models and learning world models doing Bayesian inference. Was always thinking about, well, ultimately, the goal is to actually build tools that help people reason more clearly, ask and answer better questions and make better decisions. But for a long time, it seemed like the technology to put reasoning in machines just wasn't there. Initially, at the end of my postdoc at Stanford, I was thinking about, well, what to do? I think the standard path is you become an academic and do research. But it's really hard to actually build interesting tools as an academic. You can't really hire great engineers. Everything is kind of on a paper-to-paper timeline. And so I was like, well, maybe I should start a startup, pursued that for a little bit. But it seemed like it was too early because you could have tried to do an AI startup, but probably would not have been this kind of AI startup we're seeing now. So then decided to just start a nonprofit research lab that's going to do research for a while until we better figure out how to do thinking in machines. And that was Ought. And then over time, it became clear how to actually build actual tools for reasoning. And only over time, we developed a better way to... I'll let you fill in some of the details here.Jungwon [00:03:26]: Yeah. So I guess my story maybe starts around 2015. I kind of wanted to be a founder for a long time, and I wanted to work on an idea that stood the test of time for me, like an idea that stuck with me for a long time. And starting in 2015, actually, originally, I became interested in AI-based tools from the perspective of mental health. So there are a bunch of people around me who were really struggling.
One really close friend in particular was really struggling with mental health and didn't have any support, and it didn't feel like there was anything before kind of like getting hospitalized that could just help her. And so luckily, she came and stayed with me for a while, and we were just able to talk through some things. But it seemed like lots of people might not have that resource, and something maybe AI-enabled could be much more scalable. I didn't feel ready to start a company then, that's 2015. And I also didn't feel like the technology was ready. So then I went into FinTech and kind of learned how to do the tech thing. And then in 2019, I felt like it was time for me to just jump in and build something on my own I really wanted to create. And at the time, I looked around at tech and felt like not super inspired by the options. I didn't want to have a tech career ladder, or I didn't want to climb the career ladder. There are two kind of interesting technologies at the time, there was AI and there was crypto. And I was like, well, the AI people seem like a little bit more nice, maybe like slightly more trustworthy, both super exciting, but threw my bet in on the AI side. And then I got connected to Andreas. And actually, the way he was thinking about pursuing the research agenda at Ought was really compatible with what I had envisioned for an ideal AI product, something that helps kind of take down really complex thinking, overwhelming thoughts and breaks it down into small pieces. And then this kind of mission that we need AI to help us figure out what we ought to do was really inspiring, right? Yeah, because I think it was clear that we were building the most powerful optimizer of our time. But as a society, we hadn't figured out how to direct that optimization potential. And if you kind of direct tremendous amounts of optimization potential at the wrong thing, that's really disastrous.
So the goal of Ought was to make sure that if we build the most transformative technology of our lifetime, it can be used for something really impactful, like good reasoning, like not just generating ads. My background was in marketing, but like, so I was like, I want to do more than generate ads with this. But also if these AI systems get to be super intelligent enough that they are doing this really complex reasoning, that we can trust them, that they are aligned with us and we have ways of evaluating that they're doing the right thing. So that's what Ought did. We did a lot of experiments, you know, like I just said, before foundation models really like took off. A lot of the issues we were seeing were more in reinforcement learning, but we saw a future where AI would be able to do more kind of logical reasoning, not just kind of extrapolate from numerical trends. We actually kind of set up experiments with people where kind of people stood in as super intelligent systems and we effectively gave them context windows. So they would have to like read a bunch of text and one person would get less text and one person would get all the texts and the person with less text would have to evaluate the work of the person who could read much more. So like in a world we were basically simulating, like in 2018, 2019, a world where an AI system could read significantly more than you and you as the person who couldn't read that much had to evaluate the work of the AI system. Yeah. So there's a lot of the work we did. And from that, we kind of iterated on the idea of breaking complex tasks down into smaller tasks, like complex tasks, like open-ended reasoning, logical reasoning into smaller tasks so that it's easier to train AI systems on them. And also so that it's easier to evaluate the work of the AI system when it's done. And then also kind of, you know, really pioneered this idea, the importance of supervising the process of AI systems, not just the outcomes.
So a big part of how Elicit is built is we're very intentional about not just throwing a ton of data into a model and training it and then saying, cool, here's like scientific output. Like that's not at all what we do. Our approach is very much like, what are the steps that an expert human does or what is like an ideal process as granularly as possible, let's break that down and then train AI systems to perform each of those steps very robustly. When you train like that from the start, after the fact, it's much easier to evaluate, it's much easier to troubleshoot at each point. Like where did something break down? So yeah, we were working on those experiments for a while. And then at the start of 2021, decided to build a product.Swyx [00:07:45]: Do you mind if I, because I think you're about to go into more modern thought and Elicit. And I just wanted to, because I think a lot of people are in where you were like sort of 2018, 19, where you chose a partner to work with. Yeah. Right. And you didn't know him. Yeah. Yeah. You were just kind of cold introduced. A lot of people are cold introduced. Yeah. Never work with them. I assume you had a lot, a lot of other options, right? Like how do you advise people to make those choices?Jungwon [00:08:10]: We were not totally cold introduced. So one of our closest friends introduced us. And then Andreas had written a lot on the Ought website, a lot of blog posts, a lot of publications. And I just read it and I was like, wow, this sounds like my writing. And even other people, some of my closest friends I asked for advice from, they were like, oh, this sounds like your writing. But I think I also had some kind of like things I was looking for. I wanted someone with a complementary skill set. I want someone who was very values aligned. And yeah, that was all a good fit.Andreas [00:08:38]: We also did a pretty lengthy mutual evaluation process where we had a Google doc where we had all kinds of questions for each other.
And I think it ended up being around 50 pages or so of like various like questions and back and forth.Swyx [00:08:52]: Was it the YC list? There's some lists going around for co-founder questions.Andreas [00:08:55]: No, we just made our own questions. But I guess it's probably related in that you ask yourself, what are the values you care about? How would you approach various decisions and things like that?Jungwon [00:09:04]: I shared like all of my past performance reviews. Yeah. Yeah.Swyx [00:09:08]: And he never had any. No.Andreas [00:09:10]: Yeah.Swyx [00:09:11]: Sorry, I just had to, a lot of people are going through that phase and you kind of skipped over it. I was like, no, no, no, no. There's like an interesting story.Jungwon [00:09:20]: Yeah.Alessio [00:09:21]: Yeah. Before we jump into what Elicit is today, the history is a bit counterintuitive. So you start with figuring out, oh, if we had a super powerful model, how would we align it? But then you were actually like, well, let's just build the product so that people can actually leverage it. And I think there are a lot of folks today that are now back to where you were maybe five years ago that are like, oh, what if this happens rather than focusing on actually building something useful with it? What clicked for you to like move into Elicit and then we can cover that story too.Andreas [00:09:49]: I think in many ways, the approach is still the same because the way we are building Elicit is not let's train a foundation model to do more stuff. It's like, let's build a scaffolding such that we can deploy powerful models to good ends. I think it's different now in that we actually have like some of the models to plug in. But if in 2017, we had had the models, we could have run the same experiments we did run with humans back then, just with models. And so in many ways, our philosophy is always, let's think ahead to the future of what models are going to exist in one, two years or longer.
And how can we make it so that they can actually be deployed in kind of transparent, controllableJungwon [00:10:26]: ways? I think motivationally, we both are kind of product people at heart. The research was really important and it didn't make sense to build a product at that time. But at the end of the day, the thing that always motivated us is imagining a world where high quality reasoning is really abundant and AI is a technology that's going to get us there. And there's a way to guide that technology with research, but we can have a more direct effect through product because with research, you publish the research and someone else has to implement that into the product and the product felt like a more direct path. And we wanted to concretely have an impact on people's lives. Yeah, I think the kind of personally, the motivation was we want to build for people.Swyx [00:11:03]: Yep. And then just to recap as well, like the models you were using back then were like, I don't know, would they like BERT type stuff or T5 or I don't know what timeframe we're talking about here.Andreas [00:11:14]: I guess to be clear, at the very beginning, we had humans do the work. And then I think the first models that kind of make sense were GPT-2 and T-NLG and like Yeah, early generative models. We do also use like T5 based models even now started with GPT-2.Swyx [00:11:30]: Yeah, cool. I'm just kind of curious about like, how do you start so early? You know, like now it's obvious where to start, but back then it wasn't.Jungwon [00:11:37]: Yeah, I used to nag Andreas a lot. I was like, why are you talking to this? I don't know. I felt like GPT-2 is like clearly can't do anything. And I was like, Andreas, you're wasting your time, like playing with this toy. But yeah, he was right.Alessio [00:11:50]: So what's the history of what Elicit actually does as a product? You recently announced that after four months, you get to a million in revenue.
Obviously, a lot of people use it, get a lot of value, but it was initially kind of like structured data extraction from papers. Then you had kind of like concept grouping. And today, it's maybe like a more full stack research enabler, kind of like paper understander platform. What's the definitive definition of what Elicit is? And how did you get here?Jungwon [00:12:15]: Yeah, we say Elicit is an AI research assistant. I think it will continue to evolve. That's part of why we're so excited about building and research, because there's just so much space. I think the current phase we're in right now, we talk about it as really trying to make Elicit the best place to understand what is known. So it's all a lot about like literature summarization. There's a ton of information that the world already knows. It's really hard to navigate, hard to make it relevant. So a lot of it is around document discovery and processing and analysis. I really kind of want to import some of the incredible productivity improvements we've seen in software engineering and data science and into research. So it's like, how can we make researchers like data scientists of text? That's why we're launching this new set of features called Notebooks. It's very much inspired by computational notebooks, like Jupyter Notebooks, you know, Deepnote or Colab, because they're so powerful and so flexible. And ultimately, when people are trying to get to an answer or understand insight, they're kind of like manipulating evidence and information. Today, that's all packaged in PDFs, which are super brittle. So with language models, we can decompose these PDFs into their underlying claims and evidence and insights, and then let researchers mash them up together, remix them and analyze them together. So yeah, I would say quite simply, overall, Elicit is an AI research assistant.
Right now we're focused on text-based workflows, but long term, really want to kind of go further and further into reasoning and decision making.Alessio [00:13:35]: And when you say AI research assistant, this is kind of meta research. So researchers use Elicit as a research assistant. It's not a generic you-can-research-anything type of tool, or it could be, but like, what are people using it for today?Andreas [00:13:49]: Yeah. So specifically in science, a lot of people use human research assistants to do things. You tell your grad student, hey, here are a couple of papers. Can you look at all of these, see which of these have kind of sufficiently large populations and actually study the disease that I'm interested in, and then write out like, what are the experiments they did? What are the interventions they did? What are the outcomes? And kind of organize that for me. And the first phase of understanding what is known really focuses on automating that workflow because a lot of that work is pretty rote work. I think it's not the kind of thing that we need humans to do. Language models can do it. And then if language models can do it, you can obviously scale it up much more than a grad student or undergrad research assistant would be able to do.Jungwon [00:14:31]: Yeah. The use cases are pretty broad. So we do have a very large percent of our users are just using it personally or for a mix of personal and professional things. People who care a lot about health or biohacking or parents who have children with a kind of rare disease and want to understand the literature directly. So there is an individual kind of consumer use case. We're most focused on the power users. So that's where we're really excited to build. So Elicit was very much inspired by this workflow in literature called systematic reviews or meta-analysis, which is basically the human state of the art for summarizing scientific literature.
And it typically involves like five people working together for over a year. And they kind of first start by trying to find the maximally comprehensive set of papers possible. So it's like 10,000 papers. And they kind of systematically narrow that down to like hundreds or 50 extract key details from every single paper. Usually have two people doing it, like a third person reviewing it. So it's like an incredibly laborious, time consuming process, but you see it in every single domain. So in science, in machine learning, in policy, because it's so structured and designed to be reproducible, it's really amenable to automation. So that's kind of the workflow that we want to automate first. And then you make that accessible for any question and make these really robust living summaries of science. So yeah, that's one of the workflows that we're starting with.Alessio [00:15:49]: Our previous guest, Mike Conover, he's building a new company called Brightwave, which is an AI research assistant for financial research. How do you see the future of these tools? Does everything converge to like a God researcher assistant, or is every domain going to have its own thing?Andreas [00:16:03]: I think that's a good and mostly open question. I do think there are some differences across domains. For example, some research is more quantitative data analysis, and other research is more high level cross domain thinking. And we definitely want to contribute to the broad generalist reasoning type space. Like if researchers are making discoveries often, it's like, hey, this thing in biology is actually analogous to like these equations in economics or something. And that's just fundamentally a thing that where you need to reason across domains. At least within research, I think there will be like one best platform more or less for this type of generalist research. 
I think there may still be like some particular tools like for genomics, like particular types of modules of genes and proteins and whatnot. But for a lot of the kind of high level reasoning that humans do, I think that is more of a winner-take-all thing.Swyx [00:16:52]: I wanted to ask a little bit deeper about, I guess, the workflow that you mentioned. I like that phrase. I see that in your UI now, but that's as it is today. And I think you were about to tell us about how it was in 2021 and how it may be progressed. How has this workflow evolved over time?Jungwon [00:17:07]: Yeah. So the very first version of Elicit actually wasn't even a research assistant. It was a forecasting assistant. So we set out and we were thinking about, you know, what are some of the most impactful types of reasoning that if we could scale up, AI would really transform the world. We actually started with literature review, but we're like, oh, so many people are going to build literature review tools. So let's not start there. So then we focused on geopolitical forecasting. So I don't know if you're familiar with like Manifold or Manifold Markets. That kind of stuff. Before Manifold. Yeah. Yeah. I'm not predicting relationships. We're predicting like, is China going to invade Taiwan?Swyx [00:17:38]: Markets for everything.Andreas [00:17:39]: Yeah. That's a relationship.Swyx [00:17:41]: Yeah.Jungwon [00:17:42]: Yeah. It's true. And then we worked on that for a while. And then after GPT-3 came out, I think by that time we realized that originally we were trying to help people convert their beliefs into probability distributions. And so take fuzzy beliefs, but like model them more concretely. And then after a few months of iterating on that, just realize, oh, the thing that's blocking people from making interesting predictions about important events in the world is less kind of on the probabilistic side and much more on the research side.
And so that kind of combined with the very generalist capabilities of GPT-3 prompted us to make a more general research assistant. Then we spent a few months iterating on what even is a research assistant. So we would embed with different researchers. We built data labeling workflows in the beginning, kind of right off the bat. We built ways to find experts in a field and like ways to ask good research questions. So we just kind of iterated through a lot of workflows and no one else was really building at this time. And it was like very quick to just do some prompt engineering and see like what is a task that is at the intersection of what's technologically capable and like important for researchers. And we had like a very nondescript landing page. It said nothing. But somehow people were signing up and we had to sign a form that was like, why are you here? And everyone was like, I need help with literature review. And we're like, oh, literature review. That sounds so hard. I don't even know what that means. We're like, we don't want to work on it. But then eventually we were like, okay, everyone is saying literature review. It's overwhelmingly people want to-Swyx [00:19:02]: And all domains, not like medicine or physics or just all domains. Yeah.Jungwon [00:19:06]: And we also kind of personally knew literature review was hard. And if you look at the graphs for academic literature being published every single month, you guys know this in machine learning, it's like up into the right, like superhuman amounts of papers. So we're like, all right, let's just try it. I was really nervous, but Andreas was like, this is kind of like the right problem space to jump into, even if we don't know what we're doing. So my take was like, fine, this feels really scary, but let's just launch a feature every single week and double our user numbers every month. And if we can do that, we'll fail fast and we will find something. 
I was worried about like getting lost in the kind of academic white space. So the very first version was actually a weekend prototype that Andreas made. Do you want to explain how that worked?Andreas [00:19:44]: I mostly remember that it was really bad. The thing I remember is you entered a question and it would give you back a list of claims. So your question could be, I don't know, how does creatine affect cognition? It would give you back some claims that are to some extent based on papers, but they were often irrelevant. The papers were often irrelevant. And so we ended up soon just printing out a bunch of examples of results and putting them up on the wall so that we would kind of feel the constant shame of having such a bad product and would be incentivized to make it better. And I think over time it has gotten a lot better, but I think the initial version was like really very bad. Yeah.Jungwon [00:20:20]: But it was basically like a natural language summary of an abstract, like kind of a one sentence summary, and which we still have. And then as we learned kind of more about this systematic review workflow, we started expanding the capability so that you could extract a lot more data from the papers and do more with that.Swyx [00:20:33]: And were you using like embeddings and cosine similarity, that kind of stuff for retrieval, or was it keyword based?Andreas [00:20:40]: I think the very first version didn't even have its own search engine. I think the very first version probably used the Semantic Scholar API or something similar. And only later when we discovered that API is not very semantic, we then built our own search engine that has helped a lot.Swyx [00:20:58]: And then we're going to go into like more recent products stuff, but like, you know, I think you seem the more sort of startup oriented business person and you seem sort of more ideologically like interested in research, obviously, because of your PhD.
What kind of market sizing were you guys thinking? Right? Like, because you're here saying like, we have to double every month. And I'm like, I don't know how you make that conclusion from this, right? Especially also as a nonprofit at the time.Jungwon [00:21:22]: I mean, market size wise, I felt like in this space where so much was changing and it was very unclear what of today was actually going to be true tomorrow. We just like really rested a lot on very, very simple fundamental principles, which is like, if you can understand the truth, that is very economically beneficial and valuable. If you like know the truth.Swyx [00:21:42]: On principle.Jungwon [00:21:43]: Yeah. That's enough for you. Yeah. Research is the key to many breakthroughs that are very commercially valuable.Swyx [00:21:47]: Because my version of it is students are poor and they don't pay for anything. Right? But that's obviously not true. As you guys have found out. But you had to have some market insight for me to have believed that, but you skipped that.Andreas [00:21:58]: Yeah. I remember talking to VCs for our seed round. A lot of VCs were like, you know, researchers, they don't have any money. Why don't you build legal assistant? I think in some short sighted way, maybe that's true. But I think in the long run, R&D is such a big space of the economy. I think if you can substantially improve how quickly people find new discoveries or avoid controlled trials that don't go anywhere, I think that's just huge amounts of money. And there are a lot of questions obviously about between here and there. But I think as long as the fundamental principle is there, we were okay with that. And I guess we found some investors who also were. Yeah.Swyx [00:22:35]: Congrats. I mean, I'm sure we can cover the sort of flip later. I think you're about to start us on like GPT-3 and how that changed things for you. It's funny. I guess every major GPT version, you have some big insight. 
Yeah.Jungwon [00:22:48]: Yeah. I mean, what do you think?Andreas [00:22:51]: I think it's a little bit less true for us than for others, because we always believed that there will basically be human level machine work. And so it is definitely true that in practice for your product, as new models come out, your product starts working better, you can add some features that you couldn't add before. But I don't think we really ever had the moment where we were like, oh, wow, that is super unanticipated. We need to do something entirely different now from what was on the roadmap.Jungwon [00:23:21]: I think GPT-3 was a big change because it kind of said, oh, now is the time that we can use AI to build these tools. And then GPT-4 was maybe a little bit more of an extension of GPT-3. GPT-3 over GPT-2 was like qualitative level shift. And then GPT-4 was like, okay, great. Now it's like more accurate. We're more accurate on these things. We can answer harder questions. But the shape of the product had already taken place by that time.Swyx [00:23:44]: I kind of want to ask you about this sort of pivot that you've made. But I guess that was just a way to sell what you were doing, which is you're adding extra features on grouping by concepts. The GPT-4 pivot, quote unquote pivot that you-Jungwon [00:23:55]: Oh, yeah, yeah, exactly. Right, right, right. Yeah. Yeah. When we launched this workflow, now that GPT-4 was available, basically Elicit was at a place where we have very tabular interfaces. So given a table of papers, you can extract data across all the tables. But you kind of want to take the analysis a step further. Sometimes what you'd care about is not having a list of papers, but a list of arguments, a list of effects, a list of interventions, a list of techniques.
And so that's one of the things we're working on is now that you've extracted this information in a more structured way, can you pivot it or group by whatever the information that you extracted to have more insight first information still supported by the academic literature?Swyx [00:24:33]: Yeah, that was a big revelation when I saw it. Basically, I think I'm very just impressed by how first principles, your ideas around what the workflow is. And I think that's why you're not as reliant on like the LLM improving, because it's actually just about improving the workflow that you would recommend to people. Today we might call it an agent, I don't know, but you're not relying on the LLM to drive it. It's relying on this is the way that Elicit does research. And this is what we think is most effective based on talking to our users.Jungwon [00:25:01]: The problem space is still huge. Like if it's like this big, we are all still operating at this tiny part, bit of it. So I think about this a lot in the context of moats, people are like, oh, what's your moat? What happens if GPT-5 comes out? It's like, if GPT-5 comes out, there's still like all of this other space that we can go into. So I think being really obsessed with the problem, which is very, very big, has helped us like stay robust and just kind of directly incorporate model improvements and they keep going.Swyx [00:25:26]: And then I first encountered you guys with Charlie, you can tell us about that project. Basically, yeah. Like how much did cost become a concern as you're working more and more with OpenAI? How do you manage that relationship?Jungwon [00:25:37]: Let me talk about who Charlie is. And then you can talk about the tech, because Charlie is a special character. So Charlie, when we found him was, had just finished his freshman year at the University of Warwick. And I think he had heard about us on some discord. And then he applied and we were like, wow, who is this freshman? 
And then we just saw that he had done so many incredible side projects. And we were actually on a team retreat in Barcelona visiting our head of engineering at that time. And everyone was talking about this wunderkind or like this kid. And then on our take-home project, he had done like the best of anyone to that point. And so people were just like so excited to hire him. So we hired him as an intern and they were like, Charlie, what if you just dropped out of school? And so then we convinced him to take a year off. And he was just incredibly productive. And I think the thing you're referring to is at the start of 2023, Anthropic kind of launched their constitutional AI paper. And within a few days, I think four days, he had basically implemented that in production. And then we had it in app a week or so after that. And he has since kind of contributed to major improvements, like cutting costs down to a tenth of what they were at really large scale. But yeah, you can talk about the technical stuff. Yeah.Andreas [00:26:39]: On the constitutional AI project, this was for abstract summarization, where in Elicit, if you run a query, it'll return papers to you, and then it will summarize each paper with respect to your query for you on the fly. And that's a really important part of Elicit because Elicit does it so much. If you run a few searches, it'll have done it a few hundred times for you. And so we cared a lot about this both being fast, cheap, and also very low on hallucination. I think if Elicit hallucinates something about the abstract, that's really not good. And so what Charlie did in that project was create a constitution that expressed what are the attributes of a good summary? Everything in the summary is reflected in the actual abstract, and it's like very concise, et cetera, et cetera. And then used RLHF with a model that was trained on the constitution to basically fine tune a better summarizer on an open source model. Yeah.
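The constitution-as-critic idea Andreas describes can be sketched in a few lines. This is a toy stand-in, not Elicit's pipeline: a real system would use a model as the critic (and then fine-tune on its judgments), whereas here a crude token-overlap heuristic plays the critic, and the thresholds and example texts are invented.

```python
# Two "constitutional" principles for a summary, written as prose.
CONSTITUTION = [
    "Everything in the summary is reflected in the abstract.",
    "The summary is concise.",
]

def supported(sentence: str, abstract: str, threshold: float = 0.6) -> bool:
    """Crude groundedness check: fraction of long content words found in the abstract."""
    words = [w.lower().strip(".,") for w in sentence.split() if len(w) > 3]
    if not words:
        return True
    hits = sum(w in abstract.lower() for w in words)
    return hits / len(words) >= threshold

def critique_summary(summary: str, abstract: str, max_words: int = 40) -> list[str]:
    """Return a list of constitution violations (empty list = summary passes)."""
    violations = []
    for sent in (s.strip() for s in summary.split(".")):
        if sent and not supported(sent, abstract):
            violations.append(f"Unsupported claim: {sent!r}")
    if len(summary.split()) > max_words:
        violations.append("Summary is not concise.")
    return violations

abstract = "We trained a language model to summarize abstracts faithfully and concisely."
print(critique_summary("The model summarizes abstracts faithfully.", abstract))  # []
print(critique_summary("The model cures diseases.", abstract))
```

In the actual constitutional-AI recipe the critic's feedback drives revision and fine-tuning; the point here is only the shape of the loop: generate, critique against written principles, keep or revise.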
I think that might still be in use.Jungwon [00:27:34]: Yeah. Yeah, definitely. Yeah. I think at the time, the models hadn't been trained at all to be faithful to a text. So they were just generating. So then when you ask them a question, they tried too hard to answer the question and didn't try hard enough to answer the question given the text or answer what the text said about the question. So we had to basically teach the models to do that specific task.Swyx [00:27:54]: How do you monitor the ongoing performance of your models? Not to get too LLM-opsy, but you are one of the larger, more well-known operations doing NLP at scale. I guess effectively, you have to monitor these things and nobody has a good answer that I talk to.Andreas [00:28:10]: I don't think we have a good answer yet. I think the answers are actually a little bit clearer on the just kind of basic robustness side of where you can import ideas from normal software engineering and normal kind of DevOps. You're like, well, you need to monitor kind of latencies and response times and uptime and whatnot.Swyx [00:28:27]: I think when we say performance, it's more about hallucination rate, isn't it?Andreas [00:28:30]: And then things like hallucination rate where I think there, the really important thing is training time. So we care a lot about having our own internal benchmarks for model development that reflect the distribution of user queries so that we can know ahead of time how well is the model going to perform on different types of tasks. So the tasks being summarization, question answering, given a paper, ranking. And for each of those, we want to know what's the distribution of things the model is going to see so that we can have well-calibrated predictions on how well the model is going to do in production. And I think, yeah, there's some chance that there's distribution shift and actually the things users enter are going to be different. 
But I think that's much less important than getting the kind of training right and having very high quality, well-vetted data sets at training time.Jungwon [00:29:18]: I think we also end up effectively monitoring by trying to evaluate new models as they come out. And so that kind of prompts us to go through our eval suite every couple of months. And every time a new model comes out, we have to see how is this performing relative to production and what we currently have.Swyx [00:29:32]: Yeah. I mean, since we're on this topic, any new models that have really caught your eye this year?Jungwon [00:29:37]: Like Claude came out with a bunch. Yeah. I think Claude is pretty, I think the team's pretty excited about Claude. Yeah.Andreas [00:29:41]: Specifically, Claude Haiku is like a good point on the kind of Pareto frontier. It's neither the cheapest model, nor is it the most accurate, most high quality model, but it's just like a really good trade-off between cost and accuracy.Swyx [00:29:57]: You apparently have to 10-shot it to make it good. I tried using Haiku for summarization, but zero-shot was not great. Then they were like, you know, it's a skill issue, you have to try harder.Jungwon [00:30:07]: I think GPT-4 unlocked tables for us, processing data from tables, which was huge. GPT-4 Vision.Andreas [00:30:13]: Yeah.Swyx [00:30:14]: Yeah. Did you try like Fuyu? I guess you can't try Fuyu because it's non-commercial. That's the Adept model.Jungwon [00:30:19]: Yeah.Swyx [00:30:20]: We haven't tried that one. Yeah. Yeah. Yeah. But Claude is multimodal as well. Yeah. I think the interesting insight that we got from talking to David Luan, who is CEO of Adept, is that multimodality has effectively two different flavors. One is recognizing images from a camera in the outside natural world. And actually the more important multimodality for knowledge work is screenshots and PDFs and charts and graphs.
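The eval-suite routine Jungwon describes a bit earlier (re-running benchmarks whenever a new model ships, weighted by the tasks users actually run) might look roughly like the sketch below. The task mix, model scores, and promotion rule are all invented for illustration; the real suite would run summarization, QA, and ranking benchmarks and feed in measured accuracies.

```python
# Hypothetical task mix, estimated from the distribution of user queries.
TASK_WEIGHTS = {"summarization": 0.5, "qa": 0.3, "ranking": 0.2}

def weighted_score(scores_by_task: dict[str, float]) -> float:
    """Combine per-task accuracies, weighted by how often users hit each task."""
    return sum(TASK_WEIGHTS[t] * s for t, s in scores_by_task.items())

def should_promote(candidate: dict[str, float], production: dict[str, float]) -> bool:
    """Promote a new model only if it beats production on the weighted benchmark."""
    return weighted_score(candidate) > weighted_score(production)

# Invented benchmark results for a production model and a new candidate.
production = {"summarization": 0.82, "qa": 0.78, "ranking": 0.70}
candidate = {"summarization": 0.85, "qa": 0.80, "ranking": 0.65}
print(should_promote(candidate, production))  # True
```

Weighting by the real query distribution is the point: a candidate that wins on the common tasks can be promoted even if it loses slightly on a rare one.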
So we need a new term for that kind of multimodality.Andreas [00:30:45]: But is the claim that current models are good at one or the other? Yeah.Swyx [00:30:50]: They're over-indexed because the history of computer vision is COCO, right? So now we're like, oh, actually, you know, screens are more important, OCR, handwriting. You mentioned a lot of like closed model lab stuff, and then you also have like this open source model fine tuning stuff. Like what is your workload now between closed and open? It's a good question.Andreas [00:31:07]: I think- Is it half and half? It's a-Swyx [00:31:10]: Is that even a relevant question or not? Is this a nonsensical question?Andreas [00:31:13]: It depends a little bit on like how you index, whether you index by like compute cost or number of queries. I'd say like in terms of number of queries, it's maybe similar. In terms of like cost and compute, I think the closed models make up more of the budget since the main cases where you want to use closed models are cases where they're just smarter, where no existing open source models are quite smart enough.Jungwon [00:31:35]: Yeah. Yeah.Alessio [00:31:37]: We have a lot of interesting technical questions to go in, but just to wrap the kind of like UX evolution, now you have the notebooks. We talked a lot about how chatbots are not the final frontier, you know? How did you decide to get into notebooks, which is a very iterative kind of like interactive interface and yeah, maybe learnings from that.Jungwon [00:31:56]: Yeah. This is actually our fourth time trying to make this work. Okay. I think the first time was probably in early 2021. I think because we've always been obsessed with this idea of task decomposition and like branching, we always wanted a tool that could be kind of unbounded where you could keep going, could do a lot of branching where you could kind of apply language model operations or computations on other tasks.
So in 2021, we had this thing called composite tasks where you could use GPT-3 to brainstorm a bunch of research questions and then take each research question and decompose those further into sub questions. This kind of, again, that like task decomposition tree type thing was always very exciting to us, but that was like, it didn't work and it was kind of overwhelming. Then at the end of 2022, I think we tried again and at that point we were thinking, okay, we've done a lot with this literature review thing. We also want to start helping with kind of adjacent domains and different workflows. Like we want to help more with machine learning. What does that look like? And as we were thinking about it, we're like, well, there are so many research workflows. How do we not just build three new workflows into Elicit, but make Elicit really generic to lots of workflows? What is like a generic composable system with nice abstractions that can like scale to all these workflows? So we like iterated on that a bunch and then didn't quite narrow the problem space enough or like quite get to what we wanted. And then I think it was at the beginning of 2023 where we're like, wow, computational notebooks kind of enable this, where they have a lot of flexibility, but kind of robust primitives such that you can extend the workflow and it's not limited. It's not like you ask a query, you get an answer, you're done. You can just constantly keep building on top of that. And each little step seems like a really good unit of work for the language model. And also it was just like really helpful to have a bit more preexisting work to emulate. Yeah, that's kind of how we ended up at computational notebooks for Elicit.Andreas [00:33:44]: Maybe one thing that's worth making explicit is the difference between computational notebooks and chat, because on the surface, they seem pretty similar. It's kind of this iterative interaction where you add stuff.
In both cases, you have a back and forth between you enter stuff and then you get some output and then you enter stuff. But the important difference in our minds is with notebooks, you can define a process. So in data science, you can be like, here's like my data analysis process that takes in a CSV and then does some extraction and then generates a figure at the end. And you can prototype it using a small CSV and then you can run it over a much larger CSV later. And similarly, the vision for notebooks in our case is to not make it this like one-off chat interaction, but to allow you to then say, if you start and first you're like, okay, let me just analyze a few papers and see, do I get to the correct conclusions for those few papers? Can I then later go back and say, now let me run this over 10,000 papers now that I've debugged the process using a few papers. And that's an interaction that doesn't fit quite as well into the chat framework because that's more for kind of quick back and forth interaction.Alessio [00:34:49]: Do you think in notebooks, it's kind of like structured, editable chain of thought, basically step by step? Like, is that kind of where you see this going? And then are people going to reuse notebooks as like templates? And maybe in traditional notebooks, it's like cookbooks, right? You share a cookbook, you can start from there. Is this similar in Elicit?Andreas [00:35:06]: Yeah, that's exactly right. So that's our hope that people will build templates, share them with other people. I think chain of thought is maybe still like kind of one level lower on the abstraction hierarchy than we would think of notebooks. I think we'll probably want to think about more semantic pieces like a building block is more like a paper search or an extraction or a list of concepts. And then the model's detailed reasoning will probably often be one level down.
You always want to be able to see it, but you don't always want it to be front and center.Alessio [00:35:36]: Yeah, what's the difference between a notebook and an agent? Since everybody always asks me, what's an agent? Like how do you think about where the line is?Andreas [00:35:44]: Yeah, it's an interesting question. In the notebook world, I would generally think of the human as the agent in the first iteration. So you have the notebook and the human kind of adds little action steps. And then the next point on this kind of progress gradient is, okay, now you can use language models to predict which action would you take as a human. And at some point, you're probably going to be very good at this, you'll be like, okay, in some cases I can, with 99.9% accuracy, predict what you do. And then you might as well just execute it, like why wait for the human? And eventually, as you get better at this, that will just look more and more like agents taking actions as opposed to you doing the thing. I think templates are a specific case of this where you're like, okay, well, there's just particular sequences of actions that you often want to chunk and have available as primitives, just like in normal programming. And those, you can view them as action sequences of agents, or you can view them as more normal programming language abstraction thing. And I think those are two valid views. Yeah.Alessio [00:36:40]: How do you see this change as, like you said, the models get better and you need less and less human actual interfacing with the model, you just get the results? Like how does the UX and the way people perceive it change?Jungwon [00:36:52]: Yeah, I think this kind of interaction paradigms for evaluation is not really something the internet has encountered yet, because up to now, the internet has all been about getting data and work from people. 
So increasingly, I really want kind of evaluation, both from an interface perspective and from like a technical perspective and operation perspective to be a superpower for Elicit, because I think over time, models will do more and more of the work, and people will have to do more and more of the evaluation. So I think, yeah, in terms of the interface, some of the things we have today, you know, for every kind of language model generation, there's some citation back, and we kind of try to highlight the ground truth in the paper that is most relevant to whatever Elicit said, and make it super easy so that you can click on it and quickly see in context and validate whether the text actually supports the answer that Elicit gave. So I think we'd probably want to scale things up like that, like the ability to kind of spot check the model's work super quickly, scale up interfaces like that. And-Swyx [00:37:44]: Who would spot check? The user?Jungwon [00:37:46]: Yeah, to start, it would be the user. One of the other things we do is also kind of flag the model's uncertainty. So we have models report out, how confident are you that this was the sample size of this study? The model's not sure, we throw a flag. And so the user knows to prioritize checking that. So again, we can kind of scale that up. So when the model's like, well, I searched this on Google, I'm not sure if that was the right thing. I have an uncertainty flag, and the user can go and be like, oh, okay, that was actually the right thing to do or not.Swyx [00:38:10]: I've tried to do uncertainty readings from models. I don't know if you have this live. You do? Yeah. Because I just didn't find them reliable because they just hallucinated their own uncertainty. I would love to base it on log probs or something more native within the model rather than generated. But okay, it sounds like they scale properly for you. Yeah.Jungwon [00:38:30]: We found it to be pretty calibrated. 
It varies on the model.Andreas [00:38:32]: I think in some cases, we also use two different models for the uncertainty estimates than for the question answering. So one model would say, here's my chain of thought, here's my answer. And then a different type of model. Let's say the first model is Llama, and let's say the second model is GPT-3.5. And then the second model just looks over the results and is like, okay, how confident are you in this? And I think sometimes using a different model can be better than using the same model. Yeah.Swyx [00:38:58]: On the topic of models, evaluating models, obviously you can do that all day long. What's your budget? Because your queries fan out a lot. And then you have models evaluating models. One person typing in a question can lead to a thousand calls.Andreas [00:39:11]: It depends on the project. So if the project is basically a systematic review that otherwise human research assistants would do, then the project is basically a human equivalent spend. And the spend can get quite large for those projects. I don't know, let's say $100,000. In those cases, you're happier to spend compute than in the kind of shallow search case where someone just enters a question because, I don't know, maybe I heard about creatine. What's it about? Probably don't want to spend a lot of compute on that. This sort of being able to invest more or less compute into getting more or less accurate answers is I think one of the core things we care about. And that I think is currently undervalued in the AI space. I think currently you can choose which model you want and you can sometimes, I don't know, you'll tip it and it'll try harder or you can try various things to get it to work harder. But you don't have great ways of converting willingness to spend into better answers.
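The two-model pattern described above (one model answers, a separate model scores confidence, and low-confidence answers get flagged for the user to spot-check) can be sketched with stubs. Both "models" here are toy functions and the 0.5 threshold is invented; only the shape of the pipeline is the point.

```python
def answer_model(question: str, text: str) -> str:
    # Stand-in for the answering model (e.g. a Llama-class model).
    return "120 participants" if "120" in text else "unknown"

def confidence_model(question: str, text: str, answer: str) -> float:
    # Stand-in for a second, different model judging the answer against the text.
    return 0.9 if answer != "unknown" and answer.split()[0] in text else 0.3

def answer_with_flag(question: str, text: str, threshold: float = 0.5) -> dict:
    """Answer, score confidence with a separate model, and flag low-confidence answers."""
    ans = answer_model(question, text)
    conf = confidence_model(question, text, ans)
    return {"answer": ans, "confidence": conf, "needs_review": conf < threshold}

doc = "We recruited 120 participants for the study."
print(answer_with_flag("What was the sample size?", doc)["needs_review"])  # False
print(answer_with_flag("What was the sample size?", "No numbers here.")["needs_review"])  # True
```

The `needs_review` flag is what surfaces in the interface: it tells the user which cells of the table deserve a manual check, which is how evaluation effort gets prioritized.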
And we really want to build a product that has this sort of unbounded flavor where if you care about it a lot, you should be able to get really high quality answers, really double checked in every way.Alessio [00:40:14]: And you have a credits-based pricing. So unlike most products, it's not a fixed monthly fee.Jungwon [00:40:19]: Right, exactly. So some of the higher costs are tiered. So for most casual users, they'll just get the abstract summary, which is kind of an open source model. Then you can add more columns, which have more extractions and these uncertainty features. And then you can also add the same columns in high accuracy mode, which also parses the table. So we kind of stack the complexity on the calls.Swyx [00:40:39]: You know, the fun thing you can do with a credit system, which is data for data, basically you can give people more credits if they give data back to you. I don't know if you've already done that. We've thought about something like this.Jungwon [00:40:49]: It's like if you don't have money, but you have time, how do you exchange that?Swyx [00:40:54]: It's a fair trade.Jungwon [00:40:55]: I think it's interesting. We haven't quite operationalized it. And then, you know, there's been some kind of like adverse selection. Like, you know, for example, it would be really valuable to get feedback on our model. So maybe if you were willing to give more robust feedback on our results, we could give you credits or something like that. But then there's kind of this, will people take it seriously? And you want the good people. Exactly.Swyx [00:41:11]: Can you tell who are the good people? Not right now.Jungwon [00:41:13]: But yeah, maybe at the point where we can, we can offer it. We can offer it up to them.Swyx [00:41:16]: The perplexity of questions asked, you know, if it's higher perplexity, these are the smarterJungwon [00:41:20]: people. 
Yeah, maybe.Andreas [00:41:23]: If you put typos in your queries, you're not going to get off the stage.Swyx [00:41:28]: Negative social credit. It's very topical right now to think about the threat of long context windows. All these models that we're talking about these days, all like a million token plus. Is that relevant for you? Can you make use of that? Is that just prohibitively expensive because you're just paying for all those tokens or you're just doing RAG?Andreas [00:41:44]: It's definitely relevant. And when we think about search, as many people do, we think about kind of a staged pipeline of retrieval where first you use a semantic search database with embeddings, get like the, in our case, maybe 400 or so most relevant papers. And then you still need to rank those. And I think at that point it becomes pretty interesting to use larger models. So specifically in the past, I think a lot of ranking was kind of per item ranking where you would score each individual item, maybe using increasingly expensive scoring methods and then rank based on the scores. But I think list-wise re-ranking where you have a model that can see all the elements is a lot more powerful because often you can only really tell how good a thing is in comparison to other things and what things should come first. It really depends on like, well, what other things are available, maybe you even care about diversity in your results. You don't want to show 10 very similar papers as the first 10 results. So I think long context models are quite interesting there. And especially for our case where we care more about power users who are perhaps a little bit more willing to wait a little bit longer to get higher quality results relative to people who just quickly check out things because why not? And I think being able to spend more on longer contexts is quite valuable.Jungwon [00:42:55]: Yeah.
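The pointwise-versus-listwise distinction Andreas draws can be made concrete with a toy ranker. In a real system a long-context model would see all the candidates at once and order them; here a greedy loop with an invented topic-diversity penalty stands in for that, and the papers, scores, and penalty value are all made up.

```python
def pointwise_rank(papers, relevance):
    # Each paper is scored in isolation; near-duplicates all rank high.
    return sorted(papers, key=lambda p: -relevance[p["id"]])

def listwise_rank(papers, relevance, penalty=0.5):
    # Greedy stand-in for a listwise ranker: relevance minus a
    # penalty for topics the user has already been shown.
    chosen, seen_topics = [], set()
    pool = list(papers)
    while pool:
        best = max(pool, key=lambda p: relevance[p["id"]]
                   - (penalty if p["topic"] in seen_topics else 0.0))
        chosen.append(best)
        seen_topics.add(best["topic"])
        pool.remove(best)
    return chosen

papers = [{"id": 1, "topic": "creatine"}, {"id": 2, "topic": "creatine"},
          {"id": 3, "topic": "caffeine"}]
relevance = {1: 0.9, 2: 0.8, 3: 0.6}
print([p["id"] for p in pointwise_rank(papers, relevance)])  # [1, 2, 3]
print([p["id"] for p in listwise_rank(papers, relevance)])   # [1, 3, 2]
```

Pointwise scoring returns the two near-duplicate creatine papers back to back; the listwise pass promotes the caffeine paper because it can see what else is already in the list, which is exactly the comparison-dependent judgment a per-item scorer cannot make.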
I think one thing the longer context models changed for us is maybe a focus from breaking down tasks to breaking down the evaluation. So before, you know, if we wanted to answer a question from the full text of a paper, we had to figure out how to chunk it and like find the relevant chunk and then answer based on that chunk. And the nice thing was then, you know, kind of which chunk the model used to answer the question. So if you want to help the user track it, yeah, you can be like, well, this was the chunk that the model got. And now if you put the whole text of the paper in, you have to like kind of find the chunk like more retroactively basically. And so you need kind of like a different set of abilities and obviously like a different technology to figure out. You still want to point the user to the supporting quotes in the text, but then the interaction is a little different.Swyx [00:43:38]: You like scan through and find some ROUGE score floor.Andreas [00:43:41]: I think there's an interesting space of almost research problems here because you would ideally make causal claims like if this hadn't been in the text, the model wouldn't have said this thing. And maybe you can do expensive approximations to that where like, I don't know, you just throw out a chunk of the paper and re-answer and see what happens. But hopefully there are better ways of doing that where you just get that kind of counterfactual information for free from the model.Alessio [00:44:06]: Do you think at all about the cost of maintaining RAG versus just putting more tokens in the window? I think in software development, a lot of times people buy developer productivity things so that we don't have to worry about it. Context window is kind of the same, right? You have to maintain chunking and like RAG retrieval and like re-ranking and all of this versus I just shove everything into the context and like it costs a little more, but at least I don't have to do all of that.
Is that something you thought about?Jungwon [00:44:31]: I think we still like hit up against context limits enough that it's not really, do we still want to keep this RAG around? It's like we do still need it for the scale of the work that we're doing, yeah.Andreas [00:44:41]: And I think there are different kinds of maintainability. In one sense, I think you're right that the throw everything into the context window thing is easier to maintain because you just can swap out a model. In another sense, if things go wrong, it's harder to debug where like, if you know, here's the process that we go through to go from 200 million papers to an answer. And there are like little steps and you understand, okay, this is the step that finds the relevant paragraph or whatever it may be. You'll know which step breaks if the answers are bad, whereas if it's just like a new model version came out and now it suddenly doesn't find your needle in a haystack anymore, then you're like, okay, what can you do? You're kind of at a loss.Alessio [00:45:21]: Let's talk a bit about, yeah, needle in a haystack and like maybe the opposite of it, which is like hard grounding. I don't know if that's like the best name to think about it, but I was using one of these chat-with-your-documents features and I put in the AMD MI300 specs and the new Blackwell chips from NVIDIA and I was asking questions and does the AMD chip support NVLink? And the response was like, oh, it doesn't say in the specs. But if you ask GPT-4 without the docs, it would tell you no, because NVLink is an NVIDIA technology.Swyx [00:45:49]: It just says in the thing.Alessio [00:45:53]: How do you think about that? Does using the context sometimes suppress the knowledge that the model has?Andreas [00:45:57]: It really depends on the task because I think sometimes that is exactly what you want. So imagine you're a researcher, you're writing the background section of your paper and you're trying to describe what these other papers say.
You really don't want extra information to be introduced there. In other cases where you're just trying to figure out the truth and you're giving the documents because you think they will help the model figure out what the truth is. I think you do want, if the model has a hunch that there might be something that's not in the papers, you do want to surface that. I think ideally you still don't want the model to just tell you, probably the ideal thing looks a bit more like agent control where the model can issue a query that then is intended to surface documents that substantiate its hunch. That's maybe a reasonable middle ground between model just telling you and model being fully limited to the papers you give it.Jungwon [00:46:44]: Yeah, I would say it's, they're just kind of different tasks right now. And the task that Elicit is mostly focused on is what do these papers say? But there's another task which is like, just give me the best possible answer and that give me the best possible answer sometimes depends on what do these papers say, but it can also depend on other stuff that's not in the papers. So ideally we can do both and then kind of do this overall task for you more going forward.Alessio [00:47:08]: We see a lot of details, but just to zoom back out a little bit, what are maybe the most underrated features of Elicit and what is one thing that maybe the users surprise you the most by using it?Jungwon [00:47:19]: I think the most powerful feature of Elicit is the ability to extract, add columns to this table, which effectively extracts data from all of your papers at once. It's well used, but there are kind of many different extensions of that that I think users are still discovering. So one is we let you give a description of the column. We let you give instructions of a column. We let you create custom columns. So we have like 30 plus predefined fields that users can extract, like what were the methods? What were the main findings? 
How many people were studied? And we actually show you basically the prompts that we're using to
Embark on a journey with Oren Kandel as he navigates the thrilling ups and downs of transforming the world of video content creation through AI. From early setbacks to achieving a groundbreaking $1M ARR in record time, discover the resilience and innovation driving the future of digital storytelling. #20MinuteLeaders
In this episode of the SaaS Revolution Show our host Alex Theuma is joined by Martha Bitar, CEO at Flodesk, who shares the top 5 things that helped her bootstrap to $25M ARR. "Someone somewhere told us not to build code for our solution until we had a customer interview that made someone cry because - cry in a good way! - because they were so excited that their problem was finally getting a solution... And we did. At first when I heard it, I was like, of course no one is going to cry because this is not the type of problem that makes people cry. But we eventually did get someone who cried in a demo, and that's when we knew we were ready to build the code." Martha shares:
The SaaS Podcast - SaaS, Startups, Growth Hacking & Entrepreneurship
Brett Martin is the co-founder and president of Kumospace, a virtual office platform that helps remote teams to collaborate in real time. Show Notes: https://saasclub.io/384 Join Email List: Get weekly SaaS learnings, new podcast episodes, and actionable insights right in your inbox: https://saasclub.io/email/ Join Community: SaaS Club is the community for early-stage SaaS founders and entrepreneurs: https://saasclub.co/join Hostinger - AI Website Builder: https://saasclub.io/hostinger (Use SAASPODAST for 10% off) Attio - A Powerful, Flexible and Data-Driven CRM: https://saasclub.io/attio
We are announcing the second edition of our Latent Space demo day event in SF on 2/23: Final Frontiers, a startup and research competition in “The Autonomous Workforce”, ”Beyond Transformers & GPUs”, and “Embodied AI”. RSVP here! The first one was aimed at 15-20 people and ended up blowing up to >200 and covered in The Information - let's see what a year of growth (and competition) does to the local events space in 2024. You can find all Latent Space events here, and of course get in touch with us to host your own AI Engineer meetups like AI Engineering Singapore. In our December 2023 recap we covered the Four Wars of the AI stack. But how do we know when it's time to crown a winner? As we kick off 2024, we wanted to do a recap of the State of AI in 2023 to set a baseline of adoption for different products. Retool had a great report at the end of last year which covered a lot of it. David Hsu, CEO and co-founder of Retool, joined us to go over it together. We also talked about the history of Retool, why they were too embarrassed to present at YC demo day, and how they got to $1M ARR with 3 employees. If you're a founder, there are a lot of nuggets of advice in here!

Retool AI

In our modeling of the “Software 3.0 Stack”, we have generally left a pretty wide open gap as to the “user interface” equivalent of the AI stack:Retool AI launched 4 months ago with some nifty features for SQL generation, and its own hosted vector storage service (using pgvector). However, as he explains on the pod, the more interesting potential of Retool is in helping developers build AI infused applications quickly, in combination with its Workflows feature. This moves Retool down the stack from just the UI for internal tooling to the business logic “piping” as well. There are a bunch of dedicated tools in this space like Respell, BuildShip, Flowise, and Ironclad Rivet."We think that practically every internal app is going to be AI infused over the next three years."
- David on the pod

RIP StackOverflow?

In July 2023 we talked about the impact of ChatGPT and Copilot: This was then disputed by StackOverflow, who pointed out (very fairly so) that there were privacy-related changes in their analytics instrumentation in 2022. StackOverflow no longer reports traffic, but based on StackOverflow's continuing transparency we can see that organic declines have continued throughout 2023. Retool's report comes over a year after those changes and has some self-reported samples from users:
* 57.6% of people said they have used StackOverflow less; almost all of them replaced it with ChatGPT and Copilot.
* 10.2% said they no longer use StackOverflow.
We also saw a lot more tools being released in the dev tools space such as (one of our oldest pod friends) Codeium (which just raised a $65M Series B), Sourcegraph (and their newly released Cody), Codium AI (just released AlphaCodium which was picked up by Karpathy), Phind (which beat GPT-4 with OSS models), and Cursor, one of the most beloved products in the dev community at the moment. Intelligence is getting closer and closer to the IDE, and the trend doesn't seem to be reverting. We already said that “You are not too old (to pivot into AI)“, and the advice still stands. When asked to rate “Preference for hiring engineers effective at using ChatGPT/Copilot for coding” on a scale of 1 to 10, where 10 is “Much more likely”, ~40% of companies voted 8-10. Having an AI Engineer skillset is extremely important. 45% of companies between 1,000-4,999 employees said that they increased the difficulty of technical interviews to compensate for these new tools, so the gap between users and non-users will keep widening.

Crossing the AI in Production Chasm

Geoffrey Moore's “Crossing the Chasm” is one of the most quoted business frameworks.
Every market has an initial group of Innovators and Early Adopters, who are willing to suffer through the rough edges of early products, before the market eventually crosses into the Early Majority, which expects a full product. In the AI world, ChatGPT and Midjourney / DALL-E have crossed the chasm in the consumer space. Copilot is probably the only tool that has done it in the enterprise, having crossed 1M paid users. ~$50B were invested in AI in 2023, and we still only have
The one where Ted Lau at Ballistic Arts chats with Lucas Béland, CEO & Founder of Goloot, about:

* The evolution of Goloot
* How a 22-year-old university dropout raised $6M and achieved $1M ARR in the AdTech/MarTech space
* Finding an opportunity in the online advertising space to achieve fast growth

Sign up for the Marketing News Canada e-newsletter at www.marketingnewscanada.com.

Special Offer for Marketing News Canada Listeners: Unbounce can help you easily build landing pages, popups and sticky bars. Highly customizable and no coding required, anyone can get started on converting more traffic into leads, signups and sales. Try it yourself with a free 14-day trial and get 20% off your first three months now! https://unbounce.grsm.io/5cmrgz1mt3r6

Thanks to our sponsor Jelly Academy. Jelly Academy has been helping professionals, students and teams across Canada acquire the skills, knowledge and micro certifications they need to jump into a new digital marketing role, get that promotion, and amplify their current marketing roles. Learn more about Jelly Academy's 6-week online bootcamp here: https://jellyacademy.ca/digital-marketing-6-week-program

Follow Marketing News Canada: Twitter - twitter.com/MarketingNewsC2 | Facebook - facebook.com/MarketingNewsCanada | LinkedIn - linkedin.com/company/marketing-news-canada | YouTube - youtube.com/channel/UCM8sS33Jyj0xwbnBtRqJdNw | Website - marketingnewscanada.com

Follow Ted Lau: LinkedIn - https://www.linkedin.com/in/ballisticarts/ | Website - https://www.ballisticarts.com/

Follow Lucas Béland & Goloot: Lucas' LinkedIn - https://www.linkedin.com/in/lucas-b%C3%A9land-4ab1b5174/ | Goloot's LinkedIn - https://www.linkedin.com/company/golootinc/ | Website - https://www.goloot.io/

Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
This podcast interview focuses on a product innovation that has the power to run hundreds of engaging webinars without you having to be there. My guest is Melissa Kwan, CEO of eWebinar. Melissa is a 3rd-time bootstrapped founder. Her previous company, Spacio (real estate tech), was acquired in 2019. eWebinar was the product she always dreamt about because she was drowning in sales demos, onboarding and training webinars every day for 5 years. So, in March 2019 she made the jump and founded eWebinar. Her mission: to give people their time back so they can do something more fun, because that's what life is about. We don't need to work harder; we need to work more creatively. This inspired me, and hence I invited Melissa to my podcast. We explore how she built a $1M ARR business from scratch - deliberately going the bootstrapped way. Melissa elaborates on her experiences in building software that helps her customers provide experiences their customers love to engage with. She talks about how getting her first 200 customers was easy but how growing beyond that became a big challenge - and how she overcame it. Last but not least, she shares some of her secrets to creating a SaaS business without getting into the vicious cycle of desperately chasing funding. Here's one of her quotes: "The toughest nut to crack is not getting it launched and adopted with your first 200 customers. The toughest stuff to crack is what happens after you've exhausted your entire network and their friends. That was a huge reality check for me. Because that didn't happen until about a year after launch. Because all I know is sales, I didn't realize that what I need for this business is marketing." During this interview, you will learn four things: Why she, despite her passion for automation, is betting on SEO and LinkedIn content creation instead of cold outreach automation. What her secret is to creating predictable traction (knowing there are no shortcuts).
How doubling her prices became a blessing for her business in many different ways. Why she'd opt to bootstrap her company again if she had the choice. For more information about this week's guest: Melissa Kwan. Website: eWebinar. Subscribe to the Daily SaaS Reflection: get my free, 1-minute daily reflection on shaping a B2B SaaS business no one can ignore. Subscribe here. Yes, it's actually daily. And yes, people actually stay subscribed (just see what peer B2B SaaS CEOs say). My promise: It's short. To the point. Inspiring. And valuable. Learn more about your ad choices. Visit megaphone.fm/adchoices
Today we talk with the successful and talented Sam Mahmood. At age 25, Sam went from being a Medical Device Sales Rep to starting his own company, which he scaled in 5 years to a 7-figure organization. Sam bought 10 multi-family units at the age of 25 in Southern California when everyone said not to. Sam has housed travel nurses for the last 6 years, before many of you even knew about Furnished Finder. And lastly, Sam has taken 6 medical device companies from $0-1M ARR (annual recurring revenue) within 3 years. Today, Sam will encourage us on how to improve our MTR brands and increase our revenue.

Website Links: https://www.furnishedfinder.com/ | https://www.keycheck.com/ | https://www.furnishedfinder.com/stats | https://www.furnishedfinder.com/Resources/PMResources | https://www.travelnursehousing.com/ | https://www.furnishedapartments.com/

Sam's Listings on Furnished Finder: https://www.furnishedfinder.com/members/profile?u=Sam%20Mahmood | https://www.linkedin.com/in/sam-mahmood-06620484/ | www.bettermouthtape.com

Gino Wickman's book, Traction: https://a.co/d/e2oQpuj
James Nestor's book, Breath: https://a.co/d/igCnto0

The Landlord Diaries is brought to you by Furnished Finder, where you can list your property for only $99 per year and no booking fees!
FOUR ACTIONABLE LEADERSHIP TAKEAWAYS

* The best product doesn't always win — the team that has the most conversations with customers, investors, and partners does.
* Every week, test an industry, persona, and problem in your early-stage outbound. Over 12 weeks, you need to rip through 12 market segments to find your ICP.
* Play laptop closed! See how long you can go without opening your demo to sell a deal.
* Don't worry about the two-, three-, or four-year deals. Win customers in year one, and if your product is right, you'll keep them in year two and beyond.

PATH TO PRESIDENT'S CLUB: Partner @ Glasswing Ventures | Growth Advisor @ CloudTruth | Growth Advisor @ Talla | Co-Founder @ Reprise

THE LATEST FROM 30MPC: Tactic TV | Toolkits & Templates | Twitter | YouTube

THINGS YOU CAN STEAL

Prospecting: Lavender: Sales Email Frameworks | ZoomInfo: 5 Plays, 30MPC Style | Woodpecker: Nick's Sales Cadence | Orum: 5 Cold Call Objection Talk Tracks | Owler: 4 Multi-Channel Prospecting Touchpoints (Try Owler Max) | Boomerang: Tactics for Peak Productivity | Hireframe: Fast Track your Prospecting

Discovery & Demo: Clari: How to Sell to the CFO | Calendly: Speed up your sales cycle & increase revenue | Klue: Dismantling Competitors

Sales Process: Demandbase: 6 Templates to Accelerate Deals | Gong: Master Class | Qwilr: Multithreading Power Plays | Outreach: 1 Sequence to Create and 5 Templates to Close | Accord: Business Case Template | Prolifiq: Relationship Mapping Playbook | Salesloft: Selling to Power

ONE ASK: You know we feel a bit awkward asking, but if you made it this far, it would mean the world if you joined our newsletter. It will increase your chances of making President's Club by 227%. Okay, maybe not, but we'd still really love you for it :)
The Top Entrepreneurs in Money, Marketing, Business and Life
He owned 3 franchises of the Estonian-based ride-sharing company Bolt, which he sold. Using that money, he started building Binderr in 2020 and just landed his first customer in March 2023. He still owns 50% of the company and has invested over $2m of his own money. Can he hit $1m in ARR this year building a better QuickBooks for accountants in Malta?
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Howie Liu is the Founder and CEO @ Airtable, the fastest way to build apps for your business. To date, Howie has raised over $1BN with Airtable, with the last round valuing the company at $11BN and an investor base including Benchmark, Thrive, Caffeinated, Greenoaks and Coatue, to name a few.

In Today's Episode with Howie Liu We Discuss:

1. Scaling into Enterprise: What are the single biggest challenges when moving from PLG to enterprise? Why does Howie believe you have only truly hit enterprise when you sign $1M contracts? How long did it take for Airtable to sign their first $1M ARR contract? How can founders know when is the right time to scale into enterprise? How does the product need to change with the scaling?

2. Enterprises: Do They Really Love AI? Why does Howie believe that enterprises are not jumping on AI yet? When does enterprise interest turn into enterprise buying and purchasing? What are the single biggest barriers to enterprises buying AI solutions today? Post-purchase, what are the biggest implementation challenges for enterprises with AI?

3. The Changing Sales Process: Are we seeing the bundling of tools within large enterprises today? Which categories and vendors are most vulnerable? Which will survive the cuts? What do vendors need to do to prove to CFOs that they need to remain in their budget? How has the customer success process changed over the last year with tightening budgets?

4. Howie Liu: AMA: Airtable famously got Benchmark to lead their Series C; how did this come to be when they famously only do Series A? Why does Howie believe that it is total BS to suggest that post-PMF, everything is good? What does Howie know now that he wishes he had known when he started Airtable?
The Top Entrepreneurs in Money, Marketing, Business and Life
A marketing data platform.