George Lange is a photographer whose pictures have appeared in almost all major magazines, from Entertainment Weekly to Esquire. George has shot advertising photos for many movies and TV shows, including Seinfeld, The Today Show, Cake Boss, and Jim Carrey's movies. Most recently, he has worked with Norwest Venture Partners, Twilio, the Richard King Mellon Foundation, and the Grammy Award-winning Pittsburgh Symphony Orchestra. George is known for his ability to see everyone - from celebrities to people in all walks of life - in an honest and joyful way. George's new book, Picturing Joy: Stories of Connection, is a lively guide to his approach to life and the highlights of his career. The book captures his curiosity, energy, and enthusiasm for people and photography, and distills wisdom gleaned from a lifelong search for everyday beauty that might otherwise go unnoticed. Through more than 80 photos and intimate stories, Picturing Joy invites readers to appreciate life with all their senses and change their perspective by being open to new things. Resources: George Lange's websites. Sponsors: Nikon Z8 Camera, Charcoal Book Club, Frames Magazine. Education resources: Momenta Photographic Workshops. Candid Frame resources: download the free Candid Frame app for your favorite smart device. Support the work at The Candid Frame by contributing to our Patreon effort via the Patreon button on the website, or provide a one-time donation. You can follow Ibarionex online.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Rebecca Kaden is a Managing Partner @ Union Square Ventures, one of the leading early-stage firms of the last decade with investments in Twitter, Twilio, Coinbase and many more. Nicole Quinn is a General Partner @ Lightspeed where she has led investments or sits on the board of Calm, Cameo and LunchClub to name a few. Eurie Kim is a Managing Partner @ Forerunner Ventures, the leading early-stage consumer fund. Eurie has led investments and sits on the board of Oura, The Farmers Dog, Curology and more. In Today's Roundtable We Discuss: 1. Seed Rounds: Is it even possible for traditional seed funds to play in a world of multi-stage funds investing so aggressively at the seed stage? Is seed immune to the macro environment? Will seed pricing remain as high as ever? What advice does the team have for seed founders approaching a Series A? What do they need? 2. Series A: How is the Series A market looking today? Is there a crunch at the Series A? To what extent are valuations compressed at the Series A? What 3 core elements do companies at the A stage, looking for a Series B next, need to focus on? 3. Series B and Beyond: Is the real crunch at the Series B? Why are down rounds so much better than structured rounds for companies raising? Will we see a wave of M&A in the next 12 months? 4. Crypto, AI and Hot Takes: Why is now the best time to be investing in crypto? Why is investing in AI a lottery right now? What is the most controversial thing that each believes today?
Jeff Lawson is the cofounder and CEO of Twilio, a $10 billion public software communications company. Jeff has an incredible founder story. Prior to Twilio, he co-founded Versity, which he sold for $30 million when he was just 23 years old. He was also the founding CTO of StubHub and one of the first product managers at AWS. He founded Twilio in 2008. In this episode, Jeff shares his insights into the API economy, scaling successful businesses, and the future of the SaaS business model. Jeff and Auren reflect on the evolution of SaaS and discuss which areas still have yet to be disrupted. Jeff dives into Twilio's fascinating journey, including his takes on raising money in a challenging environment and pitching new technology to investors. He provides a firsthand account of Twilio's journey from startup to billion-dollar public company, including how they built a top-tier sales organization. He and Auren also discuss API business models, the challenges of M&A, and breaking down internal silos that hold companies back. World of DaaS is brought to you by SafeGraph & Flex Capital. For more episodes, visit safegraph.com/podcasts. You can find Auren Hoffman on X at @auren and Jeff Lawson on X at @jeffiel.
Jeevan Singh (@askjeevansingh) returns to join Ken Johnson (cktricky on Twitter) and Seth Law (sethlaw) as a guest on the podcast! Jeevan is currently with Rippling, was previously the Director of Product Security at Twilio, and before that Segment. He has been a long-time leader in security and development communities, and currently heads up the @owaspvancouver group. Tune in for ways to improve Threat Modeling, DevSecOps, and security programs in general.
In this week's episode of the Scale Your Sales podcast, my guest is Alyson Baber, the VP of Commercial Sales at Outreach. Alyson is a mom, wife, and tenured sales leader with a non-linear career path. Her background spans chemical engineering, medical sales, tech sales, and advisory roles. Previously, she helped Zoom scale from $50M to $4B in revenue. She has also served as the Head of Sales at Melio, led new business and expansion sales at SendGrid (now part of Twilio), and worked with various startup teams within Intuit. In this episode, Alyson shares insights from her journey, emphasizing the value of continuous learning and challenges, and the impact of supportive relationships in shaping her career. The episode resonates with the idea that every experience has something to teach us, and that challenging times can lead to more effective outcomes. The conversation delves into the significance of customer experience throughout the buyer journey, with Alyson stressing the need for empathetic customer interaction and the responsibility of the entire organization in delivering exceptional customer experience. We also touch on trends in the tech industry, highlighting the importance of communicating value to customers and the cross-functional nature of customer experience. We discuss: 05:12 – Love growth, learning, and change to expand your skills. 10:16 – Pay it forward, take chances in different roles. 13:18 – Growth is uncomfortable but rewarding; leadership is guiding, not knowing all the answers. 16:04 – Mistakes are opportunities. 25:16 – Customer experience impacts the entire sales process. 28:33 – Parents instilled work ethic and resilience. 30:30 – My family time is my family time and I'm there for them. https://www.linkedin.com/in/alysonbaber/ Janice B Gordon is the award-winning Customer Growth Expert and founder of the Scale Your Sales Framework.
She was named by LinkedIn Sales as one of the 15 Innovating Sales Influencers to Follow in 2021, one of the Top 50 Global Thought Leaders and Influencers on Customer Experience (Nov 2020), and one of the 150 Women B2B Thought Leaders You Should Follow in 2021. Janice helps companies worldwide to reimagine revenue growth through customer experience and sales. Book Janice to speak virtually at your next event https://janicebgordon.com LinkedIn: https://www.linkedin.com/in/janice-b-... Twitter: https://twitter.com/JaniceBGordon Scale Your Sales Podcast: https://scaleyoursales.co.uk/podcast More on the blog https://scaleyoursales.co.uk/blog Instagram: https://www.instagram.com/janicebgordon Facebook: https://www.facebook.com/ScaleYourSal...
(0:00) Intro (0:55) Christian's Early Career and Entrepreneurial Journey (3:14) Finding an opportunity in the market (6:12) Joining SoftBank (7:14) Lessons on venture capital (10:50) Company origin stories (12:03) The Birth of Tableau: From Idea to Company (16:20) Getting right in the early days (20:55) Commercializing something out of academia (23:42) Bootstrapping it (28:23) The growth and success of Tableau (32:38) Reflections on the journey and the future (43:57) Patience and basic metrics in business growth (45:33) The role of external investors in business expansion (48:02) The impact of capital on business ambitions and control (49:38) Choosing the right investor (51:39) The importance of maintaining control in fundraising (54:11) Competition in business growth (58:28) The importance of team chemistry in scaling a business (1:02:19) Impact of disruptive hiring practices on business growth (1:21:26) Power of community and innovating marketing (1:26:16) Global expansion. Produced: Rashad Assir & Leah Clapper. Mixed and edited: Justin Hrabovsky. Executive Producer: Josh Machiz
Our guest today is Joyce Kim, Chief Marketing Officer of Twilio. With a market cap of more than $10 billion, Twilio is the industry-leading platform that powers customer engagement, from voice and video to SMS and email. Joyce dives into two important trends that will shape the future of marketing — generative AI and first-party data. Specifically, Joyce discusses: - Why do so many CMOs start as engineers? - AI trends that will shape the future of marketing. - The difference between first-party and third-party data. - Effective first-party data strategies you should be doing right now. - Why B2B campaigns can be more challenging than B2C campaigns. - Making sense of generative AI among your marketing tech stack. - Favorite tools and use cases with AI in marketing. - Keeping the human element and authentic voice inside of AI marketing campaigns. Learn more at https://tractionconf.io Learn more about Twilio at https://www.twilio.com/ This episode is brought to you by: Leverage Community-Led Growth to skyrocket your business. From Grassroots To Greatness by author Lloyed Lobo will help you master 13 game-changing rules from some of the most iconic brands in the world, like Apple, Atlassian, CrossFit, Harley-Davidson, HubSpot, Red Bull and many more to attract superfans of your own that will propel you to new heights. Grab your copy today at FromGrassrootsToGreatness.com. Each year the US and Canadian governments provide more than $20 billion in R&D tax credits and innovation incentives to fund businesses. But the application process is cumbersome, prone to costly audits, and receiving the money can take as long as 16 months. Boast automates this process, enabling companies to get more money faster without the paperwork and audit risk. We don't get paid until you do! Find out if you qualify today at https://Boast.AI. Launch Academy is one of the top global tech hubs for international entrepreneurs and a designated organization for Canada's Startup Visa.
Since 2012, Launch has worked with more than 6,000 entrepreneurs from over 100 countries, of which 300 have grown their startups to seed and Series A stage and raised over $2 billion in funding. To learn more about Launch's programs or the Canadian Startup Visa, visit https://LaunchAcademy.ca. Content Allies helps B2B companies build revenue-generating podcasts. We recommend them to any B2B company that is looking to launch or streamline its podcast production. Learn more at https://contentallies.com. #product #marketing #innovation #startup #generativeai #AI
In this episode of Enterprising Insights, Sherril Hanson, Senior Analyst at The Futurum Group, joins host Keith Kirkpatrick, Research Director, Enterprise Applications, at The Futurum Group, for a conversation about leveraging technology platforms to deliver excellent customer experiences, focusing specifically on contact center as-a-service offerings, the importance of omnichannel engagement capabilities, mobile applications, and generative AI. We'll also cover some recent news and newsmakers in the customer experience software market. Finally, we'll close out the show with our “Rant or Rave” segment, where we pick one item in the market, and we'll either champion or criticize it. You can watch the video here and subscribe to our YouTube channel if you've not yet done so.
(0:00) Intro (1:01) Why data isn't the new oil (2:42) Data is the new code (7:47) Outsourcing problems (10:34) Reinforcement learning and generative AI (16:52) How execs and founders should think about AI (18:59) AI vs computers, the internet, mobile, and other tech trends (22:38) The next two to three years of AI will define decades (26:05) Atomic bomb analogy (28:30) Alexandr's biggest AI worries (31:17) Undercover AI predictions (35:55) Three components of model development (39:57) Under-appreciating our reliance on Taiwan (41:56) Democratization of AI (43:54) The Turing Trap (45:52) Misconceptions About AI (47:29) Geopolitics and Artificial Intelligence (53:00) The UK AI Summit (54:45) What role does the government play in regulating AI? (1:14:05) The risks of hiring from big brands (1:19:36) The business of Scale AI (1:22:25) Influence of playing violin (1:24:19) Alexandr's childhood with physicist parents (1:27:33) Being weird means being interesting (1:29:09) The future of AI. Mixed and edited: Justin Hrabovsky. Produced: Rashad Assir. Executive Producer: Josh Machiz. Music: Griff Lawson
Guillermo Rauch is the CEO of Vercel, a frontend-as-a-service product that was valued at $2.5b in 2021. Vercel serves customers like Uber, Notion and Zapier, and their React framework - Next.js - is used by over 500,000 developers and designers worldwide. Guillermo started his first company at age 11 in Buenos Aires and moved to San Francisco at age 18. In 2013, he sold his company Cloudup to Automattic (the company behind WordPress), and in 2015 he founded Vercel. — In today's episode we discuss: Guillermo's fascinating path into tech Learnings from building Cloudup and selling the company to Automattic (the company behind WordPress) Vercel's origin story and path to product market fit How to make an open source business successful Vercel's unique philosophy on developer experience Insights and predictions on the future of AI — Referenced: Algolia: https://www.algolia.com/ Apache Zookeeper: https://zookeeper.apache.org/ Apache Kafka: https://kafka.apache.org/ AWS: https://www.aws.training/ C++: https://www.techtarget.com/searchdatamanagement/definition/C Clerk: https://clerk-tech.com/ Cloudup: https://cloudup.com/ Commerce Cloud: https://www.salesforce.com/products/commerce/ Contentful: https://www.contentful.com/ Debian: https://www.debian.org/ Fintool: https://www.fintool.com/ Figma: https://www.figma.com/ GitLab: https://about.gitlab.com/ IRC: https://en.wikipedia.org/wiki/Internet_Relay_Chat KDE: https://kde.org/ Linux: https://en.wikipedia.org/wiki/Linux Mozilla: https://www.mozilla.org MooTools (UI library): https://mootools.net/ Next.js: https://nextjs.org/ React Native: https://reactnative.dev/ Red Hat: https://www.redhat.com/ Redpanda: https://redpanda.com/ Resend: https://resend.com/ Rust: https://www.rust-lang.org/ Salesforce: https://www.salesforce.com Servo: https://servo.org/ Shopify: https://www.shopify.com/ Socket.io: https://socket.io/ Symphony: https://symphony.com/ Trilio: https://trilio.io/ Twilio: https://www.twilio.com Vercel: 
https://vercel.com/ V0.dev: https://v0.dev/ — Where to find Guillermo: Twitter/x: https://twitter.com/rauchg LinkedIn: https://www.linkedin.com/in/rauchg/ Personal website: https://rauchg.com/ — Where to find Todd Jackson: Twitter: https://twitter.com/tjack LinkedIn: https://www.linkedin.com/in/toddj0 — Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter: https://twitter.com/firstround Youtube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast — Timestamps: (02:35) Becoming an “internet celebrity” at age 11 (08:30) Guillermo's first company: Cloudup (11:09) Biggest learnings from Cloudup and WordPress (15:06) The insights behind starting Vercel (17:11) Sources of validation for Vercel (20:29) How Vercel formed its V1 product (23:25) Navigating the early reactions from competitors and users (25:58) The paradox of developers and how it impacted Next.js (31:20) Advice on finding product market fit (34:48) The forces behind a trend towards "Front-end Cloud” (38:35) Why people now pay so much attention to the front-end (40:06) How to make an open source business successful (44:54) Insights on product positioning and category creation (48:52) Vercel's journey through becoming multi-product (51:44) Guillermo's take on the future of AI (53:43) Heuristics for building better product experiences (55:49) AI insights from Vercel's customers (57:37) How AI might change engineering in the next 10-20 years (62:43) Guillermo's favorite advice (65:45) Guillermo's advice to himself of 10 years ago
On the podcast: The ultimate freemium strategy, making low-risk bets with potentially asymmetrical outcomes, and how Aaron bounced back after almost running out of money. Top Takeaways
Skippy rips Doogles on his Twilio position. Doogles covers a recent Wall Street Journal article about mini-millionaires. Skippy discusses the existence of free will according to Stanford professor Robert Sapolsky. Doogles continues his tirade on the state of the American consumer. The episode wraps with Twitter losing authenticity, life lessons, and Jamie Dimon selling shares. Join the Skippy and Doogles fan club. You can also get more details about the show at skippydoogles.com, show notes on our Substack, and send comments or questions to skippydoogles@gmail.com.
ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning
Discover Twilio's latest innovation in CustomerAI as they introduce groundbreaking features that can predict customer churn. Join us to learn how this technology is reshaping the way businesses interact with their customers, ensuring a more seamless and personalized experience. Don't miss out on the future of customer engagement! Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai
(0:00) Intro (0:52) How to make a podcast (2:31) The Logan Bartlett Show story (11:07) Different mental milestones (13:55) The crypto era with Zach Weinberg (17:53) Find your niche (22:20) Starting to do interviews (27:14) Importance of consistency (32:09) Describe the Logan Bartlett Show (35:51) Same strategy as working with founders (38:56) Far from a perfect product (41:44) Benefits of YouTube (47:44) Not looking back and agonizing (53:05) Internet comments and feedback. Mixed and edited: Justin Hrabovsky. Produced: Rashad Assir. Executive Producer: Josh Machiz. Music: Griff Lawson
“HR Heretics” | How CPOs, CHROs, Founders, and Boards Build High Performing Companies
Colleen McCreary, a seasoned Chief People Officer (Zynga, Credit Karma, EA, Twilio) turned venture capitalist at Ribbit Capital, sits down with Kelli Dragovich and Nolan Church. This insider conversation covers tactical advice for navigating mergers and acquisitions (she's been through multiple), layoffs, recruiting top talent, and transitioning from an HR leader into the world of the boardroom. Before the interview, Nolan and Kelli also discuss the recent negative press for Carta's CEO Henry Ward, and offer listeners a behind-the-scenes understanding of executive departures and settlement packages. If you're looking for HR software that drives performance, check out Lattice https://www.lattice.com/hrheretics – SPONSORS: Lattice | Continuum ✅ Discover HR software that drives performance with Lattice: https://www.lattice.com/hrheretics High performance and great culture should never be at odds; they're better together. With the Lattice People Management Platform, companies efficiently run people programs that create enviable cultures where employees want to do their best work. Serving 1000s of customers of all sizes. Learn why companies from Slack to the LA Dodgers choose Lattice. https://www.lattice.com/hrheretics ✅ Hire Fractional Executives with Continuum: HR Heretics listeners get 15% off of their first month using this link (https://bit.ly/40hlRa9) Have you ever had a negative experience hiring executives? Continuum connects executives and senior operators to venture-backed tech companies for fractional and full-time roles. You can post any executive-level role to Continuum's marketplace and search through our database of world-class, vetted leaders. There is no hidden cost; you only pay the person you hire. And you can cancel at any time.
HR Heretics listeners get 15% off – KEEP UP WITH COLLEEN, NOLAN, + KELLI ON LINKEDIN Colleen:https://www.linkedin.com/in/colleenmccrearychiefpplofficer/ Nolan:https://www.linkedin.com/in/nolan-church/ Kelli: https://www.linkedin.com/in/kellidragovich/ – LINKS: - Ribbit Capital: https://ribbitcap.com/ - Henry Ward's blog post about negative press: https://henrysward.medium.com/what-i-tell-employees-about-negative-press-7c134e7a601c – TIMESTAMPS: (00:00) Episode Preview (01:43) The Carta story: executive terminations and lawsuits (05:15) Rolling Layoffs (13:45) Setting expectations after a layoff event (17:49) Sponsors: Lattice | Continuum (19:23) Employee accountability and RTO debate (21:00) CHRO/CPO's responsibility to be the voice for the company (26:20) Moral challenges and breaking up with a CEO - infamous Zynga story (31:28) Colleen's experience being an advisor for Silicon Valley, the HBO series (34:25) Recruiting lessons from Silicon Valley (36:20) Colleen's tactical lessons from multiple M&As (43:30) Most important ‘gotchas' to watch out for in M&A process (47:08) Transitioning from HR to VC (50:40) Being a long date in the interview process to create opportunities (53:15) Navigating the choice between CPO at a big company, COO, and VC (55:40) Colleen's best hire(s): Mike Vernal and his crew (58:28) Colleen's best interview question – HR Heretics is brought to you by Turpentine www.turpentine.co Producer: Natalie Toren Production: Graham Bessellieu, Michelle Poreh For inquiries about guests or sponsoring the podcast, email Natalie@turpentine.co
In today's afternoon episode we welcome Adam Probst, CEO of ZenML, and talk with him about the successful extension of the company's seed financing round to $6.4 million. ZenML develops an extensible open-source framework for building production-ready machine learning pipelines. It abstracts away infrastructure complexity for machine learning engineers without locking them into a single vendor, offering a unified experience across all major platforms such as AWS, GCP, and Azure. This allows companies to manage cross-cloud workloads effectively. In addition, ZenML's existing integrations with more than 50 ML tools, including HuggingFace, Weights & Biases, and MLflow, add adaptability and convenience, providing a high degree of strategic flexibility through cloud-agnostic integrations. One example of the solution's value is the integration of orchestration tools with experiment-tracking tools: instead of working with fragmented pipelines, ZenML provides a central framework that connects these tools in a coherent, standardized way. Users can switch from local development to scaling in the cloud with a single click. Since the beginning of 2023, the startup has also offered a fully managed cloud service for a select group of customers. This service builds on the open-source core and extends its capabilities with features such as single sign-on, role-based access control, and delivery integrations. ZenML was founded in Munich in 2021 by Adam Probst and Hamza Tahir. The open-source framework has now announced a $3.7 million extension of its seed round, bringing the total to $6.4 million. The extension was led by Point Nine and supported by existing investor Crane. Business angels also participated, including D. Sculley, CEO of Kaggle; Harold Giménez, SVP R&D at HashiCorp; and Luke de Oliveira, former Director of Machine Learning at Twilio. The fresh capital is intended to support the launch of ZenML Cloud.
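The pipeline idea described above - plain functions as steps, wired together by a framework that can run them locally or hand them to a cloud orchestrator - can be sketched in a few lines of Python. This is an illustrative imitation of the decorator style ZenML popularized, not ZenML's actual API; the names `step`, `Pipeline`, `load_data`, and `train` are all hypothetical:

```python
from typing import Callable, List


def step(fn: Callable) -> Callable:
    """Mark a function as a pipeline step (here: just tag the function)."""
    fn.is_step = True
    return fn


class Pipeline:
    """Chains steps so each step's output feeds the next step's input."""

    def __init__(self, steps: List[Callable]):
        self.steps = steps

    def run(self, value):
        # A real framework would dispatch each step to an orchestrator
        # backend and log artifacts to an experiment tracker.
        for s in self.steps:
            value = s(value)
        return value


@step
def load_data(_):
    return [1.0, 2.0, 3.0, 4.0]


@step
def train(data):
    # Stand-in "training": the mean of the data as a toy model parameter.
    return sum(data) / len(data)


pipeline = Pipeline([load_data, train])
print(pipeline.run(None))  # 2.5
```

The interesting part is that the step functions never mention infrastructure: swapping `Pipeline.run` for a cloud backend is exactly the local-to-cloud switch the episode describes ZenML making with one click.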
Ken Johnson (cktricky) and Seth Law (@sethlaw) welcome Leif Dreizler back on the show! Leif recently became a Senior Manager of Software Engineering at Semgrep (semgrep.dev), after spending the better part of a decade working in product security and security software engineering at Twilio and Segment (segment.io). He is also a co-host of the 404 Security Not Found podcast.
(0:00) Intro (0:38) Taking the CEO job at MongoDB (2:34) First things Dev changed at MongoDB (5:53) When unicorns were actually rare (7:50) Overcoming Monetization Challenges of Open Source (9:54) MongoDB Atlas and the license change? (19:18) What is the job of the CEO? (22:49) Vulnerability is a strength (27:03) The power of self-awareness as a Leader (29:37) Building an A+ culture (32:43) Holding people accountable (35:04) Keeping feedback loops tight (36:22) How hybrid work helps MongoDB thrive (38:09) RIFs (40:20) 3 steps for holding people accountable (42:03) Why you should always be recruiting (43:55) Dev's unique recruiting tactics (45:56) Favorite interview questions (46:53) Hiring internally vs externally (50:53) Finding passion for sales (52:58) The perfect job doesn't exist (55:00) Running BladeLogic (57:18) Ben Horowitz, Marc Andreessen, and John McMahon (1:02:36) How does AI compare to past tech trends? (1:05:47) Conventional Silicon Valley wisdom Dev disagrees with. Mixed and edited: Justin Hrabovsky. Produced: Rashad Assir. Executive Producer: Josh Machiz. Music: Griff Lawson
Hey CX Nation, In this week's episode of The CXChronicles Podcast #212 we welcomed Tom Martin, CEO at Glance, based in Wakefield, MA. Glance makes it easy to provide personal, human-to-human customer experience in digital environments. Glance's in-the-moment web and mobile cobrowse, screen share, and video solutions increase customer satisfaction, ensure brand loyalty, and drive revenue. The world's largest enterprises trust Glance to empower their customer-facing teams and deliver frictionless, at-the-exact-right-moment customer consultation. With integrations with Salesforce, Twilio, Genesys, ServiceNow, and more, Glance fits seamlessly into even the most sophisticated CRM-CX tech stack. Discover the power of Glance guided CX at https://www.glance.cx/cxchronicles In this episode, Tom and Adrian chat through how he has tackled The Four CX Pillars: Team, Tools, Process & Feedback, and shares tips & best practices that have worked across his own customer-focused business leader journey. **Episode #212 Highlight Reel:** 1. Bringing human & digital worlds together to create world class customer experiences 2. Prioritizing customer conversations & identifying solutions to find new customers 3. Creating an omni-channel tech-stack to meet your customers where they already are 4. Building a cohesive, understandable and repeatable story to engage your customers 5.
Leveraging feedback to drive growth & continuous improvement out of the gates. Huge thanks to Tom for coming on The CXChronicles Podcast and featuring his work and efforts in pushing the customer experience & customer success space into the future. Click here to learn more about Tom Martin. Click here to learn more about Glance. If you enjoy The CXChronicles Podcast, stop by your favorite podcast player and leave us a review today. You know what would be even better? Go tell one of your friends or teammates about CXC's content, CX/CS/RevOps services, our customer & employee focused community & invite them to join the CX Nation! Are you looking to learn more about the world of Customer Experience, Customer Success & Revenue Operations? Click here to grab a copy of my book "The Four CX Pillars To Grow Your Business Now" available on Amazon or the CXC website. For you non-readers, go check out the CXChronicles Youtube channel to see our customer & employee focused video content & short-reel CTAs to improve your CX/CS/RevOps performance today (politely go smash that subscribe button). Contact us anytime to learn more about CXC at INFO@cxchronicles.com and ask us about how we can help your business & team make customer happiness a habit now! Huge thanks to our newest CXCP sponsor Glance. Visit their website today at https://www.glance.cx/cxchronicles Support the show: Contact CXChronicles Today Tweet us @cxchronicles Check out our Instagram @cxchronicles Click here to checkout the CXC website Email us at info@cxchronicles.com Remember To Make Happiness A Habit!!
The major banks are cashing in on high interest rates, but cracks continue to show in the picture of the consumer. (00:21) Emily Flippen and Jason Moser discuss: - What big interest rates mean for big banks and the latest insights from Jamie Dimon. - Pepsi's earnings showing signs that growth might be propped up by price hikes. - Atlassian's $1B acquisition of Loom, the market reaction to the Birkenstock IPO, and Spotify's latest audio push. (19:11) Bloomberg's Zeke Faux talks about the trial of FTX's Sam Bankman-Fried with Motley Fool Money's Deidre Woollard. (32:57) Jason and Emily break down two stocks on their radar: Outset Medical and Twilio. Stocks discussed: JPM, WFC, PEP, TEAM, BIRK, SPOT, OM, TWLO Host: Dylan Lewis Guests: Emily Flippen, Jason Moser, Deidre Woollard, Zeke Faux Engineers: Dan Boyd
(0:00) Intro (0:42) Joining Google vs Stripe (14:17) Holding yourself to a high standard (21:03) Making the hire (24:32) Player vs Victim (28:52) Balance of discussing accomplishments (30:51) Org structure in job interviews (34:07) The most successful hires drove me crazy (37:12) Running a team with inclusivity (41:01) Operating principles (48:06) Say the thing you think you cannot say (53:31) Distinguish between management and leadership (58:32) The golden rule is not a great tactic for a leader or manager (1:03:01) Job titles are cheap (1:11:36) Too many companies make their principles so idealistic that they don't resonate with employees (1:18:57) Don't let perfect be the enemy of good (1:23:38) Disappointing people at the rate they can absorb (1:30:07) Manage people out of seats earlier (1:37:03) Being on the board of a company built to last (1:42:38) What makes the Collisons so special? (1:45:08) You pack it, you carry it. Mixed and edited: Justin Hrabovsky. Produced: Rashad Assir. Executive Producer: Josh Machiz. Music: Griff Lawson
Jamie Tischart, CTO @ BetterCloud, discusses how to identify inefficiencies within your eng org & strategies for creating more efficient teams! He shares his experience at BetterCloud as one of several concurrent senior hires & why he discovered that coordination is key when hiring multiple new senior hires at once. We also chat about strategies for identifying / removing barriers that are getting in the way of your eng team's success, making / communicating meaningful decisions, identifying inefficiencies that come up within cross-functional relationships, and frameworks for turning communication from your org's greatest area of inefficiency into an asset. ABOUT JAMIE TISCHART Jamie Tischart serves as the Chief Technology Officer, where he is responsible for the broader technology strategy of the business as well as the day-to-day leadership of development, quality and architecture teams for the BetterCloud product suite. Jamie brings nearly three decades of experience leading technology and product teams through periods of rapid growth and product transformation. Most recently, he served as GM and VP, Software Engineering of the Marketing & Growth BU at Twilio, where he was responsible for defining the product and technical strategy while focusing on the customer experience and driving highly reliable solutions for Twilio's customers. Previously, Tischart was VP, Technical Operations at SendGrid, where he led the company's Global Data Center and DevOps strategies across the business. Prior to that, he served as CTO of the Security-as-a-Service business at McAfee, where he led the creation of Intel Security's future generation Cloud solutions. He has also held leadership roles at a number of other prominent technology companies including Intel. Jamie holds an MBA from Aspen University and a B.A. from St. Lawrence College. “The biggest value for us is to move purposely and slowly and hear what is going on before we make decisions.
The second was, what is the biggest problem that we are coming in to solve? Like why were each of us hired? It came down to, and this was difficult for a couple of the department heads was, 'Look, the biggest problem isn't in your area. I'm sorry. Yes. Let's put together a plan of how do we improve and what do we focus on in your group? But quite frankly, the focus is going to be in this area.' That was difficult for people to accept.”
- Jamie Tischart

Check out Jellyfish's Scenario Planner to help you accelerate your development! With Jellyfish's Scenario Planner, you can analyze tradeoffs and optimize resources - to ensure your highest priority initiatives meet your delivery goals and deadlines! To learn more about how Scenario Planner can help you better accelerate, predict & plan your software delivery
This is a repost of episode 12. Kris Nova passed away in a climbing accident on August 16. I thought it was fitting to repost this conversation I had with her and Dave Fogle about their experiences being homeless.

There is a memorial for Kris on the Nivenly site. It's backed by a GitHub repo, if you'd like to share your memories of her. https://nivenly.org/memorials/krisnova/

Episode transcript

Original show notes:

In this episode Rich speaks with Dave Fogle from Civo and Kris Nova from Twilio. Topics include: how Dave's tweet “From homeless to Kubecon in 7 years” brought us together, Dave and Kris's stories about how they became homeless, how they took care of their gear, their love of open source software and how they used it to level up, and how they transitioned from being on the street to tech jobs.

Thanks for all of the support that the podcast is getting on Patreon. If you'd like to help keep the podcast sustainable for only $2 a month, you can get more info here. Listening is a great way to support the podcast, too, so thank you.

Links:
Dave's tweet that got this all started
Thank you POP for bringing me into the thread
Kurt Kemple
The Privilege Escalation Foundation (Kris's non-profit)
Open Sourcing Mental Illness
Logo by the amazing Emily Griffin.
Music by Monplaisir.

Thanks for listening. ★ Support this podcast on Patreon ★
ChatGPT-style AI can tackle the drudge work of responding to RFPs faster than humans. Sales teams at Google, Twilio, and others say productivity is spiking. Read this story here. Learn more about your ad choices. Visit megaphone.fm/adchoices
Show notes: https://www.youtube.com/watch?v=GIiaFW874q8

(0:00) Intro
(1:20) Joining OpenAI
(14:31) Are scaling and AI safety intertwined?
(20:31) Anthropic Early Days
(24:04) Amazon's Investment in Anthropic
(24:19) FTX investment in Anthropic
(25:50) Anthropic's Business Today
(30:51) Dario's Advice For Builders
(34:26) Should we pause AI progress?
(36:27) Future of AI
(37:55) Dario's Biggest AI Safety Concerns
(44:57) How Anthropic Deals With AI Bias
(49:29) Anthropic's Responsible Scaling Policy
(56:36) Testifying in front of Congress
(59:25) Will AI destroy humanity?
(1:00:00) GPT3 vs GPT4
(1:02:26) The memification of a CEO
(1:09:30) What are you most surprised by with AI?
(1:17:03) Why don't you like the term AGI?
(1:21:50) 2024 AI Predictions
(1:33:45) Dario's opinion on open-source models
(1:38:03) Probability of AI Catastrophe
(1:40:52) Misuse of AI
(1:14:44) Looking ahead: Dario's optimistic outlook on AI

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
Wade Foster is the Co-founder & CEO at Zapier, a platform for building workflow automations without a developer. Zapier was started in 2011 in Columbia, Missouri, and by 2021 it was valued at $5B, having raised only $1.3M. Prior to founding Zapier, Wade had just two professional jobs, and had never managed or hired anyone. He worked as a PM on a web app used by 20k students, and as an Email Marketing Manager at Veterans United - a role that had a significant influence on Zapier's eventual success. In today's episode, we discuss: The stories and thinking behind Zapier's most unorthodox decisions How Wade thinks about product market fit How Zapier built their powerful distribution engine The fascinating story of Veterans United, and its impact on Zapier How Wade thinks about fundraising Why Wade lives by “don't hire ‘til it hurts” Key lessons on people management Referenced: Basecamp: https://basecamp.com/ Bingo Card Creator: https://www.bingocardcreator.com Bryan Helmig, Co-founder of Zapier: https://www.linkedin.com/in/bryanhelmig John Wooden quote: https://www.thewoodeneffect.com/be-quick-but-dont-hurry/ Mailchimp: https://mailchimp.com/ Mike Knoop, Co-founder of Zapier: https://www.linkedin.com/in/mikeknoop Patrick McKenzie, creator of Bingo Card Creator: https://www.linkedin.com/in/patrickmckenzie/ PayPal: https://www.paypal.com/ Salesforce: https://www.salesforce.com/ SMBs: https://www.techtarget.com/whatis/definition/SMB-small-and-medium-sized-business-or-small-and-midsized-business Stripe: https://stripe.com/ Thinking in Bets by Annie Duke: https://www.amazon.com.au/Thinking-Bets-Annie-Duke/dp/0735216355 Tony Xu, CEO of DoorDash: https://www.linkedin.com/in/xutony/ Twilio: https://www.twilio.com/ Veterans United Home Loans: https://www.veteransunited.com/ Zapier: https://zapier.com/ Where to find Brett Berson Twitter: https://twitter.com/brettberson LinkedIn: https://www.linkedin.com/in/brett-berson-9986094/ Where to find Wade Foster Twitter: 
https://twitter.com/wadefoster LinkedIn: https://www.linkedin.com/in/wadefoster Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter: https://twitter.com/firstround Youtube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast Timestamps (05:46) The fascinating story of Veterans United (06:55) Lessons from Veterans United (08:35) The most important things Zapier got right (10:13) How Zapier built their powerful distribution engine (16:56) Why Zapier didn't move to focusing on enterprise (19:06) How Wade thinks about product market fit (24:26) The role of skill vs luck in Zapier's success (26:23) What was hard about building Zapier (30:03) Key lessons on people management (32:35) Rule of thumb: "don't hire ‘til it hurts” (36:42) Zapier's #1 hiring mistake (42:50) How to test for scrappiness in the hiring process (44:31) Do hiring playbooks transfer between companies? (50:01) The 12 year evolution of Zapier's product (53:20) How Zapier makes product decisions (55:40) How Zapier thought about competition (60:11) How to foster intellectual honesty in yourself and your org (65:35) The people who most impacted Wade's worldviews
AI Hustle: News on Open AI, ChatGPT, Midjourney, NVIDIA, Anthropic, Open Source LLMs
In this episode, we dive into Twilio's latest innovation, CustomerAI, and how it can predict customer churn to benefit businesses and enhance user experiences. Join us as we explore the exciting features and potential impact of this cutting-edge technology in the world of customer engagement. Learn how AI is revolutionizing customer relationships and retention strategies.

Get on the AI Box Waitlist: https://AIBox.ai/
Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/
Follow me on Twitter: https://twitter.com/jaeden_ai
Today we're going to talk about Customer Journey Orchestration, with a focus on SMS and messaging to help brands drive greater customer engagement and personalize the customer experience for consumers who quickly grow impatient when saturated with impersonal messages from the brands they support. In fact, according to a recent report by Twilio, two-thirds of consumers say they'll quit a brand if their experience isn't personalized. I've seen in my own work with leading brands that customer journey orchestration can help with this, and to help me discuss this topic, I'd like to welcome Trent Rossini, Managing Director at inQuba.

RESOURCES
inQuba website: https://www.inquba.com
The Agile Brand podcast website: https://www.gregkihlstrom.com/theagilebrandpodcast
Sign up for The Agile Brand newsletter here: https://www.gregkihlstrom.com
Get the latest news and updates on LinkedIn here: https://www.linkedin.com/company/the-agile-brand/
For consulting on marketing technology, customer experience, and more visit GK5A: https://www.gk5a.com
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
The Agile Brand is produced by Missing Link—a Latina-owned strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Ari Zelmanow is Head of UX Research for Twilio's Communications business. A retired police detective turned market detective, Ari now investigates human behavior as it relates to shopping, buying, and using products to deliver valuable outcomes for businesses. In this episode, Ari and Roddy dive into how adopting a newsroom approach enables research teams to build knowledge and drive impact throughout the entire organization. Ari shares his unique framework for growing researchers into trusted consultants for business leadership, and much more. Here, we outline three takeaways and encourage you to download the full transcript below for more. One of the most valuable tools researchers have at their disposal is feedback - from the customers you have to the ones you want to have. Sign up for a free trial with the DISQO CX platform to get started.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Kevin Niparko is the VP of Product @ Twilio. Kevin joined Twilio through the acquisition of Segment, where he spent an incredible 8 years in numerous different roles, including as Head of Product. Before entering the world of product, Kevin was a Management Associate at the world-renowned Bridgewater Associates. In Today's Episode with Kevin Niparko We Discuss: 1. From Bridgewater to Head of Product: How did Kevin make his way from the world of asset management and analytics to leading product teams? What are 1-2 of Kevin's biggest takeaways from his time at Bridgewater with Ray Dalio? How did the 8-year journey with Segment leading to their $3BN acquisition impact his approach to product? 2. What Makes a Great Product Person: Does Kevin believe that product is more art or science? If he were to put a number on it, what would it be out of 100? Why does Kevin believe that all product people should learn to write? Why does Kevin believe that the best product people are generalists and not specialists? Why does Kevin think that analytics is an insanely good start for product people? 3. How to Hire the Best Product People: How does Kevin approach the hiring process for product hires today? What are the non-obvious traits of hires he looks for? How does he test for them? Does Kevin use case studies? Where do many fall down? What do the best do? 4. Product Reviews: Good vs Great: How often does Kevin do product reviews? Who is invited? How have product reviews changed in a world where the company is now fully remote? What is the difference between good and great product reviews? What is the single best product decision Kevin has made? What did he learn? What is the worst product decision Kevin made? How did that change his approach?
Wired article on Lee Holloway: https://www.wired.com/story/lee-holloway-devastating-decline-brilliant-young-coder/

(0:00) Intro
(1:12) The internet was a fad
(3:49) What is Cloudflare?
(6:42) Project Honeypot
(12:53) Meeting Michelle
(19:12) Buying into the vision
(23:04) The internet was built to support the porn industry
(28:48) Borrowing Money
(37:18) Transparency
(43:08) On Lee Holloway
(52:23) Philosophy behind hiring and talent
(1:00:45) Prioritize curiosity and empathy
(1:02:54) The fairness of an initial offer
(1:07:11) Titles are cheap
(1:21:59) How has being an English major manifested itself?
(1:25:14) Building explosives in high school
(1:32:38) Engaging with fans and critics
(1:38:32) Going public
(1:46:39) Conventional Silicon Valley wisdom

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
If a company wants to offer in-app messaging or voice calls, they don't have to develop the infrastructure themselves when there's something like Twilio at their disposal.
(0:00) Intro
(0:39) Doing venture capital and launching a podcast
(6:30) How does culture get created within a company?
(10:14) Is a company more of a sports team or a family?
(11:42) No 100 hour work weeks
(16:13) Employee retention
(24:19) What you want to know when hiring
(30:06) Advantages startups have over big companies
(37:20) What should all founders know about letting people go?
(45:55) Finding product market fit
(54:22) Building relationships with VCs
(1:01:00) Hating software acquisitions
(1:07:46) Sam Altman, brother of the HR software guy
(1:09:43) Thoughts on AI and its future
(1:13:08) Saying the same thing over and over again in leadership
(1:20:06) Thoughts on San Francisco
(1:21:31) Logan Bartlett: VC, podcaster, meme king
(1:25:49) What are you investing in now?

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
(0:00) Intro
(2:01) Founder vs Investor
(9:04) Two constituents that a VC is serving
(14:19) Structure of new book
(16:14) Balance between numbers and schmoozing
(21:11) Liz's absolutes of fundraising
(28:31) Beyond the natural absorption of the market
(36:21) What is the purpose of a board?
(43:27) VCs as an asset class
(47:54) Feeling like a babysitter in board meetings
(59:34) Would you have bootstrapped it in retrospect?
(1:05:47) Should founders always remain CEOs?
(1:13:01) Absolutes of fundraising disagreements
(1:20:51) Talking to associates
(1:26:33) Capital as an effective means of scaling
(1:29:23) Sandy Lerner, founder of Cisco
(1:37:06) Did Logan learn anything from the founder's perspective?

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
Hey CX Nation,

In this week's episode of The CXChronicles Podcast #208 we welcomed Lloyed Lobo, author of From Grassroots To Greatness and co-founder at Boast.AI & Traction.

Lloyed co-founded and helped bootstrap Boast.AI to 8 figures ARR. Boast is a fintech platform that provides businesses with R&D and innovation funding. He also co-founded and helped bootstrap Traction to more than 100,000 entrepreneurs and innovators. Traction is a global community that brings leaders behind the fastest-growing companies such as Shopify, Atlassian, Twilio, MailChimp, Github, Intercom, Calendly, Zapier, and more to share learnings on how to build and scale companies via podcasts, meetups, retreats, and conferences.

In this episode, Lloyed and Adrian chat through how he has tackled The Four CX Pillars: Team, Tools, Process & Feedback and shares tips & best practices that have worked across his own customer focused business leader journey.

**Episode #208 Highlight Reel:**
1. Growing up in the slums of Mumbai & understanding the necessity of community
2. Compound interest on consistent actions & performance yields huge gains over time
3. Every company in the world begins as a human to human experience
4. Why luck and risk are two sides of the same coin, you have to keep flipping the coin
5. 
Spend time understanding the puzzle pieces in your customer journey to find success

Huge thanks to Lloyed for coming on The CXChronicles Podcast and featuring his work and efforts in pushing the customer experience & customer success space into the future.

Click here to learn more about Lloyed Lobo
Click here to learn more about Lloyed's new book From Grassroots To Greatness

If you enjoy The CXChronicles Podcast, stop by your favorite podcast player and leave us a review today. You know what would be even better? Go tell one of your friends or teammates about CXC's content, CX/CS/RevOps services, our customer & employee focused community & invite them to join the CX Nation!

Are you looking to learn more about the world of Customer Experience, Customer Success & Revenue Operations? Click here to grab a copy of my book "The Four CX Pillars To Grow Your Business Now" available on Amazon or the CXC website. For you non-readers, go check out the CXChronicles Youtube channel to see our customer & employee focused video content & short-reel CTAs to improve your CX/CS/RevOps performance today (politely go smash that subscribe button).

Contact us anytime to learn more about CXC at INFO@cxchronicles.com and ask us about how we can help your business & team make customer happiness a habit now!

Huge thanks to our newest CXCP sponsor Timetoreply. Visit their website today at www.timetoreply.com/cxc.

Support the show
Show Notes:
Inventing on Principle Talk: https://youtu.be/PUv66718DII?si=FQgWF4018HGpin7a

(0:00) Intro
(1:56) Essentialism
(6:01) No one is a natural great manager
(15:01) The Friedman Doctrine
(22:42) Distinction between behavior and core values
(26:23) Two sides to any coin
(32:31) Choosing who to do business with
(38:31) Working remote
(44:57) Early days of Webflow
(57:07) Looking for silver bullets but finding lead bullets
(1:05:57) The YC interview and walking out of Oblivion
(1:10:01) Joining YC
(1:14:57) From Skype calls to brand ambassadors
(1:24:02) Product roadmap and development
(1:27:24) Graduation vs churn
(1:37:10) Taking your time raising money
(1:24:17) Big product market fit
(1:44:59) Pros and cons of venture capital
(1:52:10) Not a zero sum
(1:57:01) B2C versus B2B
(2:04:22) Immigrating from Russia
(2:11:24) Influence of Pixar
(2:15:07) Favorite interview question

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
As alluded to on the pod, LangChain has just launched LangChain Hub: “the go-to place for developers to discover new use cases and polished prompts.” It's available to everyone with a LangSmith account, no invite code necessary. Check it out!

In 2023, LangChain has speedrun the race from 2:00 to 4:00 to 7:00 Silicon Valley Time. From the back to back $10m Benchmark seed and (rumored) $20-25m Sequoia Series A in April, to back to back critiques of “LangChain is Pointless” and “The Problem with LangChain” in July, to teaching with Andrew Ng and keynoting at basically every AI conference this fall (including ours), it has been an extreme rollercoaster for Harrison and his growing team creating one of the most popular (>60k stars at time of writing) building blocks for AI Engineers.

LangChain's Origins

The first commit to LangChain shows its humble origins as a light wrapper around Python's formatter.format for prompt templating. But as Harrison tells the story, even his first experience with text-davinci-002 in early 2022 was focused on chatting with data from their internal company Notion and Slack, what is now known as Retrieval Augmented Generation (RAG). As the Generative AI meetup scene came to life post Stable Diffusion, Harrison saw a need for common abstractions for what people were building with text LLMs at the time:

* LLM Math, aka Riley Goodside's “You Can't Do Math” REPL-in-the-loop (PR #8)
* Self-Ask With Search, Ofir Press' agent pattern (PR #9) (later ReAct, PR #24)
* NatBot, Nat Friedman's browser controlling agent (PR #18)
* Adapters for OpenAI, Cohere, and HuggingFaceHub

All this was built and launched in a few days from Oct 16-25, 2022. 
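That "light wrapper around `format`" origin is easy to picture. Here's a minimal sketch of that kind of prompt template, illustrative only - the class and method names below mirror the idea, not LangChain's actual source:

```python
# A minimal prompt-template sketch in the spirit of LangChain's first commit:
# a thin wrapper over Python's built-in str.format. Hypothetical/illustrative
# code, not LangChain's real implementation.
class PromptTemplate:
    def __init__(self, template: str, input_variables: list[str]):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs) -> str:
        # Fail loudly if a declared variable is missing, as a real
        # templating layer would.
        missing = [v for v in self.input_variables if v not in kwargs]
        if missing:
            raise KeyError(f"Missing prompt variables: {missing}")
        return self.template.format(**kwargs)

prompt = PromptTemplate(
    template="Answer using the context.\nContext: {context}\nQuestion: {question}",
    input_variables=["context", "question"],
)
print(prompt.format(context="LangChain launched in Oct 2022.",
                    question="When did LangChain launch?"))
```

The value was never the wrapper itself but the shared interface: once every prompt is a template with declared variables, templates become saveable, shareable, and composable with everything downstream.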
Turning research ideas/exciting usecases into software quickly and often has been in the LangChain DNA from Day 1, and is likely a big driver of LangChain's success - to date it has amassed the largest community of AI Engineers and become the default launch framework for every big name from Nvidia to OpenAI.

Dancing with Giants

But AI Engineering is built atop constantly moving tectonic shifts:

* ChatGPT launched in November (“The Day the AGI Was Born”) and the API released in March. Before the ChatGPT API, OpenAI did not have a chat endpoint. In order to build a chatbot with history, you had to make sure to chain all messages and prompt for completion. LangChain made it easy to do that out of the box, which was a huge driver of usage.
* Today, OpenAI has gone all-in on the chat API and is deprecating the old completions models, essentially baking in the chat pattern as the default way most engineers should interact with LLMs… and reducing (but not eliminating) the value of ConversationChains.
* And there have been more updates since: Plugins released in API form as Functions in June (one of our top pods ever… reducing but not eliminating the value of OutputParsers) and Finetuning in August (arguably reducing some need for Retrieval and Prompt tooling).

With each update, OpenAI and other frontier model labs realign the roadmaps of this nascent industry, and Harrison credits the modular design of LangChain for staying relevant. LangChain has not been merely responsive either: LangChain added Agents in November, well before they became the hottest topic of the AI Summer, and now Agents feature as one of LangChain's top two usecases.

LangChain's problem for podcasters and newcomers alike is its sheer scope - it is the world's most complete AI framework, but it also has a sprawling surface area that is difficult to fully grasp or document in one sitting. 
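The pre-chat-API pattern mentioned above - chaining all prior messages into a single completion prompt - can be sketched by hand. This is a hypothetical illustration of the general technique, not LangChain's actual ConversationChain code:

```python
# Before chat endpoints existed, a "chatbot with history" meant flattening
# every prior turn into one completion prompt for the model to continue.
# Hand-rolled sketch of that pattern (illustrative, not LangChain's API).
def build_completion_prompt(history: list[tuple[str, str]], user_input: str) -> str:
    lines = ["The following is a conversation between a Human and an AI."]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Human: {user_input}")
    lines.append("AI:")  # the completion model continues from this cue
    return "\n".join(lines)

history = [("Human", "Hi there!"), ("AI", "Hello! How can I help?")]
print(build_completion_prompt(history, "What is LangChain?"))
```

Doing this correctly (truncating old turns to fit the context window, keeping speaker labels consistent) is exactly the kind of boilerplate LangChain handled out of the box, and exactly the kind of value a native chat endpoint later absorbed.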
This means it's time for the trademark Latent Space move (ChatGPT, GPT4, Auto-GPT, and Code Interpreter Advanced Data Analysis GPT4.5): the executive summary!

What is LangChain?

As Harrison explains, LangChain is an open source framework for building context-aware reasoning applications, available in Python and JS/TS. It launched in Oct 2022 with the central value proposition of “composability”, aka the idea that every AI engineer will want to switch LLMs, and combine LLMs with other things into “chains”, using a flexible interface that can be saved via a schema.

Today, LangChain's principal offerings can be grouped as:

* Components: isolated modules/abstractions
  * Model I/O
    * Models (for LLM/Chat/Embeddings, from OpenAI, Anthropic, Cohere, etc)
    * Prompts (Templates, ExampleSelectors, OutputParsers)
  * Retrieval (revised and reintroduced in March)
    * Document Loaders (eg from CSV, JSON, Markdown, PDF)
    * Text Splitters (15+ various strategies for chunking text to fit token limits)
    * Retrievers (generic interface for turning an unstructured query into a set of documents - for self-querying, contextual compression, ensembling)
    * Vector Stores (retrievers that search by similarity of embeddings)
    * Indexers (sync documents from any source into a vector store without duplication)
  * Memory (for long running chats, whether a simple Buffer, Knowledge Graph, Summary, or Vector Store)
* Use-Cases: compositions of Components
  * Chains: combining a PromptTemplate, LLM Model and optional OutputParser
    * with Router, Sequential, and Transform Chains for advanced usecases
    * savable, sharable schemas that can be loaded from LangChainHub
  * Agents: a chain that has access to a suite of tools, of nondeterministic length because the LLM is used as a reasoning engine to determine which actions to take and in which order. Notable 100LOC explainer here.
    * Tools (interfaces that an agent can use to interact with the world - preset list here. Includes things like ChatGPT plugins, Google Search, WolframAlpha. 
Groups of tools are bundled up as toolkits)
    * AgentExecutor (the agent runtime, basically the while loop, with support for controls, timeouts, memory sharing, etc)

LangChain has also added a Callbacks system for instrumenting each stage of LLM, Chain, and Agent calls (which enables LangSmith, LangChain's first cloud product), and most recently an Expression Language, a declarative way to compose chains.

LangChain the company incorporated in January 2023, announced their seed round in April, and launched LangSmith in July. At time of writing, the company has 93k followers, their Discord has 31k members and their weekly webinars are attended by thousands of people live.

The full-featuredness of LangChain means it is often the first starting point for building any mainstream LLM use case, because they are most likely to have working guides for the new developer. Logan (our first guest!) from OpenAI has been a notable fan of both LangChain and LangSmith (they will be running the first LangChain + OpenAI workshop at AI Eng Summit). However, LangChain is not without its critics, with Aravind Srinivas, Jim Fan, Max Woolf, Mckay Wrigley and the general Reddit/HN community describing frustrations with the value of their abstractions, and many are attempting to write their own (the common experience of adding and then removing LangChain is something we covered in our Agents writeup). Harrison compares this with the timeless ORM debate on the value of abstractions.

LangSmith

Last month, Harrison launched LangSmith, their LLM observability tool and first cloud product. LangSmith makes it easy to monitor all the different primitives that LangChain offers (agents, chains, LLMs) as well as making it easy to share and evaluate them both through heuristics (i.e. manually written ones) and “LLM evaluating LLM” flows. 
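The AgentExecutor being "basically the while loop" maps to a familiar pattern: the LLM picks an action, the runtime executes the matching tool, and the loop repeats until the model emits a final answer. A toy sketch of that loop, with a deterministic stub standing in for the LLM (purely illustrative - none of these names are LangChain's real classes):

```python
# Toy sketch of the agent loop described above. "stub_llm" is a hypothetical,
# deterministic stand-in for a real model call; a real agent would prompt an
# LLM with the question plus the scratchpad of prior tool observations.
def stub_llm(scratchpad: list[str]) -> str:
    if not scratchpad:
        return "ACTION calculator 6*7"   # model decides to use a tool
    return "FINAL The answer is 42."     # model has enough to answer

# Tools: named interfaces the agent can invoke against the world.
tools = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_agent(max_steps: int = 5) -> str:
    scratchpad: list[str] = []
    # The AgentExecutor-style while loop, capped like a timeout control.
    for _ in range(max_steps):
        decision = stub_llm(scratchpad)
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL ").strip()
        _, tool_name, tool_input = decision.split(" ", 2)
        observation = tools[tool_name](tool_input)
        scratchpad.append(f"{tool_name}({tool_input}) -> {observation}")
    return "Gave up after max_steps."

print(run_agent())  # -> The answer is 42.
```

The nondeterministic length the summary mentions falls out of this shape: the loop runs until the reasoning engine decides it is done, which is why runtimes need the controls and timeouts noted above.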
The top HN comment in the “LangChain is Pointless” thread observed that orchestration is the smallest part of the work, and the bulk of it is prompt tuning and data serialization. When asked this directly on our pod, Harrison agreed:

“I agree that those are big pain points that get exacerbated when you have these complex chains and agents where you can't really see what's going on inside of them. And I think that's partially why we built Langsmith…” (48min mark)

You can watch the full launch on the LangChain YouTube:

It's clear that the target audience for LangChain is expanding to folks who are building complex, production applications rather than focusing on the simpler “Q&A your docs” use cases that made it popular in the first place. As the AI Engineer space matures, there will be more and more tools graduating from supporting “hobby” projects to more enterprise-y use cases. In this episode we run through some of the history of LangChain, how it's growing from an open source project to one of the highest valued AI startups out there, and its future. 
We hope you enjoy it!

Show Notes
* LangChain
* LangChain's Berkshire Hathaway Homepage
* Abstractions tweet
* LangSmith
* LangSmith Cookbooks repo
* LangChain Retrieval blog
* Evaluating CSV Question/Answering blog and YouTube
* MultiOn Partner blog
* Harvard Sports Analytics Collective
* Evaluating RAG Webinar
* awesome-langchain:
  * LLM Math Chain
  * Self-Ask
  * LangChain Hub UI
* “LangChain is Pointless”
* Harrison's links
  * sports - estimating player compatibility in the NBA
  * early interest in prompt injections
  * GitHub
  * Twitter

Timestamps
* [00:00:00] Introduction
* [00:00:48] Harrison's background and how sports led him into ML
* [00:04:54] The inspiration for creating LangChain - abstracting common patterns seen in other GPT-3 projects
* [00:05:51] Overview of LangChain - a framework for building context-aware reasoning applications
* [00:10:09] Components of LangChain - modules, chains, agents, etc.
* [00:14:39] Underappreciated parts of LangChain - text splitters, retrieval algorithms like self-query
* [00:18:46] Hiring at LangChain
* [00:20:27] Designing the LangChain architecture - balancing flexibility and structure
* [00:24:09] The difference between chains and agents in LangChain
* [00:25:08] Prompt engineering and LangChain
* [00:26:16] Announcing LangSmith
* [00:30:50] Writing custom evaluators in LangSmith
* [00:33:19] Reducing hallucinations - fixing retrieval vs generation issues
* [00:38:17] The challenges of long context windows
* [00:40:01] LangChain's multi-programming language strategy
* [00:45:55] Most popular LangChain blog posts - deep dives into specific topics
* [00:50:25] Responding to LangChain criticisms
* [00:54:11] Harrison's advice to AI engineers
* [00:55:43] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]

Swyx: Welcome. Today we have Harrison Chase in the studio with us. Welcome Harrison. 
[00:00:23]Harrison: Thank you guys for having me. I'm excited to be here. [00:00:25]Swyx: It's been a long time coming. We've been asking you for a little bit and we're really glad that you got some time to join us in the studio. Yeah. [00:00:32]Harrison: I've been dodging you guys for a while. [00:00:34]Swyx: About seven months. You pulled me in here. [00:00:37]Alessio: About seven months. But it's all good. I totally understand. [00:00:38]Swyx: We like to introduce people through the official backgrounds and then ask you a little bit about your personal side. So you went to Harvard, class of 2017. You don't list what you did in Harvard. Was it CS? [00:00:48]Harrison: Stats and CS. [00:00:50]Swyx: That's awesome. I love me some good stats. [00:00:52]Harrison: I got into it through stats, through doing sports analytics. And then there was so much overlap between stats and CS that I found myself doing more and more of that. [00:00:59]Swyx: And it's interesting that a lot of the math that you learn in stats actually comes over into machine learning which you applied at Kensho as a machine learning engineer and Robust Intelligence, which seems to be the home of a lot of AI founders.Harrison: It does. Yeah. Swyx: And you started LangChain, I think around November 2022 and incorporated in January. Yeah. [00:01:19]Harrison: I was looking it up for the podcast and the first tweet was on, I think October 24th. So just before the end of November or end of October. [00:01:26]Swyx: Yeah. So that's your LinkedIn. What should people know about you on the personal side that's not obvious on LinkedIn? [00:01:33]Harrison: A lot of how I got into this is all through sports actually. Like I'm a big sports fan, played a lot of soccer growing up and then really big fan of the NBA and NFL. And so freshman year at college showed up and I knew I liked math. I knew I liked sports. One of the clubs that was there was the Sports Analytics Collective. 
And so I joined that freshman year, I was doing a lot of stuff in like Excel, just like basic stats, but then like wanted to do more advanced stuff. So learn to code, learn kind of like data science and machine learning through that way. Kind of like just kept on going down that path. I think sports is a great entryway to data science and machine learning. There's a lot of like numbers out there. People like really care. Like I remember, I think sophomore, junior year, I was in the Sports Collective and the main thing we had was a blog. And so we wrote a blog. It wasn't me. One of the other people in the club wrote a blog predicting the NFL season. I think they made some kind of like with stats and I think their stats showed that like the Dolphins would end up beating the Patriots and New England got like pissed about it, of course. So people like really care and they'll give you feedback about whether you're like models doing well or poorly. And so you get that. And then you also get like instantaneous kind of like, well, not instantaneous, but really quick feedback. Like if you predict a game, the game happens that night. Like you don't have to wait a year to see what happens. So I think sports is a great kind of like entryway for kind of like data science. [00:02:43]Alessio: There was actually my first article on the Twilio blog with a Python script to like predict pricing of like Daily Fantasy players based on my past week performance. Yeah, I don't know. It's a good getaway drug. [00:02:56]Swyx: And on my end, the way I got into finance was through sports betting. So maybe we all have some ties in there. Was like Moneyball a big inspiration? The movie? [00:03:06]Harrison: Honestly, not really. I don't really like baseball. That's like the big thing. [00:03:10]Swyx: Let's call it a lot of stats. Cool. Well, we can dive right into LangChain, which is what everyone is excited about. But feel free to make all the sports analogies you want. 
That really drives home a lot of points. What was your GPT aha moment? When did you start working on GPT itself? Maybe not LangChain, just anything to do with the GPT API? [00:03:29]Harrison: I think it probably started around the time we had a company hackathon. I think that was before I launched LangChain. I'm trying to remember the exact sequence of events, but I do remember that at the hackathon I worked with Will, who's now actually at LangChain as well, and then two other members of Robust. And we made basically a bot where you could ask questions of Notion and Slack. And so I think, yeah, RAG, basically. And I think I wanted to try that out because I'd heard that it was getting good, so I would focus on that at the hackathon. I'm trying to remember if I did anything before that to realize that it was good. I can't remember if I did or not, but that was one of the first times that I built something [00:04:06]Swyx: with GPT-3. There wasn't that much opportunity before because the API access wasn't that widespread. You had to get into some kind of program to get that. [00:04:16]Harrison: DaVinci-002 was not terrible, but they did an upgrade to get it to there, and they didn't really publicize that as much. And so I think I remember playing around with it when the first DaVinci model came out. I was like, this is cool, but it's not amazing. You'd have to do a lot of work to get it to do something. But then I think in February or so of 2022, they upgraded it and it got better, but I think they made less of an announcement around it. And so it kind of slipped under the radar for me, at least. [00:04:45]Alessio: And what was the step into LangChain? So you did the hackathon, and then as you were building the kind of RAG product, you felt like the developer experience wasn't that great? Or what was the inspiration? [00:04:54]Harrison: No, honestly, so around that time, I knew I was going to leave my previous job.
I was trying to figure out what I was going to do next. I went to a bunch of meetups and other events. This was like the August, September of that year. So after Stable Diffusion, but before ChatGPT. So there was interest in generative AI as a space, but not a lot of people hacking on language models yet. But there were definitely some. And so I would go to these meetups and just chat with people and basically saw some common abstractions in terms of what they were building, and then thought it would be a cool side project to factor out some of those common abstractions. And that became kind of like LangChain. I looked up again before this, because I remember I did a tweet thread on Twitter to announce LangChain. And we can talk about what LangChain is. It's a series of components. And then there's some end-to-end modules. And there were three end-to-end modules in the initial release. One was NatBot. So this was the web agent by Nat Friedman. Another was LLM Math Chain. So it would construct- [00:05:51]Swyx: GPT-3 cannot do math. [00:05:53]Harrison: Yeah, exactly. And then the third was Self-Ask. So some type of RAG search, similar to a ReAct-style agent. So those were some of the patterns in terms of what I was seeing. And those all came from open source or academic examples, because the people who were actually working on this were building startups. And they were doing things like question answering over your databases, question answering over SQL, things like that. But I couldn't use their code as kind of like inspiration to factor things out. [00:06:18]Swyx: I talked to you a little bit, actually, roundabout, right after you announced LangChain. I'm honored. I think I'm one of many. This is your first open source project. [00:06:26]Harrison: No, that's not actually true. I released, because I like sports stats. And so I remember I did release some really small, random Python package for scraping data from Basketball Reference or something.
I'm pretty sure I released that. So first project to get a star on GitHub, let's say that. [00:06:45]Swyx: Did you reference anything? What were the inspirations, like other frameworks that you looked to when open sourcing LangChain or announcing it or anything like that? [00:06:53]Harrison: I mean, the only main thing that I looked for... I remember reading a Hacker News post a little bit before about how a readme on the project goes a long way. [00:07:02]Swyx: Readmes help. [00:07:03]Harrison: Yeah. And so I looked at it and was like, put some status checks at the top and have the title and then one or two lines and then just right into installation. And so that's the main thing that I looked at in terms of how to structure it. Because yeah, I hadn't done open source before. I didn't really know how to communicate that aspect of the marketing or getting people to use it. I think I had some trouble finding it, but I finally found it and used that as a lot of the inspiration there. [00:07:25]Swyx: Yeah. It was one of the subjects of my write-up, how it was surprising to me that significant open source experience actually didn't seem to matter in the new wave of AI tooling. Like Auto-GPT's Toran, that was his first open source project ever. And that became Auto-GPT. Yeah. I don't know. To me, it's just interesting how open source experience is kind of fungible or not necessary. Or you can kind of learn it on the job. [00:07:49]Alessio: Overvalued. [00:07:50]Swyx: Overvalued. Okay. You said it, not me. [00:07:53]Alessio: What's your description of LangChain today? I think when I built the LangChain Hub UI in January, there were a few things. And I think you were one of the first people to talk about agents that were already in there before it got hot now. And it's obviously evolved into a much bigger framework today. Run people through what LangChain is today, how they should think about it, and all of that.
[00:08:14]Harrison: The way that we describe it or think about it internally is that LangChain is basically... I started off saying LangChain's a framework for building LLM applications, but that's really vague and not really specific. And I think part of the issue is LangChain does do a lot, so it's hard to be somewhat specific. But I think the way that we think about it internally, in terms of prioritization, what to focus on, is basically LangChain's a framework for building context-aware reasoning applications. And so that's a bit of a mouthful, but I think that speaks to a lot of the core parts of what's in LangChain. And so, concretely, what that means in LangChain is really two things. One is a set of components and modules. And these would be the prompt template abstraction, the LLM abstraction, chat model abstraction, vector store abstraction, text splitters, document loaders. And so these are combinations of things that we build and we implement, or we just have integrations with. So we don't have any language models ourselves. We don't have any vector stores ourselves, but we integrate with a lot of them. And then the text splitters, we have our own logic for that. The document loaders, we have our own logic for that. And so those are the individual modules. But then I think another big part of LangChain, and probably the part that got people using it the most, is the end-to-end chains or applications. So we have a lot of chains for getting started with question answering over your documents, chat question answering, question answering over SQL databases, agent stuff that you can plug in out of the box. And that basically combines these components in a series of specific ways to do this. So if you think about a question answering app, you need a lot of different components kind of stacked. And there's a bunch of different ways to do question answering apps.
So this is a bit of an overgeneralization, but basically, you know, you have some component that looks up an embedding from a vector store, and then you put that into the prompt template with the question and the context, and maybe you have the chat history as well. And then that generates an answer, and then maybe you parse that out, or you do something with the answer there. And so there's just this sequence of things that you basically stack in a particular way. And so we just provide a bunch of those assembled chains off the shelf to make it really easy to get started in a few lines of code. [00:10:09]Alessio: And just to give people context, when you first released LangChain, OpenAI did not have a chat API. It was a completion-only API. So you had to do all the human assistant, like prompting and whatnot. So you abstracted a lot of that away. I think the most interesting thing to me is you're kind of the Switzerland of this developer land. There's a bunch of vector databases that are killing each other out there to get people to embed data in them, and you're like, I love you all. You all are great. How do you think about being an opinionated framework versus leaving a lot of choice to the user? I mean, in terms of spending time into this integration, it's like you only have 10 people on the team. Obviously that takes time. Yeah. What's that process like for you all? [00:10:50]Harrison: I think right off the bat, having different options for language models. I mean, language models is the main one that right off the bat we knew we wanted to support a bunch of different options for. There's a lot to discuss there. People want optionality between different language models. They want to try it out. They want to maybe change to ones that are cheaper as new ones kind of emerge. They don't want to get stuck into one particular one if a better one comes out. There's some challenges there as well. Prompts don't really transfer. And so there's a lot of nuance there. 
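The question-answering stack Harrison walks through (look up an embedding from a vector store, fill a prompt template with question and context, generate, parse) can be sketched in plain Python. This is an illustrative toy, not the actual LangChain API: the embedding, vector store, and LLM here are all stand-in stubs.

```python
# Toy sketch of the stacked QA chain described above:
# retrieve context -> fill prompt template -> call model -> parse output.

def embed(text: str) -> list[float]:
    # Toy embedding: character-frequency vector (real apps use a model).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def similarity(a, b):
    # Dot product as a stand-in similarity score.
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    def __init__(self, docs):
        self.docs = [(d, embed(d)) for d in docs]

    def lookup(self, query: str, k: int = 1):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: similarity(d[1], q), reverse=True)
        return [doc for doc, _ in ranked[:k]]

PROMPT_TEMPLATE = "Context: {context}\nQuestion: {question}\nAnswer:"

def qa_chain(question, store, llm):
    context = "\n".join(store.lookup(question))                       # retrieval step
    prompt = PROMPT_TEMPLATE.format(context=context, question=question)
    answer = llm(prompt)                                              # generation step
    return answer.strip()                                             # output-parsing step

store = VectorStore(["LangChain was announced in October 2022.",
                     "Paris is the capital of France."])
# Fake LLM stub so the sketch runs without any API.
fake_llm = lambda prompt: " LangChain was announced in October 2022. "
print(qa_chain("When was LangChain announced?", store, fake_llm))
```

The point of the off-the-shelf chains is exactly that this sequencing is boilerplate: each step is interchangeable, so the framework pre-assembles them.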
But from the bat, having this optionality between the language model providers was a big important part because I think that was just something we felt really strongly about. We believe there's not just going to be one model that rules them all. There's going to be a bunch of different models that are good for a bunch of different use cases. I did not anticipate the number of vector stores that would emerge. I don't know how many we supported in the initial release. It probably wasn't as big of a focus as language models was. But I think it kind of quickly became so, especially when Postgres and Elastic and Redis started building their vector store implementations. We saw that some people might not want to use a dedicated vector store. Maybe they want to use traditional databases. I think to your point around what we're opinionated about, I think the thing that we believe most strongly is it's super early in the space and super fast moving. And so there's a lot of uncertainty about how things will shake out in terms of what role will vector databases play? How many will there be? And so I think a lot of it has always kind of been this optionality and ability to switch and not getting locked in. [00:12:19]Swyx: There's other pieces of LangChain which maybe don't get as much attention sometimes. And the way that you explained LangChain is somewhat different from the docs. I don't know how to square this. So for example, you have at the top level in your docs, you have, we mentioned ModelIO, we mentioned Retrieval, we mentioned Chains. Then you have a concept called Agents, which I don't know if exactly matches what other people call Agents. And we also talked about Memory. And then finally there's Callbacks. Are there any of the less understood concepts in LangChain that you want to give some air to? [00:12:53]Harrison: I mean, I think buried in ModelIO is some stuff around like few-shot example selectors that I think is really powerful. That's a workhorse. 
[00:13:01]Swyx: Yeah. I think that's where I start with LangChain. [00:13:04]Harrison: It's one of those things that you probably don't, if you're building an application, you probably don't start with it. You probably start with like a zero-shot prompt. But I think that's a really powerful one that's probably just talked about less because you don't need it right off the bat. And for those of you who don't know, that basically selects from a bunch of examples the ones that are maybe most relevant to the input at hand. So you can do some nice kind of like in-context learning there. I think that's, we've had that for a while. I don't think enough people use that, basically. Output parsers also used to be kind of important, but then function calling came out. There's this interesting thing where like the space is just like progressing so rapidly that a lot of things that were really important have kind of diminished a bit, to be honest. Output parsers definitely used to be an understated and underappreciated part. And I think if you're working with non-OpenAI models, they still are, but a lot of people are working with OpenAI models. But even within there, there's different things you can do with kind of like the function calling ability. Sometimes you want to have the option of having the text or the application you're building, it could return either. Sometimes you know that it wants to return in a structured format, and so you just want to take that structured format. Other times you're extracting things that are maybe a key in that structured format, and so you want to like pluck that key. And so there's just like some like annoying kind of like parsing of that to do. Agents, memory, and retrieval, we haven't talked about at all. Retrieval, there's like five different subcomponents. You could also probably talk about all of those in depth. You've got the document loaders, the text splitters, the embedding models, the vector stores.
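The few-shot example selector Harrison describes, picking the stored examples most relevant to the input at hand, can be sketched like this. It's a toy using word overlap as the relevance score; LangChain's real selectors use embeddings or other strategies, and the example set here is invented for illustration.

```python
# Sketch of a few-shot example selector: given a pool of stored
# examples, pick the k most relevant to the current input and
# format them into the prompt for in-context learning.

def word_overlap(a: str, b: str) -> int:
    # Toy relevance score: count of shared words.
    return len(set(a.lower().split()) & set(b.lower().split()))

def select_examples(examples, user_input, k=2):
    """Pick the k stored examples most relevant to the input."""
    return sorted(examples,
                  key=lambda ex: word_overlap(ex["input"], user_input),
                  reverse=True)[:k]

examples = [
    {"input": "translate cat to French", "output": "chat"},
    {"input": "translate dog to French", "output": "chien"},
    {"input": "what is 2 + 2", "output": "4"},
]

chosen = select_examples(examples, "translate bird to French", k=2)
prompt = "\n".join(f"Q: {ex['input']}\nA: {ex['output']}" for ex in chosen)
print(prompt)  # only the two translation examples survive selection
```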
Embedding models and vector stores, we don't really have, or sorry, we don't build, we integrate with those. Text splitters, I think we have like 15 or so. Like I think there's an under kind of like appreciated amount of those. [00:14:39]Swyx: And then... Well, it's actually, honestly, it's overwhelming. Nobody knows what to choose. [00:14:43]Harrison: Yeah, there is a lot. [00:14:44]Swyx: Yeah. Do you have personal favorites that you want to shout out? [00:14:47]Harrison: The one that we have in the docs is the default is like the recursive text splitter. We added a playground for text splitters the other week because, yeah, we heard a lot that like, you know, and like these affect things like the chunk overlap and the chunks, they affect things in really subtle ways. And so like I think we added a playground where people could just like choose different options. We have like, and a lot of the ideas are really similar. You split on different characters, depending on kind of like the type of text that you have marked down, you might want to split on differently than HTML. And so we added a playground where you can kind of like choose between those. I don't know if those are like underappreciated though, because I think a lot of people talk about text splitting as being a hard part, and it is a really important part of creating these retrieval applications. But I think we have a lot of really cool retrieval algorithms as well. So like self query is maybe one of my favorite things in LangChain, which is basically this idea of when you have a user question, the typical kind of like thing to do is you embed that question and then find the document that's most similar to that question. But oftentimes questions have things that just, you don't really want to look up semantically, they have some other meaning. So like in the example that I use, the example in the docs is like movies about aliens in the year 1980. 
1980, I guess there's some semantic meaning for that, but it's a very particular thing that you care about. And so what the self query retriever does is it splits out the metadata filter and most vector stores support like a metadata filter. So it splits out this metadata filter, and then it splits out the semantic bit. And that's actually like kind of tricky to do because there's a lot of different filters that you can have like greater than, less than, equal to, you can have and things if you have multiple filters. So we have like a pretty complicated like prompt that does all that. That might be one of my favorite things in LangChain, period. Like I think that's, yeah, I think that's really cool. [00:16:26]Alessio: How do you think about speed of development versus support of existing things? So we mentioned retrieval, like you got, or, you know, text splitting, you got like different options for all of them. As you get building LangChain, how do you decide which ones are not going to keep supporting, you know, which ones are going to leave behind? I think right now, as you said, the space moves so quickly that like you don't even know who's using what. What's that like for you? [00:16:50]Harrison: Yeah. I mean, we have, you know, we don't really have telemetry on what people are using in terms of what parts of LangChain, the telemetry we have is like, you know, anecdotal stuff when people ask or have issues with things. A lot of it also is like, I think we definitely prioritize kind of like keeping up with the stuff that comes out. I think we added function calling, like the day it came out or the day after it came out, we added chat model support, like the day after it came out or something like that. That's probably, I think I'm really proud of how the team has kind of like kept up with that because this space is like exhausting sometimes. And so that's probably, that's a big focus of ours. 
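The self-query idea above, split a structured metadata filter out of the question and keep the rest for semantic search, can be sketched with a toy. The real self-query retriever uses an LLM prompt to produce filters like greater-than, less-than, equal-to, and conjunctions; the regex below is a stand-in for that step, and the filter shape is invented for illustration.

```python
import re

# Toy self-query step: pull a year out of the question as a metadata
# filter, and keep the remaining text as the semantic query.

def self_query(question):
    match = re.search(r"\b(19|20)\d{2}\b", question)
    metadata_filter = {"year": {"eq": int(match.group())}} if match else {}
    semantic_query = re.sub(r"\b(19|20)\d{2}\b", "", question)
    semantic_query = re.sub(r"\s+", " ", semantic_query).strip(" ?.")
    return semantic_query, metadata_filter

query, flt = self_query("movies about aliens in the year 1980")
print(query)  # -> movies about aliens in the year
print(flt)    # -> {'year': {'eq': 1980}}
```

The semantic part then goes to the vector store as usual, while the filter is passed to the store's metadata-filter support, so "1980" is matched exactly instead of approximately.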
The support, I think we've like, to be honest, we've had to get kind of creative with how we do that. Cause we have like, I think, I don't know how many open issues we have, but we have like 3000, somewhere between 2000 and 3000, like open GitHub issues. We've experimented with a lot of startups that are doing kind of like question answering over your docs and stuff like that. And so we've got them on the website and in the discord and there's a really good one, dosu on the GitHub that's like answering issues and stuff like that. And that's actually something we want to start leaning into more heavily as a company as well as kind of like building out an AI dev rel because we're 10 people now, 10, 11 people now. And like two months ago we were like six or something like that. Right. So like, and to have like 2,500 open issues or something like that, and like 300 or 400 PRs as well. Cause like one of the amazing things is that like, and you kind of alluded to this earlier, everyone's building in the space. There's so many different like touch points. LangChain is lucky enough to kind of like be a lot of the glue that connects it. And so we get to work with a lot of awesome companies, but that's also a lot of like work to keep up with as well. And so I don't really have an amazing answer, but I think like the, I think prioritize kind of like new things that, that come out. And then we've gotten creative with some of kind of like the support functions and, and luckily there's, you know, there's a lot of awesome people working on all those support coding, question answering things that we've been able to work with. [00:18:46]Swyx: I think there is your daily rhythm, which I've seen you, you work like a, like a beast man, like mad impressive. And then there's sometimes where you step back and do a little bit of high level, like 50,000 foot stuff. So we mentioned, we mentioned retrieval. 
You did a refactor in March and there's, there's other abstractions that you've sort of changed your mind on. When do you do that? When do you do like the, the step back from the day to day and go, where are we going and change the direction of the ship? [00:19:11]Harrison: It's a good question so far. It's probably been, you know, we see three or four or five things pop up that are enough to make us think about it. And then kind of like when it reaches that level, you know, we don't have like a monthly meeting where we sit down and do like a monthly plan or something. [00:19:27]Swyx: Maybe we should. I've thought about this. Yeah. I'd love to host that meeting. [00:19:32]Harrison: It's really been a lot of, you know, one of the amazing things is we get to interact with so many different people. So it's been a lot of kind of like just pattern matching on what people are doing and trying to see those patterns before they punch us in the face or something like that. So for retrieval, it was the pattern of seeing like, Hey, yeah, like a lot of people are using vector sort of stuff. But there's also just like other methods and people are offering like hosted solutions and we want our abstractions to work with that as well. So we shouldn't bake in this paradigm of doing like semantic search too heavily, which sounds like basic now, but I think like, you know, to start a lot of it was people needed help doing these things. But then there was like managed things that did them, hybrid retrieval mechanisms, all of that. I think another example of this, I mean, Langsmith, which we can maybe talk about was like very kind of like, I think we worked on that for like three or four months before announcing it kind of like publicly, two months maybe before giving it to kind of like anyone in beta. But this was a lot of debugging these applications as a pain point. We hear that like just understanding what's going on is a pain point. 
[00:20:27]Alessio: I mean, you two did a webinar on this, which is called Agents vs. Chains. It was fun, baby. [00:20:32]Swyx: Thanks for having me on. [00:20:33]Harrison: No, thanks for coming. [00:20:34]Alessio: That was a good one. And on the website, you list RAG, which is retrieval-augmented generation, and agents as two of the main goals of LangChain. The difference, I think you said at the Databricks keynote, is that chains are like predetermined steps and agents are models reasoning to figure out what steps to take and what actions to take. How should people think about when to use the two and how do you transition from one to the other with LangChain? Like is it a path that you support or like do people usually re-implement from an agent to a chain or vice versa? [00:21:05]Swyx: Yeah. [00:21:06]Harrison: You know, I know agent is probably an overloaded term at this point, and so there's probably a lot of different definitions out there. But yeah, as you said, kind of like the way that I think about an agent is basically like in a chain, you have a sequence of steps. You do this and then you do this and then you do this and then you do this. And with an agent, there's some aspect of it where the LLM is kind of like deciding what to do and what steps to do in what order. And you know, there's probably some like gray area in the middle, but you know, don't fight me on this. And so if we think about those, like the benefits of the chains are that they're like, you can say do this and you just have like a more rigid kind of like order and the way that things are done. They have more control and they don't go off the rails and basically everything that's bad about agents in terms of being uncontrollable and expensive, you can control more finely. The benefit of agents is that I think they handle like the long tail of things that can happen really well. And so for an example of this, let's maybe think about like interacting with a SQL database.
So you can have like a SQL chain and you know, the first kind of like naive approach at a SQL chain would be like, okay, you have the user question. And then you like write the SQL query, you do some RAG, you pull in the relevant tables and schemas, you write a SQL query, you execute that against the SQL database. And then you like return that as the answer, or you like summarize that with an LLM and return that as the answer. And that's basically the SQL chain that we have in LangChain. But there's a lot of things that can go wrong in that process. Starting from the beginning, you may like not want to even query the SQL database at all. Maybe they're saying like, hi, or something, or they're misusing the application. Then like, what happens at each step? A big part of the applications people build with LangChain is like the context-aware part. So there's generally some part of bringing in context to the language model. So if you bring in the wrong context to the language model, so it doesn't know which tables to query, what do you do then? If you write a SQL query, it's like syntactically wrong and it can't run. And then if it can run, like what if it returns an unexpected result or something? And so basically what we do with the SQL agent is we give it access to all these different tools. So it has writing the SQL query as one tool, it can run the SQL query as another, and then it can respond to the user. And it can decide which order to do these in. And so it gives it flexibility to handle all these edge cases. And there's like, obviously downsides to that as well. And so there's probably like some safeguards you want to put in place around agents in terms of like not letting them run forever, having some observability in there. But I do think there's this benefit of, you know, like, again, to the other part of what LangChain is like the reasoning part, like each of those steps individually involves some aspect of reasoning, for sure.
Like you need to reason about what the SQL query is, you need to reason about what to return. But there's then there's also reasoning about the order of operations. And so I think to me, the key is kind of like giving it an appropriate amount to reason about while still keeping it within checks. And so to the point, like, I would probably recommend that most people get started with chains and then when they get to the point where they're hitting these edge cases, then they think about, okay, I'm hitting a bunch of edge cases where the SQL query is just not returning like the relevant things. Maybe I should add in some step there and let it maybe make multiple queries or something like that. Basically, like start with chain, figure out when you're hitting these edge cases, add in the reasoning step to that to handle those edge cases appropriately. That would be kind of like my recommendation, right? [00:24:09]Swyx: If I were to rephrase it, in my words, an agent would be a reasoning node in a chain, right? Like you start with a chain, then you just add a reasoning node, now it's an agent. [00:24:17]Harrison: Yeah, the architecture for your application doesn't have to be just a chain or just an agent. It can be an agent that calls chains, it can be a chain that has an agent in different parts of them. And this is another part as well. Like the chains in LangChain are largely intended as kind of like a way to get started and take you some amount of the way. But for your specific use case, in order to kind of like eke out the most performance, you're probably going to want to do some customization at the very basic level, like probably around the prompt or something like that. And so one of the things that we've focused on recently is like making it easier to customize these bits of existing architectures. But you probably also want to customize your architectures as well. [00:24:52]Swyx: You mentioned a bit of prompt engineering for self-ask and then for this stuff. 
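The chain-versus-agent distinction above can be put side by side in code: a chain runs a fixed sequence of steps, while an agent loop lets the model pick the next tool until it decides to answer, with a safety cap so it can't run forever. Everything here is a stub for illustration; the tool names and the scripted decision function stand in for an LLM.

```python
# A chain: a predetermined sequence of steps, always run in order.
def run_chain(question, steps):
    result = question
    for step in steps:
        result = step(result)
    return result

# An agent: the "model" (here a scripted stub) decides what to do next,
# bounded by max_steps as a safeguard against running forever.
def run_agent(question, tools, decide, max_steps=5):
    observation = question
    for _ in range(max_steps):
        action = decide(observation)
        if action == "finish":
            return observation
        observation = tools[action](observation)
    return observation

write_sql = lambda q: f"SELECT * FROM t /* for: {q} */"
execute   = lambda sql: "3 rows"

# Chain: always write SQL, then execute, even for a greeting.
print(run_chain("how many users?", [write_sql, execute]))   # -> 3 rows

# Agent: the scripted "LLM" skips the SQL tools entirely for a greeting,
# which is exactly the edge case a rigid chain cannot handle.
decisions = iter(["finish"])
print(run_agent("hi!", {"sql": write_sql, "run": execute},
                lambda obs: next(decisions)))               # -> hi!
```

Starting with `run_chain` and swapping in `run_agent` only where the edge cases bite mirrors the recommendation above: add the reasoning step when the fixed order starts failing.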
There's a bunch of, I just talked to a prompt engineering company today, PromptOps or LLMOps. Do you have any advice or thoughts on that field in general? Like are you going to compete with them? Do you have internal tooling that you've built? [00:25:08]Harrison: A lot of what we do is like where we see kind of like a lot of the pain points being like we can talk about LangSmith and that was a big motivation for that. And like, I don't know, would you categorize LangSmith as PromptOps? [00:25:18]Swyx: I don't know. It's whatever you want it to be. Do you want to call it? [00:25:22]Harrison: I don't know either. Like I think like there's... [00:25:24]Swyx: I think about it as like a prompt registry and you store them and you A-B test them and you do that. LangSmith, I feel like doesn't quite go there yet. Yeah. It's obviously the next step. [00:25:34]Harrison: Yeah, we'll probably go. And yeah, we'll do more of that because I think that's definitely part of the application of a chain or agent is you start with a default one, then you improve it over time. And like, I think a lot of the main new thing that we're dealing with here is like language models. And the main new way to control language models is prompts. And so like a lot of the chains and agents are powered by this combination of like prompt language model and then some output parser or something doing something with the output. And so like, yeah, we want to make that core thing as good as possible. And so we'll do stuff all around that for sure. [00:26:05]Swyx: Awesome. We might as well go into LangSmith because we're bringing it up so much. So you announced LangSmith I think last month. What are your visions for it? Is this the future of LangChain and the company? [00:26:16]Harrison: It's definitely part of the future. So LangSmith is basically a control center for kind of like your LLM application. 
So the main features that it kind of has is like debugging, logging, monitoring, and then like testing and evaluation. And so debugging, logging, monitoring, basically you set three environment variables and it kind of like logs all the runs that are happening in your LangChain chains or agents. And it logs kind of like the inputs and outputs at each step. And so the main use case we see for this is in debugging. And that's probably the main reason that we started down this path of building it is I think like as you have these more complex things, debugging what's actually going on becomes really painful whether you're using LangChain or not. And so like adding this type of observability and debuggability was really important. Yeah. There's a debugging aspect. You can see the inputs, outputs at each step. You can then quickly enter into like a playground experience where you can fiddle around with it. The first version didn't have that playground and then we'd see people copy, go to the OpenAI playground, paste in there. Okay. Well, that's a little annoying. And then there's kind of like the monitoring, logging experience. And we recently added some analytics on like, you know, how many requests are you getting per hour, minute, day? What's the feedback like over time? And then there's like a testing, debugging, sorry, testing and evaluation component as well where basically you can create datasets and then test and evaluate these datasets. And I think importantly, all these things are tied to each other and then also into LangChain, the framework. So what I mean by that is like we've tried to make it as easy as possible to go from logs to adding a data point to a dataset. And because we think a really powerful flow is you don't really get started with a dataset. You can accumulate a dataset over time. And so being able to find points that have gotten like a thumbs up or a thumbs down from a user can be really powerful in terms of creating a good dataset.
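The run-logging idea behind the debugging story above can be sketched as a wrapper that records each step's inputs, outputs, and latency, giving you a trace to inspect when a complex chain misbehaves. LangSmith itself is enabled via environment variables and captures much richer data; this stub just shows the shape of what a trace holds.

```python
import time

# Minimal tracing sketch: wrap each step of a chain so its input,
# output, and latency land in a trace you can inspect afterwards.

TRACE = []

def traced(name, fn):
    def wrapper(x):
        start = time.perf_counter()
        out = fn(x)
        TRACE.append({"step": name, "input": x, "output": out,
                      "seconds": round(time.perf_counter() - start, 4)})
        return out
    return wrapper

# Stub steps standing in for a retriever and a model call.
retrieve = traced("retrieve", lambda q: f"docs for {q}")
generate = traced("generate", lambda ctx: f"answer from {ctx}")

generate(retrieve("what is LangChain?"))
for run in TRACE:
    print(run["step"], "->", run["output"])
```

A flow like "find the runs a user thumbed down, then promote them into a test dataset" is just a filter over records of this shape.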
And so that's maybe like a connection between the two. And then the connection in the other way is like all the runs that you have when you test or evaluate something, they're logged in the same way. So you can debug what exactly is going on and you don't just have like a final score. You have like this nice trace and thing where you can jump in. And then we also want to do more things to hook this into LangChain proper, the framework. So I think like some of like the managing of the prompts will tie in here already. Like we talked about example selectors, using datasets as few-shot examples is a path that we support in a somewhat janky way right now, but we're going to like make better over time. And so there's this connection between everything. Yeah. [00:28:42]Alessio: And you mentioned the dataset in the announcement blog post, you touched on heuristic evaluation versus LLMs evaluating LLMs. I think there's a lot of talk and confusion about this online. How should people prioritize the two, especially when they might start with like not a good set of evals or like any data at all? [00:29:01]Harrison: I think it's really use case specific in the distinction that I draw between heuristic and LLM. With LLMs, you're using an LLM to evaluate the output; with heuristics, you have some common heuristic that you can use. And so some of these can be like really simple. So we were doing some kind of like measuring of an extraction chain where we wanted it to output JSON. Okay. One evaluation can be, can you use JSON.loads to load it? And like, right. And that works perfectly. You don't need an LLM to do that. But then for like a lot of like the question answering, like, is this factually accurate? And you have some ground truth fact that you know it should be answering with. I think, you know, LLMs aren't perfect. And I think there's a lot of discussion around the pitfalls of using LLMs to evaluate themselves.
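The JSON heuristic Harrison describes really is this small: a validity check needs no LLM at all, and flagging failures for manual inspection (rather than trusting an aggregate score) matches the workflow he sketches next. The `evaluate` harness and sample outputs here are invented for illustration.

```python
import json

# Heuristic evaluator from the discussion above: pass iff the chain's
# output is parseable JSON. No LLM required.

def heuristic_json_eval(output: str) -> bool:
    try:
        json.loads(output)
        return True
    except (json.JSONDecodeError, TypeError):
        return False

def evaluate(dataset, evaluator):
    results = [{"output": o, "passed": evaluator(o)} for o in dataset]
    # Surface the failures so a human can inspect a handful of data
    # points instead of trusting a single aggregate score.
    failures = [r for r in results if not r["passed"]]
    return results, failures

outputs = ['{"name": "Ada"}',
           'Sure! Here is the JSON: {"name": "Ada"}']  # chatty preamble breaks parsing
results, failures = evaluate(outputs, heuristic_json_eval)
print(len(failures))  # -> 1
```

An LLM-as-judge evaluator would slot into the same `evaluator` parameter for the fuzzy cases, like factual accuracy against a ground-truth answer, where no simple heuristic exists.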
And I'm not saying they're perfect by any means, but we've found them to be kind of like better than BLEU or any of those metrics. And the way that I also like to use those is also just to like guide my eye about where to look. So like, you know, I might not trust the score of like 0.82 as like exactly correct, but like I can look to see like which data points are like flagged as passing or failing. And sometimes the evaluator's messing up, but it's like good to like, you know, I don't have to look at like a hundred data points. I can focus on like 10 or something like that. [00:30:10]Alessio: And then can you create a heuristic once in LangSmith? Like what's like your connection to that? [00:30:16]Harrison: Yeah. So right now, all the evaluation, we actually do client side. And part of this is basically due to the fact that a lot of the evaluation is really application specific. So we thought about having evaluators you could just click off and run server side or something like that. But we still think it's really early on in evaluation. We still think it's just really application specific. So we prioritized instead making it easy for people to write custom evaluators and then run them client side and then upload the results so that they can manually inspect them, because I think manual inspection is still a pretty big part of evaluation, for better or worse. [00:30:50]Swyx: We have these sort of components of observability. We have cost, latency, accuracy, and then planning. Is that listed in there? [00:30:57]Alessio: Well, planning more in the terms of like, if you're an agent, how to pick the right tool and whether or not you are picking the right tool. [00:31:02]Swyx: So when you talk to customers, how would you stack rank those needs? Are they cost sensitive? Are they latency sensitive? I imagine accuracy is pretty high up there. [00:31:13]Harrison: I think accuracy is definitely the top that we're seeing right now.
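Harrison's point about using evaluator scores to guide the eye rather than trusting them outright amounts to a triage step. A minimal sketch, where the record shape and 0.5 threshold are illustrative assumptions:

```python
# Evaluator scores aren't ground truth: filter to the flagged runs so a
# human reviews ten data points instead of a hundred.
runs = [
    {"id": 1, "score": 0.95},
    {"id": 2, "score": 0.31},
    {"id": 3, "score": 0.88},
    {"id": 4, "score": 0.12},
]

THRESHOLD = 0.5
flagged = [r for r in runs if r["score"] < THRESHOLD]
flagged_ids = [r["id"] for r in flagged]  # the runs worth a manual look
```

Even when the evaluator is occasionally wrong, narrowing the manual-inspection set this way is where most of the value is.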
I think a lot of the applications, people are, especially the ones that we're working with, people are still struggling to get them to work at a level where they're reliable [00:31:24]Swyx: enough. [00:31:25]Harrison: So that's definitely the first. Then I think probably cost becomes the next one. I think a few places where we've started to see this be like one of the main things is the AI simulation that came out. [00:31:36]Swyx: Generative agents. Yeah, exactly. [00:31:38]Harrison: Which is really fun to run, but it costs a lot of money. And so one of our team members, Lance, did an awesome job hooking up like a local model to it. You know, it's not as perfect, but I think it helps with that. Another really big place for this, we believe, is in like extraction of structured data from unstructured data. And the reason that I think it's so important there is that usually you do extraction as some type of like pre-processing or indexing process over your documents. I mean, there's a bunch of different use cases, but one use case is for that. And generally that's over a lot of documents. And so that starts to rack up a bill kind of quickly. And I think extraction is also like a simpler task than like reasoning about which tools to call next in an agent. And so I think it's better suited for that. Yeah. [00:32:15]Swyx: One of the heuristics I wanted to get your thoughts on: hallucination is one of the big problems there. Do you have any recommendations on how people should reduce hallucinations? [00:32:25]Harrison: To reduce hallucinations, we did a webinar on like evaluating RAG this past week. And I think there's this great project called Ragas that evaluates four different things across two different spectrums. So the two different spectrums are like, is the retrieval part right, or is the generation? Or sorry, like, is it messing up in retrieval or is it messing up in generation? And so I think to fix hallucination, it probably depends on where it's messing up.
If it's messing up in generation, then you're getting the right information, but it's still hallucinating. Or you're getting like partially right information and hallucinating some bits. A lot of that's prompt engineering. And so that's what we would recommend kind of like focusing on, the prompt engineering part. And then if you're getting it wrong in the, if you're just not retrieving the right stuff, then there's a lot of different things that you can probably do, or you should look at, on the retrieval bit. And honestly, that's where it starts to become a bit like application specific as well. Maybe there's some temporal stuff going on. Maybe you're not parsing things correctly. Yeah. [00:33:19]Swyx: Okay. [00:34:17]Harrison: Yeah, I mean, there's probably a larger discussion around that, but OpenAI definitely had a huge headstart, right? And that's... Claude's not even publicly available yet, I don't think. [00:34:28]Swyx: The API? Yeah. Oh, well, you can just basically ask any of the business reps and they'll give it to you. [00:34:33]Harrison: You can. But it's still a different signup process. I think there's... I'm bullish that other ones will catch up, especially like Anthropic and Google. The local ones are really interesting. I think we're seeing a big... [00:34:46]Swyx: Llama 2? Yeah, we're doing the fine-tuning hackathon tomorrow. Thanks for promoting that. [00:34:50]Harrison: No, thanks for it. I'm really excited about that stuff.
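The retrieval-versus-generation diagnosis Harrison describes can be sketched as a toy function. Naive substring matching here is purely for illustration; tools like Ragas do this far more rigorously with LLM-based metrics:

```python
def diagnose_rag_failure(retrieved_docs, answer, gold_fact):
    """Toy diagnostic splitting RAG failures into retrieval vs. generation.
    gold_fact is a ground-truth string the answer should contain."""
    retrieved_ok = any(gold_fact.lower() in d.lower() for d in retrieved_docs)
    answered_ok = gold_fact.lower() in answer.lower()
    if not retrieved_ok:
        return "retrieval"   # the right information never reached the prompt
    if not answered_ok:
        return "generation"  # retrieved correctly, but the model dropped it
    return "ok"
```

A "generation" result points toward prompt engineering; a "retrieval" result points toward the application-specific retrieval stack (parsing, chunking, temporal filters, and so on).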
I mean, that's something that like we've been, you know, because like, as I said, like the only thing we know is that the space is moving so fast and changing so rapidly. And like, local models have always been one of those things that people have been bullish on. And it seems like it's getting closer and closer to kind of like being viable. So I'm excited to see what we can do with some fine-tuning. [00:35:10]Swyx: Yeah. I have to confess, I did not know that you cared. It's not like a judgment on LangChain. I was just like, you know, you write an adapter for it and you're done, right? Like how much further does it go for LangChain? In terms of like, for you, it's one of the, you know, the model I/O modules and that's it. But like, you seem personally very passionate about it, but I don't know what the LangChain-specific angle for this is, for fine-tuning local models, basically. Like you're just passionate about local models and privacy and all that, right? And open source. [00:35:41]Harrison: Well, I think there's a few different things. Like one, like, you know, if we think about what it takes to build a really reliable, like context-aware reasoning application, there's probably a bunch of different nodes that are doing a bunch of different things. And I think it is like a really complex system. And so if you're relying on OpenAI for every part of that, like, I think that starts to get really expensive. Also, it's probably just like not good to have that much reliance on any one thing. And so I do think that like, I'm hoping that for like, you know, specific parts at the end, you can like fine-tune a model and kind of have a more specific thing for a specific task. Also, to be clear, like, I think like, I also, at the same time, think OpenAI is by far the easiest way to get started. And if I was building anything, I would absolutely start with OpenAI. So. [00:36:27]Swyx: It's something I think a lot of people are wrestling with.
But like, as a person building apps, why take five vendors when I can take one vendor, right? Like, as long as I trust Azure, I'm just entrusting all my data to Azure and that's it. So I'm still trying to figure out the real case for local models in production. And I don't know, but fine-tuning, I think, is a good one. That's why I guess OpenAI worked on fine-tuning. [00:36:49]Harrison: I think there's also like, you know, like if there's just more options available, like prices are going to go down. So I'm happy about that. So like very selfishly, there's that aspect as well. [00:37:01]Alessio: And in the LangSmith announcement, I saw in the product screenshot you have like chain, tool and LLM as like the three core atoms. Is that how people should think about observability in this space? Like first you go through the chain and then you start digging down between like the model itself and like the tool it's using? [00:37:19]Harrison: We've added more. We've added like retriever logging so that you can see like what query is going in and what are the documents you're getting out. Those are like the three that we started with. I definitely think probably the main ones, like basically the LLM. So the reason I think the debugging in LangSmith and debugging in general is so needed for these LLM apps is that if you're building, like, again, let's think about what we want people to build with LangChain: these like context-aware reasoning applications. Context aware: there's a lot of stuff in the prompt. There's like the instructions. There's any previous messages. There's any input this time. There's any documents you retrieve. And so there's a lot of like data engineering that goes into like putting it into that prompt. This sounds silly, but just like making sure the data shows up in the right format is like really important. And then for the reasoning part of it, like that's obviously also all in the prompt.
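The prompt "data engineering" Harrison describes — instructions, prior messages, retrieved documents, and the new input all landing in the prompt in the right format — can be sketched as follows. The layout and function name are illustrative, not a LangChain API:

```python
def build_prompt(instructions, history, question, docs):
    """Assemble the pieces of a context-aware prompt. Formatting bugs
    here (missing docs, mangled history) are a top debugging target."""
    context = "\n\n".join(f"[doc {i}] {d}" for i, d in enumerate(docs, 1))
    messages = "\n".join(f"{role}: {text}" for role, text in history)
    return (
        f"{instructions}\n\n"
        f"Conversation so far:\n{messages}\n\n"
        f"Retrieved context:\n{context}\n\n"
        f"User: {question}"
    )
```

Tracing the inputs and outputs of a step like this is exactly what makes it easy to spot when the data isn't showing up in the prompt the way you intended.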
And so being able to like, and there's like, you know, the state of the world right now: like if you have the instructions at the beginning or at the end can actually make like a big difference in terms of whether it forgets it or not. And so being able to kind of like. [00:38:17]Swyx: Yeah. And a take on that one, by the way: this is the U-curve in context, right? Yeah. [00:38:21]Harrison: I think it's real. Basically I've found long context windows really good for when I want to extract like a single piece of information about something, basically. But if I want to do reasoning over perhaps multiple pieces of information that are somewhere in like the retrieved documents, I've found it not to be that great. [00:38:36]Swyx: Yeah. I have said that that piece of research is the best bull case for LangChain and all the vector companies, because it means you should do chains. It means you should do retrieval instead of long context, right? People are trying to extend long context to like 100K, 1 million tokens, 5 million tokens. It doesn't matter. You're going to forget. You can't trust it. [00:38:54]Harrison: I expect that it will probably get better over time, as everything in this field does. But I do also think there'll always be a need for kind of like vector stores and retrieval in some fashion. [00:39:03]Alessio: How should people get started with LangSmith Cookbooks? Wanna talk maybe a bit about that? [00:39:08]Swyx: Yeah. [00:39:08]Harrison: Again, like I think the main thing that even I find valuable about LangSmith is just like the debugging aspect of it. And so for that, it's very simple. You can kind of like turn on three environment variables and it just logs everything. And you don't look at it 95% of the time, but that 5% you do when something goes wrong, it's quite handy to have there. And so that's probably the easiest way to get started. And we're still in a closed beta, but we're letting people off the wait list every day.
And if you really need access, just DM me and we're happy to give you access there. And then yeah, there's a lot that you can do with LangSmith that we've been talking about. And so Will on our team has been leading the charge on a really great like LangSmith Cookbooks repo that covers everything from collecting feedback, whether it's thumbs up, thumbs down, or like multi-scale or comments as well, to doing evaluation, doing testing. You can also use LangSmith without LangChain. And so we've got some notebooks on that in there. But we have Python and JavaScript SDKs that aren't dependent on LangChain in any way. [00:40:01]Swyx: And so you can use those. [00:40:01]Harrison: And then we'll also be publishing a notebook on how to do that just with the REST APIs themselves. So yeah, definitely check out that repo. That's a great resource that Will's put together. [00:40:10]Swyx: Yeah, awesome. So we'll zoom out a little bit from LangSmith and talk about LangChain, the company. You're also a first-time founder. Yes. And you've just hired your 10th employee, Julia, who I know from my data engineering days. You mentioned Will; Nuno, I think, maintains LangChain.js. I'm very interested in like your multi-language strategy, by the way. Ankush, your co-founder; Lance, who did AutoEval. What are you staffing up for? And maybe who are you hiring? [00:40:34]Harrison: Yeah, so 10 employees, 12 total. We've got three more joining over the next three weeks. We've got Julia, who's awesome, leading a lot of the product, go-to-market, customer success stuff. And then we've got Bri, who's also awesome, leading a lot of the marketing and ops aspects. And then other than that, all engineers. We've staffed up a lot on kind of like full stack, infra, DevOps, kind of like as we've started going into the hosted platform. So internally, we're split about 50-50 between the open source and then the platform stuff.
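The feedback-collection flow mentioned above (thumbs up/down, multi-scale scores, or comments attached to logged runs) might produce records shaped something like the sketch below. This is not the actual LangSmith SDK schema — just an illustration of the data you would accumulate into a dataset:

```python
from datetime import datetime, timezone

def make_feedback(run_id, score, comment=None):
    """Hypothetical feedback record tied to a logged run. Runs that
    collect a thumbs-up become candidates for an evaluation dataset."""
    return {
        "run_id": run_id,
        "score": score,          # e.g. 1 for thumbs-up, 0 for thumbs-down
        "comment": comment,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

thumbs_up = make_feedback("run-123", 1, "helpful answer")
```

Filtering logged runs on records like these is the "accumulate a dataset over time" flow Harrison described earlier.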
And yeah, we're looking to hire particularly on kind of like the things, we're actually looking to hire across most fronts, to be honest. But in particular, we probably need one or two more people on like open source, both Python and JavaScript, and happy to dive into the multi-language kind of like strategy there. But again, like strong focus there on engineering, actually, as opposed to maybe like, we're not a research lab, we're not a research shop. [00:41:48]Swyx: And then on the platform side, [00:41:49]Harrison: like we definitely need some more people on the infra and DevOps side. So I'm using this as an opportunity to tell people that we're hiring and that you should reach out if that sounds like you. [00:41:58]Swyx: Something like that, jobs, whatever. I don't actually know if we have an official jobs page. [00:42:02]Harrison: RIP, what happened to your landing page? [00:42:04]Swyx: It used to be so based. The Berkshire Hathaway one? Yeah, so what was the story, the quick story behind that? Yeah, the quick story behind that is we needed a website [00:42:12]Harrison: and I'm terrible at design. [00:42:14]Swyx: And I knew that we couldn't do a good job. [00:42:15]Harrison: So if you can't do a good job, might as well do the worst job possible. Yeah, and like lean into it. And have some fun with it, yeah. [00:42:21]Swyx: Do you admire Warren Buffett? Yeah, I admire Warren Buffett and admire his website. And actually you can still find a link to it [00:42:26]Harrison: from our current website if you look hard enough. So there's a little Easter egg. Before we dive into more of the open source community things, [00:42:33]Alessio: let's dive into the language thing. How do you think about parity between the Python and JavaScript versions? Obviously, they're very different ecosystems. So when you're working on LangChain, is it that you need to have the same abstractions in both languages, or do you adapt to the needs of each?
The core stuff, we want to have the same abstractions [00:42:50]Harrison: because we basically want to be able to serialize prompts, chains, agents, all the core stuff as tightly as possible and then use that between languages. Like even, yeah, like even right now when we log things to LangSmith, we have a playground experience where you can run things, and that runs in JavaScript because it's kind of like in the browser. But a lot of what's logged is like Python. And so we need that core equivalence for a lot of the core things. Then there's like the incredibly long tail of like integrations, more researchy things. So we want to be able to do that. Python's probably ahead on a lot of like the integrations front. There's more researchy things that we're able to include quickly because a lot of people release some of their code in Python and stuff like that. And so we can use that. And there's just more of an ecosystem around the Python project. But the core stuff will have kind of like the same abstractions and be translatable. That didn't go exactly where I was thinking. So like the LangChain of Ruby, the LangChain of C#, [00:43:44]Swyx: you know, there's demand for that. I mean, I think that's a big part of it. But you are giving up some real estate by not doing it. Yeah, it comes down to kind of like, you know, ROI and focus. And I think like we do think [00:43:58]Harrison: there's a strong JavaScript community and we wanted to lean into that. And I think a lot of the people that we brought on early, like Nuno and Jacob, have a lot of experience building JavaScript tooling in that community. And so I think that's a big part of it. Will we do another language? Never say never, but like... [00:44:21]Swyx: Python and JS for now. Yeah. Awesome. [00:44:23]Alessio: You've got 83 articles, which I think might be a record for such a young company.
What are like the hottest hits, the most popular ones? [00:44:32]Harrison: I think the most popular ones are generally the ones where we do a deep dive on something. So we did something a few weeks ago around evaluating CSV q