POPULARITY
Today on the show we have Ryan Seams, the Head of Customer Success at AssemblyAI.In this episode, Ryan shares his experience transitioning from Deloitte to the fast-paced world of startups, where he spent nearly a decade at Mixpanel scaling customer success and navigating product analytics.We then discussed the evolution of pricing models — from events-based to monthly tracked users and back again — and how that shaped customer behavior, retention, and satisfaction.We wrapped up by discussing how AI-native companies like AssemblyAI are redefining usage-based pricing, customer segmentation, and churn forecasting in a rapidly changing landscape.Mentioned ResourcesAssemblyAI Mixpanel Google Analytics Adobe Omniture OpenAI Churn FM is sponsored by Vitally, the all-in-one Customer Success Platform.
When Amplitude launched Mixpanel was the big game in town. They were first to market, had raised more money, and had a well-known brand. VCs passed on Amplitude because it seemed like just another Mixpanel.Today, Amplitude is a $1.5B public company—they're about 50% bigger than Mixpanel. Mixpanel's marketing spend helped educate the market. But before buying an analytics solution most businesses do market research. That's when they'd find out that Amplitude had several features Mixpanel lacked—and they were much, MUCH cheaper.It's not cool to win on price, but it works. It worked for WalMart, CostCo, Shein, and it worked for Ampltiude.Here's the story of how it all happened.Why you should listen:How to use cheaper prices to win in a crowded market.Why you often need 12 hour days to win in Startupland.Why even massive $1B+ successes often have trouble raising early rounds.How pivoting can often be the key to finding real market pull.Why big competitors can often be a huge tailwind.How to use storytelling to raising bigger rounds. Keywordsstartups, entrepreneurship, analytics, Amplitude, pricing strategy, market positioning, data processing, voice recognition, technology pivot, competitive advantage, market dynamics, differentiation, product-market fit, storytelling, fundraising, startup challenges, customer relationships, analytics tools, business strategy, entrepreneurshipTimestamps:(00:00:00) Intro(00:06:13) A cool demo-- but a bad business(00:18:36) Why funding was so hard(00:25:43) Why lower prices are a big differentiator(00:40:50) Working 24/7(00:50:35) Product Market FitSend me a message to let me know what you think!
Alison Barrett, Head of Scaled Customer Success at Airtable, shares her journey from consulting at Deloitte to shaping scalable customer success strategies at fast-growing tech companies like Slack and Mixpanel. She and Alex discuss the power of ambassador programs, AI use cases in CS, and the importance of cross-functional collaboration in building impactful customer education ecosystems.Chapters:00:00 - Intro05:43 - Early career: From Deloitte to startup life06:32 - Mixpanel & the rise of product analytics09:06 - Slack's champion program: Fostering in-house advocates12:29 - Scaling CS: Operations, tools and voice of customer18:16 - Airtable ambassador program & NDR success story21:55 - Building AI use cases in CS with Airtable30:29 - Creating a scalable customer education ecosystem36:15 - Prioritizing quick wins & standardizing playbooks37:28 - The power of cross-functional collaboration41:52 - Empowering teams through mission & vision clarity45:59 - Capturing executive alignment with walking decksEnjoy! I know I sure did...Alison's LinkedIn: https://www.linkedin.com/in/alison-barrett-cs/This episode of the DCX Podcast is brought to you by Thinkific Plus, a Customer Education platform designed to accelerate customer onboarding, streamline the customer experience and avoid employee burnout. For more information and to watch a demo, visit https://www.thinkific.com/plus/ Support the show+++++++++++++++++Like/Subscribe/Review:If you are getting value from the show, please follow/subscribe so that you don't miss an episode and consider leaving us a review. Website:For more information about the show or to get in touch, visit DigitalCustomerSuccess.com. Buy Alex a Cup of Coffee:This show runs exclusively on caffeine - and lots of it. If you like what we're, consider supporting our habit by buying us a cup of coffee: https://bmc.link/dcspThank you for all of your support!The Digital Customer Success Podcast is hosted by Alex Turkovic
Dans cet épisode, je reçois Guillaume, CEO de Green Go, une plateforme de voyage durable qui ambitionne de devenir un acteur clé du tourisme éco-responsable en Europe.Dès le premier jour, Green Go a misé sur une stratégie data-driven pour structurer son marketing, optimiser ses décisions et maximiser chaque euro investi. Guillaume partage avec nous son approche, ses outils et ses apprentissages pour piloter efficacement une entreprise dans un marché complexe et compétitif.Points clés abordés dans l'épisodePourquoi structurer une stratégie marketing autour de la data dès le départ.Les outils indispensables pour un suivi efficace des données : MetaBase, MixPanel, SEMrush, Customer.io.Comment éviter l'éparpillement en se concentrant sur les KPIs clés.Trouver l'équilibre entre branding et performance grâce à la data.Les résultats obtenus et les leçons apprises après plusieurs années de pilotage data-driven.Guillaume nous montre comment transformer la data en un levier puissant pour structurer et faire évoluer son marketing. Un épisode riche en conseils pratiques pour adopter une approche data-driven.⚙️ Les outils évoqués :MetaBase, MixPanel, SEMrush, Customer.io
We have covered Product Analytics tools in the past, including Pendo, Amplitude, Mixpanel, Heap and some new comers like LogRocket, PostHog and FullStory.There are many more tools in the market, and many of the favorite ones are also improving and changing, with new improvements and functionalities that create more value to their users.We are fortunate to have more product leaders who are interested in looking at how our tools of the trade compare, and the value to our community. Such a leader is Aakash Gupta, Chief Product Officer at Product Growth, author and podcaster of the Product Growth newsletter and podcast. In a recent LinkedIn post, Aakash published the result of his research looking into the top leading Product Analytics products (see link to the post below).We had the pleasure of chatting with Aakash about his research, and learning from him about his research, as well as about the role of Product Growth Managers. Join Matt and Moshe as they explore with Aakash:How he got to Product ManagementHis first experience with a real Product Analytics productHow he researched the industry for the post What he discovered on the way about these products and some key differentiatorsSome of the niche products in the marketHow the different tools are adopting AIHow startups can utilize these toolsHis prediction on who will winThe power it gives product managers, and especially Product Growth ManagersInsights into the role of Product Growth ManagersAnd so much more…You can connect with Aakash at:LinkedIn: https://www.linkedin.com/in/aagupta/Product Growth newsletter and podcast: https://www.news.aakashg.com/LinkedIn post “What I learned about the top Product Analytics players”: https://www.linkedin.com/posts/aagupta_based-on-3-months-of-research-25-buyer-activity-7226944089210982401-QnKwYou can find the podcast's page, and connect with Matt and Moshe on Linkedin: Product for Product Podcast - linkedin.com/company/product-for-product-podcastMatt Green - https://www.linkedin.com/in/mattgreenproduct/Moshe Mikanovsky - linkedin.com/in/mikanovsky/ Note: any views mentioned in the podcast are the sole views of our hosts and guests, and do not represent the products mentioned in any way.Please leave us a review and feedback ⭐️⭐️⭐️⭐️⭐️
Steve Blank is an Adjunct Professor at Stanford University, where he co-created the "Hacking for Defense" curriculum for the Department of Defense. As a consultant to top defense and intelligence organizations, Steve brings cutting-edge strategies to the national security sector. Before entering academia, Steve built eight different startups. He helped launch the Lean Startup movement with his May 2013 Harvard Business Review cover story. Steve also authored the acclaimed business books "The Four Steps to the Epiphany" and "The Startup Owner's Manual.” This episode's is guest host is Meka Asonye, a Partner at First Round Capital. Before joining First Round as an investor, Meka led go-to-market teams at both Stripe and Mixpanel. – In today's episode we discuss: Commercial versus military market strategies Finding mission solution fit The hidden challenges most startups miss Building relationships in National Security The new generation of “defense founders” Much more – Referenced: Alexander Osterwalder: https://www.linkedin.com/in/osterwalder/ Department of Defense: https://www.defense.gov/ Eric Ries: https://www.linkedin.com/in/eries/ Hacking for Defense: https://hackingfordefense-prod.stanford.edu/ How Saboteurs Threaten Innovation: https://steveblank.com/2024/07/30/why-large-organizations-struggle-with-disruption-and-what-to-do-about-it/ How to find your customer in the Dept of Defense: https://steveblank.com/2024/09/17/the-directory-of-dod-program-executive-offices-and-officers-peos/ Mission Model Canvas: https://steveblank.com/2019/09/ Pete Newell: https://www.linkedin.com/in/petenewell/ Special Operations Command: https://www.socom.mil/ The Frozen Middle: https://steveblank.com/2024/07/30/why-large-organizations-struggle-with-disruption-and-what-to-do-about-it/ The Hacking for Defense Manual: https://stanfordh4d.substack.com/p/the-hacking-for-defense-manual-a The Hacking for Defense Course: https://www.h4d.us/ The lean launchpad at Stanford: https://steveblank.com/2011/05/10/the-lean-launchpad-at-stanford-–-the-final-presentations/ The Secret History of Silicon Valley: https://steveblank.com/secret-history/ – Where to find Steve: LinkedIn: https://www.linkedin.com/in/steveblank/ Twitter/X: https://twitter.com/sgblank Website: https://steveblank.com/ – Where to find Meka: LinkedIn: https://www.linkedin.com/in/mekaasonye/ Twitter/X: https://x.com/bigmekastyle – Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ Twitter/X: https://twitter.com/firstround YouTube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast – Timestamps: (00:00) Introduction (02:27) Validating ideas for defense products (03:57) Guide to military sales and procurement (07:15) Rethinking GTM strategies (10:13) Building a network in national security (15:07) The dual-use debate (18:35) Behind the rising number of “defense founders” (22:30) “Mission solution fit” (24:35) Breaking new ground in military tech (26:09) Essential resources for any defense founder (28:59) What's missing from Silicon Valley
What's up everyone, today we have the pleasure of sitting down with Barbara Galiza, Growth and Marketing Analytics Consultant.Summary: Attribution is a bit like navigating Amsterdam's canals: mesmerizing but full of hidden turns that don't always make sense. You don't need to chart every twist—just focus on finding the direction that moves you forward. Instead of obsessing over every click, use attribution like a compass, not a GPS. Multi-touch attribution (MTA) gives you some of the story, but often misses those quiet yet powerful nudges that drive real decisions. Layering in rule-based or incrementality testing can fill the gaps, giving a clearer picture of what's driving your wins. For startups, it's even simpler: stick to what's working and forget complex attribution—qualitative feedback is often the best guide in the early days. Data doesn't need to be perfect, just practical, and sometimes trusting that a strategy is working is enough to keep pushing it.About BarbaraBarbara was an early employee at Her (YC), the biggest platform for LGBTQ women where she would eventually become Head of GrowthShe was also Head of Growth at different startups like Pariti and HomerunShe worked at an agency where she led data and analytics for Microsoft EMEABarbara then went out on her own as a GTM and Analytics consultant for various companies like Gitpod, WeTransfer, Sidekick and dbt LabsShe has a newsletter on marketing data: 021 newsletter.She produces content for data brands (dbt, Mixpanel, Amplitude) like case studies and webinarsBuilding Data Literacy Through SQLData literacy is essential for modern marketers, but it doesn't have to be intimidating. Barbara's advice is simple: learn SQL. While marketers today are surrounded by user-friendly tools and drag-and-drop interfaces, those who want to truly grasp their data should get comfortable with SQL. It's not about becoming a data engineer but about understanding how the numbers you rely on every day are built. SQL helps you see how data connects, how it's organized, and how you can group it to make sense of what's happening in your campaigns.What's great is that you don't need to dive into formal classes or certifications. Start where you are. Most companies are sitting on a goldmine of structured marketing data, whether it's Google Analytics data in BigQuery or Amplitude events stored in a data warehouse. The next time you're building a report, try using SQL for a small part of the process. It's a skill that compounds over time. Once you get familiar with the basics, you'll start to see data in a different way, and you'll be able to spot insights faster.Barbara also points out a crucial, often overlooked skill: understanding why your tools give credit to certain campaigns. Why does one Facebook ad outperform others in your reports? Why does Google Analytics attribute more conversions to certain sources? Getting to the bottom of these questions puts you in a much stronger position as a marketer. If you can explain how attribution models work and why certain data points appear, you're already ahead of most.At the end of the day, it's about making smarter decisions. Barbara believes that marketers who can confidently say, “I know why these numbers look the way they do,” are in the top 10% of data-driven marketers. It's not just about collecting data; it's about making sense of it and using it to steer your strategies.Key takeaway: Learning SQL gives marketers the power to truly understand their data. Starting small, even with basic queries, can unlock a deeper understanding of how marketing data is structured and why campaigns perform the way they do. The key is to build practical skills that help you make more informed decisions.Rethinking Attribution and Understanding Its Role in MeasurementBarbara brings clarity to two commonly conflated concepts: attribution and measurement. While many marketers default to thinking of attribution as purely click-based or multi-touch attribution (MTA), Barbara challenges this view. She argues that attribution goes beyond just tracking clicks and touches throughout a customer's journey. It's about understanding the overall impact of marketing efforts—whether through incrementality tests, media mix modeling (MMM), or holdout groups. Attribution is meant to explain how marketing drives results, but it's not the only tool for assessing campaign success.MTA, particularly click-based models, excels at measuring bottom-funnel actions like search marketing, where high-intent users click on an ad and then convert. This method works well for campaigns that rely on clicks to move the needle. However, Barbara notes that it has its limitations, especially when it comes to non-click-based channels like video or display. MTA often over-credits search campaigns because that's where the conversion is tracked, but it misses the broader influence of awareness-building efforts. In essence, MTA can tell you what happened after the click, but not what inspired it in the first place—be it a podcast mention or an engaging piece of content seen days before.On a broader level, Barbara explains that attribution is not the same as measurement. Attribution focuses specifically on tying marketing efforts to business results, such as leads or revenue. Measurement, on the other hand, casts a wider net. It includes performance across various metrics, not just conversions. For instance, measuring how well different messaging resonates with audiences is crucial, but it doesn't always directly lead to immediate sales. Measurement can inform future strategies by offering insights into engagement, customer preferences, and channel effectiveness.As Barbara sees it, attribution is a subset of measurement. It's a tool for understanding what drives business outcomes, but it shouldn't be the only tool marketers rely on. For example, MTA has its place but should be used alongside other models like MMM to paint a fuller picture. Measurement, meanwhile, helps marketers assess the effectiveness of everything from messaging to customer touchpoints, beyond just the end goal of conversion.Key takeaway: Attribution is one piece of the measurement puzzle, focusing on business outcomes, while measurement encompasses a broader range of insights. Marketers should use a mix of attribution models to understand their campaigns and apply measurement tools to gain a holistic view of performance.Limitations of Multi-Touch Attribution in Credit DistributionMulti-touch attribution (MTA) is often seen as a way to distribute credit across different customer touchpoints, but Barbara questions its effectiveness in this role. She argues that MTA is inherently limited because it only attributes credit to interactions that involve a click. This creates a skewed view of the customer journey, where only click-driven strategies—like search ads—are recognized, leaving other key touchpoints, like connected TV (CTV) or social media, out of the equation. The result is a narrow perspective that doesn't capture the full influence of various channels.Barbara points out that for marketers to make better decisions, MTA needs more than just click data. One alternative she suggests is pairing MTA with rule-based attribution models, where data from "How did you hear about us?" surveys are integrated into the analysis. This way, marketers can capture insights from channels that don't typically generate clicks but still play a crucial role in driving awareness or consideration. By adding this type of first-party data, businesses get a broader understanding of what's really influencing their customers.Some data agencies are also experimenting with es...
In this exciting episode of Product Coffee, host Kevin Gentry sits down with Brendan Fortune, Director of Product at Customer.io. Brendan shares his unique journey from the music industry to product leadership, how Customer.io scaled from 60 to nearly 300 employees, and the key role OKRs play in aligning teams for success. They also explore the concept of flywheels, managing product roadmaps, and the challenges of remote work. Brendan was nice enough to share his Annual Planning via Product Flywheel Miro board. Check it out here!
What's up everyone, today we have the pleasure of sitting down with Simon Heaton, Director of Growth Marketing at Buffer. Summary: Simon helps us explore Buffer's martech journey, highlighting their shift from traditional tools to a product-led approach driven by data and server-side analytics. We unpack their use of Customer.io for automation and hold out testing, Redash for data insights, and their agile sprint model that fosters continuous innovation. Discover how Buffer's small team thrives with efficient, data-driven strategies.About SimonSimon started his career in the agency world at Banfield in Ottawa, CanadaHe later moved over to Shopify where he would spend nearly 7 years, first as a content Marketing Manager and later as the Senior Growth Lead, AcquisitionSimon's also worn a part-time teaching hat for over 5 years, he was an Instructor with Telfer School of Management at UofO as well as a Professor at Algonquin CollegeHe's a startup mentor for founders that are part of the Singapore-based equity fund at AntlerToday Simon is Director of Growth Marketing at Buffer, the world-renowned social media management platformBuffer's Marketing Tech Stack and Why it Doesn't Include a CRMBuffer's marketing strategy is unique. They don't use a traditional CRM like HubSpot or Salesforce. Simon explains that Buffer is a product-led company without a dedicated sales team. This means they don't need typical CRM functionalities like lead routing and scoring. Instead, Buffer relies heavily on data and product analytics to drive their marketing efforts.The core of Buffer's operations is their data warehouse, with Segment acting as their Customer Data Platform (CDP). This setup allows Buffer to integrate various tools and centralize crucial information. Mixpanel, their product analytics tool, is pivotal in this system. It gathers both product usage and marketing data, providing a comprehensive view of user interactions.Simon highlights the importance of server-side tracking and integrating data from diverse sources such as AdWords, Customer.io, and Pendo. This integration helps Buffer understand the user lifecycle and measure the impact of marketing efforts beyond basic website metrics.Tools like Customer.io are also essential for Buffer. It manages most user communications, making it a critical component of their stack. The combination of Mixpanel, Customer.io, and other integrated tools ensures that Buffer can seamlessly track and analyze user behavior.Key takeaway: Not all B2B companies need a CRM or a sales team. A product-led approach, using robust data and product analytics tools, can effectively drive your marketing efforts and provide comprehensive insights into user behavior.The Power of a Visual and Intuitive Automation Flow InterfaceSimon loves working in a smaller team like Buffer, where he can get hands-on with their tools daily. He highlights how Buffer uses Customer.io for their marketing automation, a tool he's familiar with from his previous experience at Shopify. Unlike Shopify, which eventually switched to Salesforce Marketing Cloud for more enterprise-level needs, Buffer continues to thrive with Customer.io.Buffer relies on Customer.io to manage email marketing, push notifications for mobile apps, and various communication programs. Simon appreciates how the tool handles both marketing and transactional communications, offering a unified view of user interactions. This integration ensures consistency in messages, whether they're marketing emails or product notifications.Simon praises Customer.io's user-friendly interface, especially the journey mapping functionality and the WYSIWYG editor, which make it accessible for non-technical team members. Despite its ease of use, the platform also boasts deep technical capabilities, allowing for extensive customization through HTML and API integrations. This flexibility has been crucial for Buffer's needs.The integration with Segment, Buffer's Customer Data Platform (CDP), is particularly valuable. Simon emphasizes that having all data in Segment and seamlessly integrating it with Customer.io enables precise data handling. This setup ensures accurate and timely data flow, essential for personalized and effective marketing automation workflows.Key takeaway: Even as a small team, you can effectively manage complex marketing automation needs by choosing user-friendly tools like Customer.io that offer both simplicity and deep customization. This approach allows your non-technical team members to contribute meaningfully while ensuring your technical needs are met, enhancing overall efficiency and personalization in your communications.Experimentation and Holdout Testing at BufferExperimentation is a cornerstone of Buffer's approach, and Simon is particularly enthusiastic about the capabilities provided by Customer.io. He explains that the platform's holdout testing functionality is essential for validating new programs and comparing campaign performance. Unlike some tools, Customer.io counts a delivery for the holdout group, simplifying the tracking process over time.The integration with Segment and Mixpanel is a game-changer for Buffer. This setup allows them to surface Customer.io data in Mixpanel, creating unique reports and dashboards to support their experiments. Tracking differences in behavior between groups becomes straightforward, thanks to the detailed delivery events logged for both test and holdout groups. This level of detail ensures that Buffer can effectively measure the impact of their campaigns.Simon also highlights the ease of A/B testing within Customer.io. Whether at the message level or within workflows, the platform's randomization logic allows for extensive testing. Buffer can run tests on content, sequencing, and other variables, ensuring they continually optimize their marketing efforts. The ability to branch workflows and test different variants simultaneously is particularly valuable, enabling ongoing experimentation.Key takeaway: Leverage holdout testing and detailed event tracking within your marketing automation tools to gain deeper insights into your campaign effectiveness. This approach allows you to validate new programs, compare performance, and optimize your strategies based on precise, data-driven insights.Testing Journeys and Templating Language with QA Draft ModeSimon praises Customer.io's QA draft mode, a feature he finds invaluable for Buffer's marketing automation. This functionality allows the team to build complex workflows, trigger off specific data points, and test the entire process in a production environment without actually sending emails. It's a unique capability that Simon has not found in other tools, making it a standout feature of Customer.io.Simon highlights how QA draft mode lets them see real users qualifying for different branches of the workflow while emails remain in draft. This means they can verify that users are correctly segmented and the emails look as intended, all without prematurely sending any messages. This testing phase is crucial for catching errors that might not be evident during initial previews.Buffer has used this feature for several initiatives, such as new onboarding iterations and product notifications. Given the high frequency and volume of these emails, ensuring everything works perfectly before going live is essential. Simon appreciates that once the testing phase is complete, it only takes a click to start sending the validated emails to users.This capability saves time and reduces the risk of errors in live campaigns. It allows Buffer to maintain high st...
Kalina, Tech Trailblazer and Forbes 30 Under 30 North America Honoree, is a dynamic and visionary leader in the tech industry with over a decade of expertise in strategic event marketing, customer-led growth, and community building. As the founder of UnapologeTECH, she leads a global consulting platform dedicated to empowering individuals and businesses through training, content development, and community engagement, all with the mission of fostering a more equitable and accessible world. In her role as a marketing leader at the unicorn tech company Mixpanel, Kalina drives innovation and growth. She also serves as an advisor to the Venture Capital Firm First Round Capital and is an Evangelist for early-stage startups like Base.ai, a cutting-edge customer-led growth platform. Beyond her professional endeavors, Kalina is a renowned podcaster, author, and speaker, sharing her valuable insights on tech, marketing, and leadership.
In this exciting episode, Kevin Dean sits down with Aaron Jones, the Head of Strategic Partnerships at Maven AGI. The conversation spans Aaron's journey from a computer science major to his current role, and delves into the dynamic landscape of customer operations and AI.Episode Summary:Introduction: Kevin Dean introduces the podcast and welcomes Aaron Jones, highlighting their shared background and long-standing professional relationship.Interview Highlights:Aaron's Background: Aaron shares his career journey, from his early days as a developer to his pivotal roles at major companies like Adobe, Mixpanel, Sprinkler, Discovery Education, HubSpot, and now Maven AGI.AI and Customer Ops: Aaron discusses the similarities and differences between the current AI boom and the internet bubble of the 90s. He emphasizes the strategic application of AI and its significant impact on customer operations.Challenges in Customer Service: The discussion touches on the three main challenges: increasing customer expectations, limited resources, and the balance between human capital and technology.Framework for Success: Aaron outlines a strategic framework for organizations to efficiently manage customer interactions through self-service, AI, and human resources.Technology Evaluation: Insight into how businesses can assess and integrate the best technology solutions to enhance customer experience.Real-life Example:Maven AGI and TripAdvisor: Aaron shares a case study where Maven AGI helped TripAdvisor successfully deflect over 90% of inbound customer support cases using generative AI.Key Takeaways:Core Competency Focus: Emphasize core strengths and leverage best-in-breed solutions for customer support.Data Strategy: Ensure data is interconnected and contextualized for actionable insights.AI Implementation: Proper training and execution of AI models are crucial for success.Embracing Automation: Automation can enhance efficiency and create new job opportunities, rather than just replacing human roles.
It's another bittersweet episode as our hosts talk about wins and some sad news.Whee! wins the Oslo environmental prizeQueen Raae and her family on the way to the cabin officeThe past couple of weeks have been a whirlwind for Benedicte: attending Lillian's flute recital, spending a spa day with Salma (@whitep4nth3r on Twitter) in Larvik, and traveling back to Oslo to receive Whee!'s award. Last week was also very emotional for her and the family because they finally moved mom to the nursing home.Benedikt is feeling productive again this past week: finishing the Mixpanel integration and working on self-service CSV exports. He also enjoyed the meet up with bootstrapped founders at Frankfurt. And with everything going great with the new hire, Benedikt shares that this is Michael's final week with them.
Episode 123I spoke with Suhail Doshi about:* Why benchmarks aren't prepared for tomorrow's AI models* How he thinks about artists in a world with advanced AI tools* Building a unified computer vision model that can generate, edit, and understand pixels. Suhail is a software engineer and entrepreneur known for founding Mixpanel, Mighty Computing, and Playground AI (they're hiring!).Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions. Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSSFollow The Gradient on TwitterOutline:* (00:00) Intro* (00:54) Ad read — MLOps conference* (01:30) Suhail is *not* in pivot hell but he *is* all-in on 50% AI-generated music* (03:45) AI and music, similarities to Playground* (07:50) Skill vs. creative capacity in art* (12:43) What we look for in music and art* (15:30) Enabling creative expression* (18:22) Building a unified computer vision model, underinvestment in computer vision* (23:14) Enhancing the aesthetic quality of images: color and contrast, benchmarks vs user desires* (29:05) “Benchmarks are not prepared for how powerful these models will become”* (31:56) Personalized models and personalized benchmarks* (36:39) Engaging users and benchmark development* (39:27) What a foundation model for graphics requires* (45:33) Text-to-image is insufficient* (46:38) DALL-E 2 and Imagen comparisons, FID* (49:40) Compositionality* (50:37) Why Playground focuses on images vs. 3d, video, etc.* (54:11) Open source and Playground's strategy* (57:18) When to stop open-sourcing?* (1:03:38) Suhail's thoughts on AGI discourse* (1:07:56) OutroLinks:* Playground homepage* Suhail on Twitter Get full access to The Gradient at thegradientpub.substack.com/subscribe
In this episode, we talk with Susanna Morgan, a board member at a public company, Payoneer, a private company, Mixpanel (backed by Andreesen Horowitz) and a founding member of Guilds by FirstMark, a private community for C-Level executives at leading venture-backed companies from around the world, about her journey and insights.For more corporate governance podcasts, check out Feedspot's list of the Top 15 Corporate Governance Podcasts (which includes Women Governance Trailblazers (listed as Women Governance Gurus)) - https://blog.feedspot.com/corporate_governance_podcasts/
Dive into the marketing mind of Garrett Hammonds! From teacher aspirations to a marketing maestro, Garrett unravels his journey and spills secrets about standing out in digital marketing. Discover B2B success, conference ROI, and crafting relationships that convert. Get your notepads ready!Here are a few topics we'll discuss on this episode of Hard to Market Podcast.From teaching to marketing guru.Niche focus in digital marketing.ROI tracking in corporate events.Relationship building in B2B.Tools & insights for marketing success.Resources:Nomadic Marketing + SoftwarePodcast ChefConnect with Garrett Hammonds:LinkedInConnect with our host, Brian Mattocks:LinkedInEmailQuotables:08:38 - We didn't need something as large scale like a Salesforce or you know, a larger plan on something like a HubSpot. So ActiveCampaign helps us be able to automate what we need to keep people flowing through the pipeline. And that's where we keep up with the lifetime value as well. Other things that we use as tracking measures, we do use Zoom Info as a tool. So we know if companies that we have been keeping up with have visited our website, connects up perfect to GA IV and actually passes those parameters into the reports that we can have there. And then we also use Mixpanel. That one's gonna be just kind of a secondary backup to some of our other tracking. So if something ever goes down, we have mixed panel that can, you know, kind of act as a act as another system to pull in.11:47 - Being a digital marketing agency, we have some different ways in which, you know, we've pulled in clients through free Google Ads audits and you know, there's all kinds of different pathways that we employ. But I think at the heart of all of that, even when looking at any kind of digital piece, it's always gonna need to come back to a core objective of how can I connect with these businesses, these business owners, the people, the humans on the other side, and really listen to the needs that they have and are we a good fit for helping them? And that's where that relationship piece comes in.10:14 - Garrett: Most of the ones we've gone to have been a big success though. But we've gone to specific industry conferences for the verticals that we serve. So staying away from more general professional conferences and going to very, very specific industry vertical conferences.Brian: So you're using those conferences to nurture relationships, you're continuing to grow the referrals you already had and you're increasing your lifetime value for your current client base. And I think that's like,as a three-legged stool, that's a really a great approach.17:45 - Other clients, people are just searching for very, very specific things on search engines you know, finding a very specific industry publication, we've crossed over at times with traditional marketing and magazine things. It really just depends and it's really important for anybody who's trying to market their business to know your market and what it looks like and know your audience. And it's gonna be one of my, one of my big themes that you may hear from me that, that relationship that links back to knowing your audience.20:22 - It's central because the audience at the end of the day, drives demand and their needs, whether they are always initially aware of them or not, their needs are the thing that is going to make it to where you can actually provide solutions for them. Not make sales, but offer solutions. And I think that's key as well, knowing your skills and your tools. One of the most foundational things that have helped me grow in my career has been this one, knowing tools. I didn't of course graduate with a marketing degree Connect with our host, Brian Mattocks:LinkedInEmailSchedule a Free Podcast Consult
Dalton Caldwell is Managing Director and Group Partner at Y Combinator. Prior to YC, he was the co-founder and CEO of imeem (acquired by MySpace in 2009) and the co-founder and CEO of App.net. During his time at YC, he's advised more than 35 YC unicorns, including DoorDash, Amplitude, Webflow, and Retool, and has worked across 21 different YC batches. He's also racked up more than 6,500 office hours with founders. In our conversation, we discuss:• Why founders need to adopt the mindset “Just don't die”• The most common reason startups fail• When to pivot, and characteristics of a good pivot• The concept of “tar pit ideas” and examples of bad startup ideas• Why investors say no to startups• The importance of market size in investment decisions• The pitfalls of founders over-delegating• Effective ways to talk to customers• 20 ideas Dalton is looking to fund—Brought to you by:• Eppo—Run reliable, impactful experiments• Vanta—Automate compliance. Simplify security• Coda—The all-in-one collaborative workspace—Find the transcript at: https://www.lennysnewsletter.com/p/lessons-from-1000-yc-startups—Where to find Dalton Caldwell:• X: https://twitter.com/daltonc• LinkedIn: https://www.linkedin.com/in/daltoncaldwell/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Dalton's background(04:41) The value of simple advice(07:04) Dalton's advice: “Just don't die”(08:39) Knowing when to stop(11:45) Deciding to pivot(14:26) Characteristics of a good pivot(17:53) Knowing when to pivot(19:03) Zip's journey and finding a market(21:22) Why Dalton says to “Move towards the mountains and the desert”(23:45) Tar pit ideas(26:49) Understanding why investors say no(29:14) The importance of market size(32:16) Avoiding over-delegation and hiring senior people too early(36:43) Why startups fail(40:30) Effectively talking to customers(45:17) Examples of startups hustling to talk to customers(48:01) Patterns of successful startups(52:05) YC's Request for Startups(55:37) Early days of Silicon Valley(01:05:33) Contrarian corner: growth hacking for early startups(01:09:28) Failure corner(01:11:15) Closing thoughts(01:12:22) Lightning round—Referenced:• Y Combinator: https://www.ycombinator.com/• Tiger Woods's website: https://tigerwoods.com/• Co-Founder Mistakes That Kill Companies & How to Avoid Them: https://www.youtube.com/watch?v=dlfjs_eEEzs• Daniel Alberson's LinkedIn post about Y Combinator: https://www.linkedin.com/posts/alberson_i-left-my-dream-job-as-a-product-manager-activity-7089677882431533056-jJ9H• Companies in Y Combinator W17 Batch: https://www.ycdb.co/batch/w17• Brex: https://www.brex.com/• Retool: https://retool.com/• Segment: https://segment.com/• Mixpanel: https://mixpanel.com/• Whatnot: https://www.whatnot.com/• Andreessen Horowitz: https://a16z.com/• Airbnb's CEO says a $40 cereal box changed the course of the multibillion-dollar company: https://fortune.com/2023/04/19/airbnb-ceo-cereal-box-investors-changed-everything-billion-dollar-company/• Rujul Zaparde on LinkedIn: https://www.linkedin.com/in/rujulz/• Zip: https://ziphq.com/• Lu Cheng on LinkedIn: https://www.linkedin.com/in/lu-cheng-973b7830/• Avoid these tempting startup tar pit ideas: https://www.ycombinator.com/library/Ij-avoid-these-tempting-startup-tarpit-ideas• Airbnb acquires Localmind to create crowdsourced advice about neighborhoods: https://skift.com/2012/12/13/airbnb-acquires-localmind-to-create-crowdsourced-advice-about-neighborhoods/• Foursquare: https://foursquare.com/• Razorpay: https://razorpay.com/• Total Addressable Market: https://www.productplan.com/glossary/total-addressable-market/• Lenny Bogdonoff on LinkedIn: https://www.linkedin.com/in/rememberlenny/• Milk Video: https://milkvideo.com/• Lessons from working with 600+ YC startups | Gustaf Alströmer (Y Combinator, Airbnb): https://www.lennyspodcast.com/lessons-from-working-with-600-yc-startups-gustaf-alstromer-y-combinator-airbnb/• How the most successful B2B startups came up with their original idea: https://www.lennysnewsletter.com/p/how-the-most-successful-b2b-startups• Collison installation: https://news.ycombinator.com/item?id=18400504• Stripe: https://stripe.com/• Patrick Collison on LinkedIn: https://www.linkedin.com/in/patrickcollison/• John Collison on LinkedIn: https://www.linkedin.com/in/johnbcollison/• Tony Xu on LinkedIn: https://www.linkedin.com/in/xutony/• Grant LaFontaine on LinkedIn: https://www.linkedin.com/in/grantlafontaine/• Ryan Petersen on LinkedIn: https://www.linkedin.com/in/rpetersen/• Lessons on building product sense, navigating AI, optimizing the first mile, and making it through the messy middle | Scott Belsky (Adobe, Behance): https://www.lennyspodcast.com/lessons-on-building-product-sense-navigating-ai-optimizing-the-first-mile-and-making-it-through-t/• YC's latest Request for Startups: https://www.ycombinator.com/blog/ycs-latest-request-for-startups• ERPs: https://www.ycombinator.com/rfs#new-enterprise-resource-planning-software• Commercial open source companies: https://www.ycombinator.com/rfs#commercial-open-source-companies• New space companies: https://www.ycombinator.com/rfs#new-space-companies• A way to end cancer: https://www.ycombinator.com/rfs#a-way-to-end-cancer• Spatial computing: https://www.ycombinator.com/rfs#spatial-computing• New defense technology: https://www.ycombinator.com/rfs#new-defense-technology• Bringing manufacturing back to America: https://www.ycombinator.com/rfs#bring-manufacturing-back-to-america• Better enterprise glue: https://www.ycombinator.com/rfs#better-enterprise-glue• Small fine-tuned models, as an alternative to giant generic ones: https://www.ycombinator.com/rfs#small-finetuned-models-as-an-alternative-to-giant-generic-ones• Reid Hoffman on LinkedIn: https://www.linkedin.com/in/reidhoffman/• Sam Altman on X: https://twitter.com/sama• Sean Parker on LinkedIn: https://www.linkedin.com/in/parkersean/• Owen Van Natta on LinkedIn: https://www.linkedin.com/in/owen-van-natta-444a7/• iMeme: https://apps.apple.com/us/app/imeme-generator/id1560021364• Marc Andreessen on X: https://twitter.com/pmarca• Picplz 1, Instagram 0 as VC firm Andreessen Horowitz chooses photo app rival: https://www.reuters.com/article/idUS2587232395/• Gustaf Alstromer—How to Get Users and Grow: https://www.youtube.com/watch?v=T9ikpoF2GH0• Getting to Yes: Negotiating Agreement Without Giving In: https://www.amazon.com/Getting-Yes-Negotiating-Agreement-Without/dp/0143118757• Founding Sales: The Early Stage Go-to-Market Handbook: https://www.amazon.com/Founding-Sales-Go-Market-Handbook-ebook/dp/B08PMK17Z1• Founder-led sales | Pete Kazanjy (Founding Sales, Atrium): https://www.lennyspodcast.com/founder-led-sales-pete-kazanjy-founding-sales-atrium/• The Sopranos on HBO: https://www.hbo.com/the-sopranos• The Wire on HBO: https://www.hbo.com/the-wire• Columbo on Prime Video: https://www.amazon.com/Columbo-Season-1/dp/B008SA89HA• Oura ring: https://ouraring.com/• Apple watch: https://www.apple.com/watch/• SiPhox: https://siphoxhealth.com/• Dalton & Michael on YouTube: https://www.youtube.com/playlist?list=PLQ-uHSnFig5Nd98Sc9I-kkc0ZWe8peRMC• How Future Billionaires Get Sh*t Done: https://www.youtube.com/watch?v=ephzgxgOjR0• The Student's Guide to Becoming a Successful Startup Founder: https://www.youtube.com/watch?v=O5KCB2p6SB8—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Robby engages with independent consultant and author, Andrew Atkinson, delving into the intricate world of software development and database maintenance. The duo kicks off with a profound exploration of the importance of swift and intuitive change management in software, unraveling the key traits that transform a database into a well-maintained powerhouse. From securing data to cleaning up legacy information, they shed light on the often-neglected aspects that can significantly enhance a software engineer's efficiency.As the conversation flows, Andrew unveils the secrets behind his latest book, "High Performance PostgreSQL for Rails," tracing its origins from an internal slide deck to a valuable resource for developers beyond the Rails framework. The episode explores the nuanced process of "Unshipping," as Andrew dissects Mixpanel's article, offering a roadmap for deprecating features without disappointing customers. The episode is a treasure trove of insights, covering everything from optimizing database performance with rules to navigating the tricky terrain of advocating for codebase improvements in the face of reluctant stakeholders. Don't miss out on this dynamic exchange of ideas; tune in to the episode now for an enlightening journey through the realms of software development and database management.Book Recommendations:Staff Engineer: Leadership Beyond the Management Track By Will LarsonHelpful Links:Mixpanel: The art of removing features and productsOrder High Performance PostgreSQL for Rails (USE PROMO CODE: Maintainable for 35% off!)Coverbandhttps://andyatkinson.com/https://github.com/andyatkinsonhttps://www.linkedin.com/in/andyatkinsonThanks to Our Sponsor!Turn hours of debugging into just minutes! AppSignal is a performance monitoring and error tracking tool designed for Ruby, Elixir, Python, Node.js, Javascript, and soon, other frameworks. It offers six powerful features with one simple interface, providing developers with real-time insights into the performance and health of web applications. Keep your coding cool and error-free, one line at a time! Check them out! Subscribe to Maintainable on:Apple PodcastsOvercastSpotifyOr search "Maintainable" wherever you stream your podcasts.Keep up to date with the Maintainable Podcast by joining the newsletter.
We are running an end of year survey for our listeners! Please let us know any feedback you have, what episodes resonated with you, and guest requests for 2024! Survey link here!Before language models became all the rage in November 2022, image generation was the hottest space in AI (it was the subject of our first piece on Latent Space!) In our interview with Sharif Shameem from Lexica we talked through the launch of StableDiffusion and the early days of that space. At the time, the toolkit was still pretty rudimentary: Lexica made it easy to search images, you had the AUTOMATIC1111 Web UI to generate locally, some HuggingFace spaces that offered inference, and eventually DALL-E 2 through OpenAI's platform, but not much beyond basic text-to-image workflows.Today's guest, Suhail Doshi, is trying to solve this with Playground AI, an image editor reimagined with AI in mind. Some of the differences compared to traditional text-to-image workflows:* Real-time preview rendering using consistency: as you change your prompt, you can see changes in real-time before doing a final rendering of it.* Style filtering: rather than having to prompt exactly how you'd like an image to look, you can pick from a whole range of filters both from Playground's model as well as Stable Diffusion (like RealVis, Starlight XL, etc). We talk about this at 25:46 in the podcast.* Expand prompt: similar to DALL-E3, Playground will do some prompt tuning for you to get better results in generation. Unlike DALL-E3, you can turn this off at any time if you are a prompting wizard* Image editing: after generation, you have tools like a magic eraser, inpainting pencil, etc. This makes it easier to do a full workflow in Playground rather than switching to another tool like Photoshop.Outside of the product, they have also trained a new model from scratch, Playground v2, which is fully open source and open weights and allows for commercial usage. They benchmarked the model against SDXL across 1,000 prompts and found that humans preferred the Playground generation 70% of the time. They had similar results on PartiPrompts:They also created a new benchmark, MJHQ-30K, for “aesthetic quality”:We introduce a new benchmark, MJHQ-30K, for automatic evaluation of a model's aesthetic quality. The benchmark computes FID on a high-quality dataset to gauge aesthetic quality.We curate the high-quality dataset from Midjourney with 10 common categories, each category with 3K samples. Following common practice, we use aesthetic score and CLIP score to ensure high image quality and high image-text alignment. Furthermore, we take extra care to make the data diverse within each category.Suhail was pretty open with saying that Midjourney is currently the best product for imagine generation out there, and that's why they used it as the base for this benchmark. I think it's worth comparing yourself to maybe the best thing and try to find like a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be kind of comparing yourself on like some Google model or some old SD, Stable Diffusion model and be like, look, we beat Stable Diffusion 1.5. I think users ultimately want care, how close are you getting to the thing that people mostly agree with? [00:23:47]We also talked a lot about Suhail's founder journey from starting Mixpanel in 2009, then going through YC again with Mighty, and eventually sunsetting that to pivot into Playground. Enjoy!Show Notes* Suhail's Twitter* “Starting my road to learn AI”* Bill Gates book trip* Playground* Playground v2 Announcement* $40M raise announcement* “Running infra dev ops for 24 A100s”* Mixpanel* Mighty* “I decided to stop working on Mighty”* Fast.ai* CivitTimestamps* [00:00:00] Intros* [00:02:59] Being early in ML at Mixpanel* [00:04:16] Pivoting from Mighty to Playground and focusing on generative AI* [00:07:54] How DALL-E 2 inspired Mighty* [00:09:19] Reimagining the graphics editor with AI* [00:17:34] Training the Playground V2 model from scratch to advance generative graphics* [00:21:11] Techniques used to improve Playground V2 like data filtering and model tuning* [00:25:21] Releasing the MJHQ30K benchmark to evaluate generative models* [00:30:35] The limitations of current models for detailed image editing tasks* [00:34:06] Using post-generation user feedback to create better benchmarks* [00:38:28] Concerns over potential misuse of powerful generative models* [00:41:54] Rethinking the graphics editor user experience in the AI era* [00:45:44] Integrating consistency models into Playground using preview rendering* [00:47:23] Interacting with the Stable Diffusion LoRAs community* [00:51:35] Running DevOps on A100s* [00:53:12] Startup ideas?TranscriptAlessio: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. [00:00:15]Swyx: Hey, and today in the studio we have Suhail Doshi, welcome. [00:00:18]Suhail: Yeah, thanks. Thanks for having me. [00:00:20]Swyx: So among many things, you're a CEO and co-founder of Mixpanel, and I think about three years ago you left to start Mighty, and more recently, I think about a year ago, transitioned into Playground, and you've just announced your new round. How do you like to be introduced beyond that? [00:00:34]Suhail: Just founder of Playground is fine, yeah, prior co-founder and CEO of Mixpanel. [00:00:40]Swyx: Yeah, awesome. I'd just like to touch on Mixpanel a little bit, because it's obviously one of the more successful analytics companies we previously had amplitude on, and I'm curious if you had any reflections on the interaction of that amount of data that people would want to use for AI. I don't know if there's still a part of you that stays in touch with that world. [00:00:59]Suhail: Yeah, I mean, the short version is that maybe back in like 2015 or 2016, I don't really remember exactly, because it was a while ago, we had an ML team at Mixpanel, and I think this is when maybe deep learning or something really just started getting kind of exciting, and we were thinking that maybe given that we had such vast amounts of data, perhaps we could predict things. So we built two or three different features, I think we built a feature where we could predict whether users would churn from your product. We made a feature that could predict whether users would convert, we built a feature that could do anomaly detection, like if something occurred in your product, that was just very surprising, maybe a spike in traffic in a particular region, can we tell you that that happened? Because it's really hard to like know everything that's going on with your data, can we tell you something surprising about your data? And we tried all of these various features, most of it boiled down to just like, you know, using logistic regression, and it never quite seemed very groundbreaking in the end. And so I think, you know, we had a four or five person ML team, and I think we never expanded it from there. And I did all these Fast AI courses trying to learn about ML. And that was the- That's the first time you did fast AI. Yeah, that was the first time I did fast AI. Yeah, I think I've done it now three times, maybe. [00:02:12]Swyx: Oh, okay. [00:02:13]Suhail: I didn't know it was the third. No, no, just me reviewing it, it's maybe three times, but yeah. [00:02:16]Swyx: You mentioned prediction, but honestly, like it's also just about the feedback, right? The quality of feedback from users, I think it's useful for anyone building AI applications. [00:02:25]Suhail: Yeah. Yeah, I think I haven't spent a lot of time thinking about Mixpanel because it's been a long time, but sometimes I'm like, oh, I wonder what we could do now. And then I kind of like move on to whatever I'm working on, but things have changed significantly since. [00:02:39]Swyx: And then maybe we'll touch on Mighty a little bit. Mighty was very, very bold. My framing of it was, you will run our browsers for us because everyone has too many tabs open. I have too many tabs open and slowing down your machines that you can do it better for us in a centralized data center. [00:02:51]Suhail: Yeah, we were first trying to make a browser that we would stream from a data center to your computer at extremely low latency, but the real objective wasn't trying to make a browser or anything like that. The real objective was to try to make a new kind of computer. And the thought was just that like, you know, we have these computers in front of us today and we upgrade them or they run out of RAM or they don't have enough RAM or not enough disk or, you know, there's some limitation with our computers, perhaps like data locality is a problem. Why do I need to think about upgrading my computer ever? And so, you know, we just had to kind of observe that like, well, actually it seems like a lot of applications are just now in the browser, you know, it's like how many real desktop applications do we use relative to the number of applications we use in the browser? So it's just this realization that actually like, you know, the browser was effectively becoming more or less our operating system over time. And so then that's why we kind of decided to go, hmm, maybe we can stream the browser. Fortunately, the idea did not work for a couple of different reasons, but the objective is try to make sure new computer. [00:03:50]Swyx: Yeah, very, very bold. [00:03:51]Alessio: Yeah, and I was there at YC Demo Day when you first announced it. It was, I think, the last or one of the last in-person ones, at Pier34 in Mission Bay. How do you think about that now when everybody wants to put some of these models in people's machines and some of them want to stream them in, do you think there's maybe another wave of the same problem before it was like browser apps too slow, now it's like models too slow to run on device? [00:04:16]Suhail: Yeah. I mean, I've obviously pivoted away from Mighty, but a lot of what I somewhat believed at Mighty, maybe why I'm so excited about AI and what's happening, a lot of what Mighty was about was like moving compute somewhere else, right? Right now, applications, they get limited quantities of memory, disk, networking, whatever your home network has, et cetera. You know, what if these applications could somehow, if we could shift compute, and then these applications have vastly more compute than they do today. Right now it's just like client backend services, but you know, what if we could change the shape of how applications could interact with things? And it's changed my thinking. In some ways, AI has like a bit of a continuation of my belief that like perhaps we can really shift compute somewhere else. One of the problems with Mighty was that JavaScript is single-threaded in the browser. And what we learned, you know, the reason why we kind of abandoned Mighty was because I didn't believe we could make a new kind of computer. We could have made some kind of enterprise business, probably it could have made maybe a lot of money, but it wasn't going to be what I hoped it was going to be. And so once I realized that most of a web app is just going to be single-threaded JavaScript, then the only thing you could do largely withstanding changing JavaScript, which is a fool's errand most likely, make a better CPU, right? And there's like three CPU manufacturers, two of which sell, you know, big ones, you know, AMD, Intel, and then of course like Apple made the M1. And it's not like single-threaded CPU core performance, single-core performance was increasing very fast, it's plateauing rapidly. And even these different companies were not doing as good of a job, you know, sort of with the continuation of Moore's law. But what happened in AI was that you got like, if you think of the AI model as like a computer program, like just like a compiled computer program, it is literally built and designed to do massive parallel computations. And so if you could take like the universal approximation theorem to its like kind of logical complete point, you know, you're like, wow, I can get, make computation happen really rapidly and parallel somewhere else, you know, so you end up with these like really amazing models that can like do anything. It just turned out like perhaps the new kind of computer would just simply be shifted, you know, into these like really amazing AI models in reality. Yeah. [00:06:30]Swyx: Like I think Andrej Karpathy has always been, has been making a lot of analogies with the LLMOS. [00:06:34]Suhail: I saw his video and I watched that, you know, maybe two weeks ago or something like that. I was like, oh man, this, I very much resonate with this like idea. [00:06:41]Swyx: Why didn't I see this three years ago? [00:06:43]Suhail: Yeah. I think, I think there still will be, you know, local models and then there'll be these very large models that have to be run in data centers. I think it just depends on kind of like the right tool for the job, like any engineer would probably care about. But I think that, you know, by and large, like if the models continue to kind of keep getting bigger, you're always going to be wondering whether you should use the big thing or the small, you know, the tiny little model. And it might just depend on like, you know, do you need 30 FPS or 60 FPS? Maybe that would be hard to do, you know, over a network. [00:07:13]Swyx: You tackled a much harder problem latency wise than the AI models actually require. Yeah. [00:07:18]Suhail: Yeah. You can do quite well. You can do quite well. You definitely did 30 FPS video streaming, did very crazy things to make that work. So I'm actually quite bullish on the kinds of things you can do with networking. [00:07:30]Swyx: Maybe someday you'll come back to that at some point. But so for those that don't know, you're very transparent on Twitter. Very good to follow you just to learn your insights. And you actually published a postmortem on Mighty that people can read up on and willing to. So there was a bit of an overlap. You started exploring the AI stuff in June 2022, which is when you started saying like, I'm taking fast AI again. Maybe, was there more context around that? [00:07:54]Suhail: Yeah. I think I was kind of like waiting for the team at Mighty to finish up, you know, something. And I was like, okay, well, what can I do? I guess I will make some kind of like address bar predictor in the browser. So we had, you know, we had forked Chrome and Chromium. And I was like, you know, one thing that's kind of lame is that like this browser should be like a lot better at predicting what I might do, where I might want to go. It struck me as really odd that, you know, Chrome had very little AI actually or ML inside this browser. For a company like Google, you'd think there's a lot. Code is actually just very, you know, it's just a bunch of if then statements is more or less the address bar. So it seemed like a pretty big opportunity. And that's also where a lot of people interact with the browser. So, you know, long story short, I was like, hmm, I wonder what I could build here. So I started to take some AI courses and review the material again and get back to figuring it out. But I think that was somewhat serendipitous because right around April was, I think, a very big watershed moment in AI because that's when Dolly 2 came out. And I think that was the first truly big viral moment for generative AI. [00:08:59]Swyx: Because of the avocado chair. [00:09:01]Suhail: Yeah, exactly. [00:09:02]Swyx: It wasn't as big for me as Stable Diffusion. [00:09:04]Suhail: Really? [00:09:05]Swyx: Yeah, I don't know. Dolly was like, all right, that's cool. [00:09:07]Suhail: I don't know. Yeah. [00:09:09]Swyx: I mean, they had some flashy videos, but it didn't really register. [00:09:13]Suhail: That moment of images was just such a viral novel moment. I think it just blew people's mind. Yeah. [00:09:19]Swyx: I mean, it's the first time I encountered Sam Altman because they had this Dolly 2 hackathon and they opened up the OpenAI office for developers to walk in back when it wasn't as much of a security issue as it is today. I see. Maybe take us through the journey to decide to pivot into this and also choosing images. Obviously, you were inspired by Dolly, but there could be any number of AI companies and businesses that you could start and why this one, right? [00:09:45]Suhail: Yeah. So I think at that time, Mighty and OpenAI was not quite as popular as it is all of a sudden now these days, but back then they had a lot more bandwidth to kind of help anybody. And so we had been talking with the team there around trying to see if we could do really fast low latency address bar prediction with GPT-3 and 3.5 and that kind of thing. And so we were sort of figuring out how could we make that low latency. I think that just being able to talk to them and kind of being involved gave me a bird's eye view into a bunch of things that started to happen. Latency first was the Dolly 2 moment, but then stable diffusion came out and that was a big moment for me as well. And I remember just kind of like sitting up one night thinking, I was like, you know, what are the kinds of companies one could build? Like what matters right now? One thing that I observed is that I find a lot of inspiration when I'm working in a field in something and then I can identify a bunch of problems. Like for Mixpanel, I was an intern at a company and I just noticed that they were doing all this data analysis. And so I thought, hmm, I wonder if I could make a product and then maybe they would use it. And in this case, you know, the same thing kind of occurred. It was like, okay, there are a bunch of like infrastructure companies that put a model up and then you can use their API, like Replicate is a really good example of that. There are a bunch of companies that are like helping you with training, model optimization, Mosaic at the time, and probably still, you know, was doing stuff like that. So I just started listing out like every category of everything, of every company that was doing something interesting. I started listing out like weights and biases. I was like, oh man, weights and biases is like this great company. Do I want to compete with that company? I might be really good at competing with that company because of Mixpanel because it's so much of like analysis. But I was like, no, I don't want to do anything related to that. That would, I think that would be too boring now at this point. So I started to list out all these ideas and one thing I observed was that at OpenAI, they had like a playground for GPT-3, right? All it was is just like a text box more or less. And then there were some settings on the right, like temperature and whatever. [00:11:41]Swyx: Top K. [00:11:42]Suhail: Yeah, top K. You know, what's your end stop sequence? I mean, that was like their product before GPT, you know, really difficult to use, but fun if you're like an engineer. And I just noticed that their product kind of was evolving a little bit where the interface kind of was getting a little bit more complex. They had like a way where you could like generate something in the middle of a sentence and all those kinds of things. And I just thought to myself, I was like, everything is just like this text box and you generate something and that's about it. And stable diffusion had kind of come out and it was all like hugging face and code. Nobody was really building any UI. And so I had this kind of thing where I wrote prompt dash like question mark in my notes and I didn't know what was like the product for that at the time. I mean, it seems kind of trite now, but I just like wrote prompt. What's the thing for that? Manager. Prompt manager. Do you organize them? Like, do you like have a UI that can play with them? Yeah. Like a library. What would you make? And so then, of course, then you thought about what would the modalities be given that? How would you build a UI for each kind of modality? And so there are a couple of people working on some pretty cool things. And I basically chose graphics because it seemed like the most obvious place where you could build a really powerful, complex UI. That's not just only typing a box. It would very much evolve beyond that. Like what would be the best thing for something that's visual? Probably something visual. Yeah. I think that just that progression kind of happened and it just seemed like there was a lot of effort going into language, but not a lot of effort going into graphics. And then maybe the very last thing was, I think I was talking to Aditya Ramesh, who was the co-creator of DALL-E 2 and Sam. And I just kind of went to these guys and I was just like, hey, are you going to make like a UI for this thing? Like a true UI? Are you going to go for this? Are you going to make a product? For DALL-E. Yeah. For DALL-E. Yeah. Are you going to do anything here? Because if you are going to do it, just let me know and I will stop and I'll go do something else. But if you're not going to do anything, I'll just do it. And so we had a couple of conversations around what that would look like. And then I think ultimately they decided that they were going to focus on language primarily. And I just felt like it was going to be very underinvested in. Yes. [00:13:46]Swyx: There's that sort of underinvestment from OpenAI, but also it's a different type of customer than you're used to, presumably, you know, and Mixpanel is very good at selling to B2B and developers will figure on you or not. Yeah. Was that not a concern? [00:14:00]Suhail: Well, not so much because I think that, you know, right now I would say graphics is in this very nascent phase. Like most of the customers are just like hobbyists, right? Yeah. Like it's a little bit of like a novel toy as opposed to being this like very high utility thing. But I think ultimately, if you believe that you could make it very high utility, the probably the next customers will end up being B2B. It'll probably not be like a consumer. There will certainly be a variation of this idea that's in consumer. But if your quest is to kind of make like something that surpasses human ability for graphics, like ultimately it will end up being used for business. So I think it's maybe more of a progression. In fact, for me, it's maybe more like Mixpanel started out as SMB and then very much like ended up starting to grow up towards enterprise. So for me, I think it will be a very similar progression. But yeah, I mean, the reason why I was excited about it is because it was a creative tool. I make music and it's AI. It's like something that I know I could stay up till three o'clock in the morning doing. Those are kind of like very simple bars for me. [00:14:56]Alessio: So you mentioned Dolly, Stable Diffusion. You just had Playground V2 come out two days ago. Yeah, two days ago. [00:15:02]Suhail: Two days ago. [00:15:03]Alessio: This is a model you train completely from scratch. So it's not a cheap fine tune on something. You open source everything, including the weights. Why did you decide to do it? I know you supported Stable Diffusion XL in Playground before, right? Yep. What made you want to come up with V2 and maybe some of the interesting, you know, technical research work you've done? [00:15:24]Suhail: Yeah. So I think that we continue to feel like graphics and these foundation models for anything really related to pixels, but also definitely images continues to be very underinvested. It feels a little like graphics is in like this GPT-2 moment, right? Like even GPT-3, even when GPT-3 came out, it was exciting, but it was like, what are you going to use this for? Yeah, we'll do some text classification and some semantic analysis and maybe it'll sometimes like make a summary of something and it'll hallucinate. But no one really had like a very significant like business application for GPT-3. And in images, we're kind of stuck in the same place. We're kind of like, okay, I write this thing in a box and I get some cool piece of artwork and the hands are kind of messed up and sometimes the eyes are a little weird. Maybe I'll use it for a blog post, you know, that kind of thing. The utility feels so limited. And so, you know, and then we, you sort of look at Stable Diffusion and we definitely use that model in our product and our users like it and use it and love it and enjoy it, but it hasn't gone nearly far enough. So we were kind of faced with the choice of, you know, do we wait for progress to occur or do we make that progress happen? So yeah, we kind of embarked on a plan to just decide to go train these things from scratch. And I think the community has given us so much. The community for Stable Diffusion I think is one of the most vibrant communities on the internet. It's like amazing. It feels like, I hope this is what like Homebrew Club felt like when computers like showed up because it's like amazing what that community will do and it moves so fast. I've never seen anything in my life and heard other people's stories around this where an academic research paper comes out and then like two days later, someone has sample code for it. And then two days later, there's a model. And then two days later, it's like in nine products, you know, they're all competing with each other. It's incredible to see like math symbols on an academic paper go to well-designed features in a product. So I think the community has done so much. So I think we wanted to give back to the community kind of on our way. Certainly we would train a better model than what we gave out on Tuesday, but we definitely felt like there needs to be some kind of progress in these open source models. The last kind of milestone was in July when Stable Diffusion Excel came out, but there hasn't been anything really since. Right. [00:17:34]Swyx: And there's Excel Turbo now. [00:17:35]Suhail: Well, Excel Turbo is like this distilled model, right? So it's like lower quality, but fast. You have to decide, you know, what your trade off is there. [00:17:42]Swyx: It's also a consistency model. [00:17:43]Suhail: I don't think it's a consistency model. It's like it's they did like a different thing. Yeah. I think it's like, I don't want to get quoted for this, but it's like something called ad like adversarial or something. [00:17:52]Swyx: That's exactly right. [00:17:53]Suhail: I've read something about that. Maybe it's like closer to GANs or something, but I didn't really read the full paper. But yeah, there hasn't been quite enough progress in terms of, you know, there's no multitask image model. You know, the closest thing would be something called like EmuEdit, but there's no model for that. It's just a paper that's within meta. So we did that and we also gave out pre-trained weights, which is very rare. Usually you just get the aligned model and then you have to like see if you can do anything with it. So we actually gave out, there's like a 256 pixel pre-trained stage and a 512. And we did that for academic research because we come across people all the time in academia, they have access to like one A100 or eight at best. And so if we can give them kind of like a 512 pre-trained model, our hope is that there'll be interesting novel research that occurs from that. [00:18:38]Swyx: What research do you want to happen? [00:18:39]Suhail: I would love to see more research around things that users care about tend to be things like character consistency. [00:18:45]Swyx: Between frames? [00:18:46]Suhail: More like if you have like a face. Yeah, yeah. Basically between frames, but more just like, you know, you have your face and it's in one image and then you want it to be like in another. And users are very particular and sensitive to faces changing because we know we're trained on faces as humans. Not seeing a lot of innovation, enough innovation around multitask editing. You know, there are two things like instruct pics to pics and then the EmuEdit paper that are maybe very interesting, but we certainly are not pushing the fold on that in that regard. All kinds of things like around that rotation, you know, being able to keep coherence across images, style transfer is still very limited. Just even reasoning around images, you know, what's going on in an image, that kind of thing. Things are still very, very underpowered, very nascent. So therefore the utility is very, very limited. [00:19:32]Alessio: On the 1K Prompt Benchmark, you are 2.5x prefer to Stable Diffusion XL. How do you get there? Is it better images in the training corpus? Can you maybe talk through the improvements in the model? [00:19:44]Suhail: I think they're still very early on in the recipe, but I think it's a lot of like little things and you know, every now and then there are some big important things like certainly your data quality is really, really important. So we spend a lot of time thinking about that. But I would say it's a lot of things that you kind of clean up along the way as you train your model. Everything from captions to the data that you align with after pre-train to how you're picking your data sets, how you filter your data sets. I feel like there's a lot of work in AI that doesn't really feel like AI. It just really feels like just data set filtering and systems engineering and just like, you know, and the recipe is all there, but it's like a lot of extra work to do that. I think we plan to do a Playground V 2.1, maybe either by the end of the year or early next year. And we're just like watching what the community does with the model. And then we're just going to take a lot of the things that they're unhappy about and just like fix them. You know, so for example, like maybe the eyes of people in an image don't feel right. They feel like they're a little misshapen or they're kind of blurry feeling. That's something that we already know we want to fix. So I think in that case, it's going to be about data quality. Or maybe you want to improve the kind of the dynamic range of color. You know, we want to make sure that that's like got a good range in any image. So what technique can we use there? There's different things like offset noise, pyramid noise, terminal zero, SNR, like there are all these various interesting things that you can do. So I think it's like a lot of just like tricks. Some are tricks, some are data, and some is just like cleaning. [00:21:11]Swyx: Specifically for faces, it's very common to use a pipeline rather than just train the base model more. Do you have a strong belief either way on like, oh, they should be separated out to different stages for like improving the eyes, improving the face or enhance or whatever? Or do you think like it can all be done in one model? [00:21:28]Suhail: I think we will make a unified model. Yeah, I think it will. I think we'll certainly in the end, ultimately make a unified model. There's not enough research about this. Maybe there is something out there that we haven't read. There are some bottlenecks, like for example, in the VAE, like the VAEs are ultimately like compressing these things. And so you don't know. And then you might have like a big informational information bottleneck. So maybe you would use a pixel based model, perhaps. I think we've talked to people, everyone from like Rombach to various people, Rombach trained stable diffusion. I think there's like a big question around the architecture of these things. It's still kind of unknown, right? Like we've got transformers and we've got like a GPT architecture model, but then there's this like weird thing that's also seemingly working with diffusion. And so, you know, are we going to use vision transformers? Are we going to move to pixel based models? Is there a different kind of architecture? We don't really, I don't think there have been enough experiments. Still? Oh my God. [00:22:21]Swyx: Yeah. [00:22:22]Suhail: That's surprising. I think it's very computationally expensive to do a pipeline model where you're like fixing the eyes and you're fixing the mouth and you're fixing the hands. [00:22:29]Swyx: That's what everyone does as far as I understand. [00:22:31]Suhail: I'm not exactly sure what you mean, but if you mean like you get an image and then you will like make another model specifically to fix a face, that's fairly computationally expensive. And I think it's like not probably not the right way. Yeah. And it doesn't generalize very well. Now you have to pick all these different things. [00:22:45]Swyx: Yeah. You're just kind of glomming things on together. Yeah. Like when I look at AI artists, like that's what they do. [00:22:50]Suhail: Ah, yeah, yeah, yeah. They'll do things like, you know, I think a lot of ARs will do control net tiling to do kind of generative upscaling of all these different pieces of the image. Yeah. And I think these are all just like, they're all hacks ultimately in the end. I mean, it just to me, it's like, let's go back to where we were just three years, four years ago with where deep learning was at and where language was that, you know, it's the same thing. It's like we were like, okay, well, I'll just train these very narrow models to try to do these things and kind of ensemble them or pipeline them to try to get to a best in class result. And here we are with like where the models are gigantic and like very capable of solving huge amounts of tasks when given like lots of great data. [00:23:28]Alessio: You also released a new benchmark called MJHQ30K for automatic evaluation of a model's aesthetic quality. I have one question. The data set that you use for the benchmark is from Midjourney. Yes. You have 10 categories. How do you think about the Playground model, Midjourney, like, are you competitors? [00:23:47]Suhail: There are a lot of people, a lot of people in research, they like to compare themselves to something they know they can beat, right? Maybe this is the best reason why it can be helpful to not be a researcher also sometimes like I'm not trained as a researcher, I don't have a PhD in anything AI related, for example. But I think if you care about products and you care about your users, then the most important thing that you want to figure out is like everyone has to acknowledge that Midjourney is very good. They are the best at this thing. I'm happy to admit that. I have no problem admitting that. Just easy. It's very visual to tell. So I think it's incumbent on us to try to compare ourselves to the thing that's best, even if we lose, even if we're not the best. At some point, if we are able to surpass Midjourney, then we only have ourselves to compare ourselves to. But on First Blush, I think it's worth comparing yourself to maybe the best thing and try to find like a really fair way of doing that. So I think more people should try to do that. I definitely don't think you should be kind of comparing yourself on like some Google model or some old SD, Stable Diffusion model and be like, look, we beat Stable Diffusion 1.5. I think users ultimately want care, how close are you getting to the thing that people mostly agree with? So we put out that benchmark for no other reason to say like, this seems like a worthy thing for us to at least try, for people to try to get to. And then if we surpass it, great, we'll come up with another one. [00:25:06]Alessio: Yeah, no, that's awesome. And you killed Stable Diffusion Excel and everything. In the benchmark chart, it says Playground V2 1024 pixel dash aesthetic. Do you have kind of like, yeah, style fine tunes or like what's the dash aesthetic for? [00:25:21]Suhail: We debated this, maybe we named it wrong or something, but we were like, how do we help people realize the model that's aligned versus the models that weren't? Because we gave out pre-trained models, we didn't want people to like use those. So that's why they're called base. And then the aesthetic model, yeah, we wanted people to pick up the thing that makes things pretty. Who wouldn't want the thing that's aesthetic? But if there's a better name, we're definitely open to feedback. No, no, that's cool. [00:25:46]Alessio: I was using the product. You also have the style filter and you have all these different styles. And it seems like the styles are tied to the model. So there's some like SDXL styles, there's some Playground V2 styles. Can you maybe give listeners an overview of how that works? Because in language, there's not this idea of like style, right? Versus like in vision model, there is, and you cannot get certain styles in different [00:26:11]Suhail: models. [00:26:12]Alessio: So how do styles emerge and how do you categorize them and find them? [00:26:15]Suhail: Yeah, I mean, it's so fun having a community where people are just trying a model. Like it's only been two days for Playground V2. And we actually don't know what the model's capable of and not capable of. You know, we certainly see problems with it. But we have yet to see what emergent behavior is. I mean, we've just sort of discovered that it takes about like a week before you start to see like new things. I think like a lot of that style kind of emerges after that week, where you start to see, you know, there's some styles that are very like well known to us, like maybe like pixel art is a well known style. Photorealism is like another one that's like well known to us. But there are some styles that cannot be easily named. You know, it's not as simple as like, okay, that's an anime style. It's very visual. And in the end, you end up making up the name for what that style represents. And so the community kind of shapes itself around these different things. And so if anyone that's into stable diffusion and into building anything with graphics and stuff with these models, you know, you might have heard of like Proto Vision or Dream Shaper, some of these weird names, but they're just invented by these authors. But they have a sort of je ne sais quoi that, you know, appeals to users. [00:27:26]Swyx: Because it like roughly embeds to what you what you want. [00:27:29]Suhail: I guess so. I mean, it's like, you know, there's one of my favorite ones that's fine tuned. It's not made by us. It's called like Starlight XL. It's just this beautiful model. It's got really great color contrast and visual elements. And the users love it. I love it. And it's so hard. I think that's like a very big open question with graphics that I'm not totally sure how we'll solve. I don't know. It's, it's like an evolving situation too, because styles get boring, right? They get fatigued. Like it's like listening to the same style of pop song. I try to relate to graphics a little bit like with music, because I think it gives you a little bit of a different shape to things. Like it's not as if we just have pop music, rap music and country music, like all of these, like the EDM genre alone has like sub genres. And I think that's very true in graphics and painting and art and anything that we're doing. There's just these sub genres, even if we can't quite always name them. But I think they are emergent from the community, which is why we're so always happy to work with the community. [00:28:26]Swyx: That is a struggle. You know, coming back to this, like B2B versus B2C thing, B2C, you're going to have a huge amount of diversity and then it's going to reduce as you get towards more sort of B2B type use cases. I'm making this up here. So like you might be optimizing for a thing that you may eventually not need. [00:28:42]Suhail: Yeah, possibly. Yeah, possibly. I think like a simple thing with startups is that I worry sometimes by trying to be overly ambitious and like really scrutinizing like what something is in its most nascent phase that you miss the most ambitious thing you could have done. Like just having like very basic curiosity with something very small can like kind of lead you to something amazing. Like Einstein definitely did that. And then he like, you know, he basically won all the prizes and got everything he wanted and then basically did like kind of didn't really. He can dismiss quantum and then just kind of was still searching, you know, for the unifying theory. And he like had this quest. I think that happens a lot with like Nobel Prize people. I think there's like a term for it that I forget. I actually wanted to go after a toy almost intentionally so long as that I could see, I could imagine that it would lead to something very, very large later. Like I said, it's very hobbyist, but you need to start somewhere. You need to start with something that has a big gravitational pull, even if these hobbyists aren't likely to be the people that, you know, have a way to monetize it or whatever, even if they're, but they're doing it for fun. So there's something, something there that I think is really important. But I agree with you that, you know, in time we will absolutely focus on more utilitarian things like things that are more related to editing feats that are much harder. And so I think like a very simple use case is just, you know, I'm not a graphics designer. It seems like very simple that like you, if we could give you the ability to do really complex graphics without skill, wouldn't you want that? You know, like my wife the other day was set, you know, said, I wish Playground was better. When are you guys going to have a feature where like we could make my son, his name's Devin, smile when he was not smiling in the picture for the holiday card. Right. You know, just being able to highlight his, his mouth and just say like, make him smile. Like why can't we do that with like high fidelity and coherence, little things like that, all the way to putting you in completely different scenarios. [00:30:35]Swyx: Is that true? Can we not do that in painting? [00:30:37]Suhail: You can do in painting, but the quality is just so bad. Yeah. It's just really terrible quality. You know, it's like you'll do it five times and it'll still like kind of look like crooked or just artifact. Part of it's like, you know, the lips on the face, there's such little information there. So small that the models really struggle with it. Yeah. [00:30:55]Swyx: Make the picture smaller and you don't see it. That's my trick. I don't know. [00:30:59]Suhail: Yeah. Yeah. That's true. Or, you know, you could take that region and make it really big and then like say it's a mouth and then like shrink it. It feels like you're wrestling with it more than it's doing something that kind of surprises you. [00:31:12]Swyx: Yeah. It feels like you are very much the internal tastemaker, like you carry in your head this vision for what a good art model should look like. Do you find it hard to like communicate it to like your team and other people? Just because it's obviously it's hard to put into words like we just said. [00:31:26]Suhail: Yeah. It's very hard to explain. Images have such high bitrate compared to just words and we don't have enough words to describe these things. It's not terribly difficult. I think everyone on the team, if they don't have good kind of like judgment taste or like an eye for some of these things, they're like steadily building it because they have no choice. Right. So in that realm, I don't worry too much, actually. Like everyone is kind of like learning to get the eye is what I would call it. But I also have, you know, my own narrow taste. Like I don't represent the whole population either. [00:31:59]Swyx: When you benchmark models, you know, like this benchmark we're talking about, we use FID. Yeah. Input distance. OK. That's one measure. But like it doesn't capture anything you just said about smiles. [00:32:08]Suhail: Yeah. FID is generally a bad metric. It's good up to a point and then it kind of like is irrelevant. Yeah. [00:32:14]Swyx: And then so are there any other metrics that you like apart from vibes? I'm always looking for alternatives to vibes because vibes don't scale, you know. [00:32:22]Suhail: You know, it might be fun to kind of talk about this because it's actually kind of fresh. So up till now, we haven't needed to do a ton of like benchmarking because we hadn't trained our own model and now we have. So now what? What does that mean? How do we evaluate it? And, you know, we're kind of like living with the last 48, 72 hours of going, did the way that we benchmark actually succeed? [00:32:43]Swyx: Did it deliver? [00:32:44]Suhail: Right. You know, like I think Gemini just came out. They just put out a bunch of benchmarks. But all these benchmarks are just an approximation of how you think it's going to end up with real world performance. And I think that's like very fascinating to me. So if you fake that benchmark, you'll still end up in a really bad scenario at the end of the day. And so, you know, one of the benchmarks we did was we kind of curated like a thousand prompts. And I think that's kind of what we published in our blog post, you know, of all these tasks that we a lot of some of them are curated by our team where we know the models all suck at it. Like my favorite prompt that no model is really capable of is a horse riding an astronaut, the inverse one. And it's really, really hard to do. [00:33:22]Swyx: Not in data. [00:33:23]Suhail: You know, another one is like a giraffe underneath a microwave. How does that work? Right. There's so many of these little funny ones. We do. We have prompts that are just like misspellings of things. Yeah. We'll figure out if the models will figure it out. [00:33:36]Swyx: They should embed to the same space. [00:33:39]Suhail: Yeah. And just like all these very interesting weirdo things. And so we have so many of these and then we kind of like evaluate whether the models are any good at it. And the reality is that they're all bad at it. And so then you're just picking the most aesthetic image. We're still at the beginning of building like the best benchmark we can that aligns most with just user happiness, I think, because we're not we're not like putting these in papers and trying to like win, you know, I don't know, awards at ICCV or something if they have awards. You could. [00:34:05]Swyx: That's absolutely a valid strategy. [00:34:06]Suhail: Yeah, you could. But I don't think it could correlate necessarily with the impact we want to have on humanity. I think we're still evolving whatever our benchmarks are. So the first benchmark was just like very difficult tasks that we know the models are bad at. Can we come up with a thousand of these, whether they're hand rated and some of them are generated? And then can we ask the users, like, how do we do? And then we wanted to use a benchmark like party prompts. We mostly did that so people in academia could measure their models against ours versus others. But yeah, I mean, fit is pretty bad. And I think in terms of vibes, it's like you put out the model and then you try to see like what users make. And I think my sense is that we're going to take all the things that we notice that the users kind of were failing at and try to find like new ways to measure that, whether that's like a smile or, you know, color contrast or lighting. One benefit of Playground is that we have users making millions of images every single day. And so we can just ask them for like a post generation feedback. Yeah, we can just ask them. We can just say, like, how good was the lighting here? How was the subject? How was the background? [00:35:06]Swyx: Like a proper form of like, it's just like you make it, you come to our site, you make [00:35:10]Suhail: an image and then we say, and then maybe randomly you just say, hey, you know, like, how was the color and contrast of this image? And you say it was not very good, just tell us. So I think I think we can get like tens of thousands of these evaluations every single day to truly measure real world performance as opposed to just like benchmark performance. I would like to publish hopefully next year. I think we will try to publish a benchmark that anyone could use, that we evaluate ourselves on and that other people can, that we think does a good job of approximating real world performance because we've tried it and done it and noticed that it did. Yeah. I think we will do that. [00:35:45]Swyx: I personally have a few like categories that I consider special. You know, you know, you have like animals, art, fashion, food. There are some categories which I consider like a different tier of image. Top among them is text in images. How do you think about that? So one of the big wow moments for me, something I've been looking out for the entire year is just the progress of text and images. Like, can you write in an image? Yeah. And Ideogram came out recently, which had decent but not perfect text and images. Dolly3 had improved some and all they said in their paper was that they just included more text in the data set and it just worked. I was like, that's just lazy. But anyway, do you care about that? Because I don't see any of that in like your sample. Yeah, yeah. [00:36:27]Suhail: The V2 model was mostly focused on image quality versus like the feature of text synthesis. [00:36:33]Swyx: Well, as a business user, I care a lot about that. [00:36:35]Suhail: Yeah. Yeah. I'm very excited about text synthesis. And yeah, I think Ideogram has done a good job of maybe the best job. Dolly has like a hit rate. Yes. You know, like sometimes it's Egyptian letters. Yeah. I'm very excited about text synthesis. You know, I don't have much to say on it just yet. You know, you don't want just text effects. I think where this has to go is it has to be like you could like write little tiny pieces of text like on like a milk carton. That's maybe not even the focal point of a scene. I think that's like a very hard task that, you know, if you could do something like that, then there's a lot of other possibilities. Well, you don't have to zero shot it. [00:37:09]Swyx: You can just be like here and focus on this. [00:37:12]Suhail: Sure. Yeah, yeah. Definitely. Yeah. [00:37:16]Swyx: Yeah. So I think text synthesis would be very exciting. I'll also flag that Max Wolf, MiniMaxxier, which you must have come across his work. He's done a lot of stuff about using like logo masks that then map onto food and vegetables. And it looks like text, which can be pretty fun. [00:37:29]Suhail: That's the wonderful thing about like the open source community is that you get things like control net and then you see all these people do these just amazing things with control net. And then you wonder, I think from our point of view, we sort of go that that's really wonderful. But how do we end up with like a unified model that can do that? What are the bottlenecks? What are the issues? The community ultimately has very limited resources. And so they need these kinds of like workaround research ideas to get there. But yeah. [00:37:55]Swyx: Are techniques like control net portable to your architecture? [00:37:58]Suhail: Definitely. Yeah. We kept the Playground V2 exactly the same as SDXL. Not because not out of laziness, but just because we knew that the community already had tools. You know, all you have to do is maybe change a string in your code and then, you know, retrain a control net for it. So it was very intentional to do that. We didn't want to fragment the community with different architectures. Yeah. [00:38:16]Swyx: So basically, I'm going to go over three more categories. One is UIs, like app UIs, like mock UIs. Third is not safe for work, and then copyrighted stuff. I don't know if you care to comment on any of those. [00:38:28]Suhail: I think the NSFW kind of like safety stuff is really important. I kind of think that one of the biggest risks kind of going into maybe the U.S. election year will probably be very interrelated with like graphics, audio, video. I think it's going to be very hard to explain, you know, to a family relative who's not kind of in our world. And our world is like sometimes very, you know, we think it's very big, but it's very tiny compared to the rest of the world. Some people like there's still lots of humanity who have no idea what chat GPT is. And I think it's going to be very hard to explain, you know, to your uncle, aunt, whoever, you know, hey, I saw President Biden say this thing on a video, you know, I can't believe, you know, he said that. I think that's going to be a very troubling thing going into the world next year, the year after. [00:39:12]Swyx: That's more like a risk thing, like deepfakes, faking, political faking. But there's a lot of studies on how for most businesses, you don't want to train on not safe for work images, except that it makes you really good at bodies. [00:39:24]Suhail: Personally, we filter out NSFW type of images in our data set so that it's, you know, so our safety filter stuff doesn't have to work as hard. [00:39:32]Swyx: But you've heard this argument that not safe for work images are very good at human anatomy, which you do want to be good at. [00:39:38]Suhail: It's not like necessarily a bad thing to train on that data. It's more about like how you go and use it. That's why I was kind of talking about safety, you know, in part, because there are very terrible things that can happen in the world. If you have an extremely powerful graphics model, you know, suddenly like you can kind of imagine, you know, now if you can like generate nudes and then there's like you could do very character consistent things with faces, like what does that lead to? Yeah. And so I tend to think more what occurs after that, right? Even if you train on, let's say, you know, new data, if it does something to kind of help, there's nothing wrong with the human anatomy, it's very valid for a model to learn that. But then it's kind of like, how does that get used? And, you know, I won't bring up all of the very, very unsavory, terrible things that we see on a daily basis on the site, but I think it's more about what occurs. And so we, you know, we just recently did like a big sprint on safety. It's very difficult with graphics and art, right? Because there is tasteful art that has nudity, right? They're all over in museums, like, you know, there's very valid situations for that. And then there's the things that are the gray line of that, you know, what I might not find tasteful, someone might be like, that is completely tasteful, right? And then there are things that are way over the line. And then there are things that maybe you or, you know, maybe I would be okay with, but society isn't, you know? So where does that kind of end up on the spectrum of things? I think it's really hard with art. Sometimes even if you have like things that are not nude, if a child goes to your site, scrolls down some images, you know, classrooms of kids, you know, using our product, it's a really difficult problem. And it stretches mostly culture, society, politics, everything. [00:41:14]Alessio: Another favorite topic of our listeners is UX and AI. And I think you're probably one of the best all-inclusive editors for these things. So you don't just have the prompt, images come out, you pray, and now you do it again. First, you let people pick a seed so they can kind of have semi-repeatable generation. You also have, yeah, you can pick how many images and then you leave all of them in the canvas. And then you have kind of like this box, the generation box, and you can even cross between them and outpaint. There's all these things. How did you get here? You know, most people are kind of like, give me text, I give you image. You know, you're like, these are all the tools for you. [00:41:54]Suhail: Even though we were trying to make a graphics foundation model, I think we think that we're also trying to like re-imagine like what a graphics editor might look like given the change in technology. So, you know, I don't think we're trying to build Photoshop, but it's the only thing that we could say that people are largely familiar with. Oh, okay, there's Photoshop. What would Photoshop compare itself to pre-computer? I don't know, right? It's like, or kind of like a canvas, but you know, there's these menu options and you can use your mouse. What's a mouse? So I think that we're trying to re-imagine what a graphics editor might look like, not just for the fun of it, but because we kind of have no choice. Like there's this idea in image generation where you can generate images. That's like a super weird thing. What is that in Photoshop, right? You have to wait right now for the time being, but the wait is worth it often for a lot of people because they can't make that with their own skills. So I think it goes back to, you know, how we started the company, which was kind of looking at GPT-3's Playground, that the reason why we're named Playground is a homage to that actually. And, you know, it's like, shouldn't these products be more visual? These prompt boxes are like a terminal window, right? We're kind of at this weird point where it's just like MS-DOS. I remember my mom using MS-DOS and I memorized the keywords, like DIR, LS, all those things, right? It feels a little like we're there, right? Prompt engineering, parentheses to say beautiful or whatever, waits the word token more in the model or whatever. That's like super strange. I think a large portion of humanity would agree that that's not user-friendly, right? So how do we think about the products to be more user-friendly? Well, sure, you know, sure, it would be nice if I wanted to get rid of, like, the headphones on my head, you know, it'd be nice to mask it and then say, you know, can you remove the headphones? You know, if I want to grow, expand the image, you know, how can we make that feel easier without typing lots of words and being really confused? I don't even think we've nailed the UI UX yet. Part of that is because we're still experimenting. And part of that is because the model and the technology is going to get better. And whatever felt like the right UX six months ago is going to feel very broken now. So that's a little bit of how we got there is kind of saying, does everything have to be like a prompt in a box? Or can we do things that make it very intuitive for users? [00:44:03]Alessio: How do you decide what to give access to? So you have things like an expand prompt, which Dally 3 just does. It doesn't let you decide whether you should or not. [00:44:13]Swyx: As in, like, rewrites your prompts for you. [00:44:15]Suhail: Yeah, for that feature, I think once we get it to be cheaper, we'll probably just give it up. We'll probably just give it away. But we also decided something that might be a little bit different. We noticed that most of image generation is just, like, kind of casual. You know, it's in WhatsApp. It's, you know, it's in a Discord bot somewhere with Majorny. It's in ChatGPT. One of the differentiators I think we provide is at the expense of just lots of users necessarily. Mainstream consumers is that we provide as much, like, power and tweakability and configurability as possible. So the only reason why it's a toggle, because we know that users might want to use it and might not want to use it. There's some really powerful power user hobbyists that know what they're doing. And then there's a lot of people that just want something that looks cool, but they don't know how to prompt. And so I think a lot of Playground is more about going after that core user base that, like, knows, has a little bit more savviness and how to use these tools. You know, the average Dell user is probably not going to use ControlNet. They probably don't even know what that is. And so I think that, like, as the models get more powerful, as there's more tooling, hopefully you'll imagine a new sort of AI-first graphics editor that's just as, like, powerful and configurable as Photoshop. And you might have to master a new kind of tool. [00:45:28]Swyx: There's so many things I could go bounce off of. One, you mentioned about waiting. We have to kind of somewhat address the elephant in the room. Consistency models have been blowing up the past month. How do you think about integrating that? Obviously, there's a lot of other companies also trying to beat you to that space as well. [00:45:44]Suhail: I think we were the first company to integrate it. Ah, OK. [00:45:47]Swyx: Yeah. I didn't see your demo. [00:45:49]Suhail: Oops. Yeah, yeah. Well, we integrated it in a different way. OK. There are, like, 10 companies right now that have kind of tried to do, like, interactive editing, where you can, like, draw on the left side and then you get an image on the right side. We decided to kind of, like, wait and see whether there's, like, true utility on that. We have a different feature that's, like, unique in our product that is called preview rendering. And so you go to the product and you say, you know, we're like, what is the most common use case? The most common use case is you write a prompt and then you get an image. But what's the most annoying thing about that? The most annoying thing is, like, it feels like a slot machine, right? You're like, OK, I'm going to put it in and maybe I'll get something cool. So we did something that seemed a lot simpler, but a lot more relevant to how users already use these products, which is preview rendering. You toggle it on and it will show you a render of the image. And then graphics tools already have this. Like, if you use Cinema 4D or After Effects or something, it's called viewport rendering. And so we try to take something that exists in the real world that has familiarity and say, OK, you're going to get a rough sense of an early preview of this thing. And then when you're ready to generate, we're going to try to be as coherent about that image that you saw. That way, you're not spending so much time just like pulling down the slot machine lever. I think we were the first company to actually ship a quick LCM thing. Yeah, we were very excited about it. So we shipped it very quick. Yeah. [00:47:03]Swyx: Well, the demos I've been seeing, it's not like a preview necessarily. They're almost using it to animate their generations. Like, because you can kind of move shapes. [00:47:11]Suhail: Yeah, yeah, they're like doing it. They're animating it. But they're sort of showing, like, if I move a moon, you know, can I? [00:47:17]Swyx: I don't know. To me, it unlocks video in a way. [00:47:20]Suhail: Yeah. But the video models are already so much better than that. Yeah. [00:47:23]Swyx: There's another one, which I think is general ecosystem of Loras, right? Civit is obviously the most popular repository of Loras. How do you think about interacting with that ecosystem? [00:47:34]Suhail: The guy that did Lora, not the guy that invented Loras, but the person that brought Loras to Stable Diffusion actually works with us on some projects. His name is Simu. Shout out to Simu. And I think Loras are wonderful. Obviously, fine tuning all these Dreambooth models and such, it's just so heavy. And it's obvious in our conversation around styles and vibes, it's very hard to evaluate the artistry of these things. Loras give people this wonderful opportunity to create sub-genres of art. And I think they're amazing. Any graphics tool, any kind of thing that's expressing art has to provide some level of customization to its user base that goes beyond just typing Greg Rakowski in a prompt. We have to give more than that. It's not like users want to type these real artist names. It's that they don't know how else to get an image that looks interesting. They truly want originality and uniqueness. And I think Loras provide that. And they provide it in a very nice, scalable way. I hope that we find something even better than Loras in the long term, because there are still weaknesses to Loras, but I think they do a good job for now. Yeah. [00:48:39]Swyx: And so you would never compete with Civit? You would just kind of let people import? [00:48:43]Suhail: Civit's a site where all these things get kind of hosted by the community, right? And so, yeah, we'll often pull down some of the best things there. I think when we have a significantly better model, we will certainly build something that gets closer to that. Again, I go back to saying just I still think this is very nascent. Things are very underpowered, right? Loras are not easy to train. They're easy for an engineer. It sure would be nicer if I could just pick five or six reference images, right? And they might even be five or six different reference images that are not... They're just very different. They communicate a style, but they're actually like... It's like a mood board, right? And you have to be kind of an engineer almost to train these Loras or go to some site and be technically savvy, at least. It seems like it'd be much better if I could say, I love this style. Here are five images and you tell the model, like, this is what I want. And the model gives you something that's very aligned with what your style is, what you're talking about. And it's a style you couldn't even communicate, right? There's n
SF folks: join us at the AI Engineer Foundation's Emergency Hackathon tomorrow and consider the Newton if you'd like to cowork in the heart of the Cerebral Arena.Our community page is up to date as usual!~800,000 developers watched OpenAI Dev Day, ~8,000 of whom listened along live on our ThursdAI x Latent Space, and ~800 of whom got tickets to attend in person:OpenAI's first developer conference easily surpassed most people's lowballed expectations - they simply did everything short of announcing GPT-5, including:* ChatGPT (the consumer facing product)* GPT4 Turbo already in ChatGPT (running faster, with an April 2023 cutoff), all noticed by users weeks before the conference* Model picker eliminated, God Model chooses for you* GPTs - “tailored version of ChatGPT for a specific purpose” - stopping short of “Agents”. With custom instructions, expanded knowledge, and actions, and an intuitive no-code GPT Builder UI (we tried all these on our livestream yesterday and found some issues, but also were able to ship interesting GPTs very quickly) and a GPT store with revenue sharing (an important criticism we focused on in our episode on ChatGPT Plugins)* API (the developer facing product)* APIs for Dall-E 3, GPT4 Vision, Code Interpreter (RIP Advanced Data Analysis), GPT4 Finetuning and (surprise!) Text to Speech* many thought each of these would take much longer to arrive* usable in curl and in playground* BYO Interpreter + Async Agents?* Assistant API: stateful API backing “GPTs” like apps, with support for calling multiple tools in parallel, persistent Threads (storing message history, unlimited context window with some asterisks), and uploading/accessing Files (with a possibly-too-simple RAG algorithm, and expensive pricing)* Whisper 3 announced and open sourced (HuggingFace recap)* Price drops for a bunch of things!* Misc: Custom Models for big spending ($2-3m) customers, Copyright Shield, SatyaThe progress here feels fast, but it is mostly (incredible) last-mile execution on model capabilities that we already knew to exist. On reflection it is important to understand that the one guiding principle of OpenAI, even more than being Open (we address that in part 2 of today's pod), is that slow takeoff of AGI is the best scenario for humanity, and that this is what slow takeoff looks like:When introducing GPTs, Sam was careful to assert that “gradual iterative deployment is the best way to address the safety challenges with AI”:This is why, in fact, GPTs and Assistants are intentionally underpowered, and it is a useful exercise to consider what else OpenAI continues to consider dangerous (for example, many people consider a while(true) loop a core driver of an agent, which GPTs conspicuously lack, though Lilian Weng of OpenAI does not).We convened the crew to deliver the best recap of OpenAI Dev Day in Latent Space pod style, with a 1hr deep dive with the Functions pod crew from 5 months ago, and then another hour with past and future guests live from the venue itself, discussing various elements of how these updates affect their thinking and startups. Enjoy!Show Notes* swyx live thread (see pinned messages in Twitter Space for extra links from community)* Newton AI Coworking Interest Form in the heart of the Cerebral ArenaTimestamps* [00:00:00] Introduction* [00:01:59] Part I: Latent Space Pod Recap* [00:06:16] GPT4 Turbo and Assistant API* [00:13:45] JSON mode* [00:15:39] Plugins vs GPT Actions* [00:16:48] What is a "GPT"?* [00:21:02] Criticism: the God Model* [00:22:48] Criticism: ChatGPT changes* [00:25:59] "GPTs" is a genius marketing move* [00:26:59] RIP Advanced Data Analysis* [00:28:50] GPT Creator as AI Prompt Engineer* [00:31:16] Zapier and Prompt Injection* [00:34:09] Copyright Shield* [00:38:03] Sharable GPTs solve the API distribution issue* [00:39:07] Voice* [00:44:59] Vision* [00:49:48] In person experience* [00:55:11] Part II: Spot Interviews* [00:56:05] Jim Fan (Nvidia - High Level Takeaways)* [01:05:35] Raza Habib (Humanloop) - Foundation Model Ops* [01:13:59] Surya Dantuluri (Stealth) - RIP Plugins* [01:21:20] Reid Robinson (Zapier) - AI Actions for GPTs* [01:31:19] Div Garg (MultiOn) - GPT4V for Agents* [01:37:15] Louis Knight-Webb (Bloop.ai) - AI Code Search* [01:49:21] Shreya Rajpal (Guardrails.ai) - on Hallucinations* [01:59:51] Alex Volkov (Weights & Biases, ThursdAI) - "Keeping AI Open"* [02:10:26] Rahul Sonwalkar (Julius AI) - Advice for FoundersTranscript[00:00:00] Introduction[00:00:00] swyx: Hey everyone, this is Swyx coming at you live from the Newton, which is in the heart of the Cerebral Arena. It is a new AI co working space that I and a couple of friends are working out of. There are hot desks available if you're interested, just check the show notes. But otherwise, obviously, it's been 24 hours since the opening of Dev Day, a lot of hot reactions and longstanding tradition, one of the longest traditions we've had.[00:00:29] And the latent space pod is to convene emergency sessions and record the live thoughts of developers and founders going through and processing in real time. I think a lot of the roles of podcasts isn't as perfect information delivery channels, but really as an audio and oral history of what's going on as it happens, while it happens.[00:00:49] So this one's a little unusual. Previously, we only just gathered on Twitter Spaces, and then just had a bunch of people. The last one was the Code Interpreter one with 22, 000 people showed up. But this one is a little bit more complicated because there's an in person element and then a online element.[00:01:06] So this is a two part episode. The first part is a recorded session between our latent space people and Simon Willison and Alex Volkoff from the Thursday iPod, just kind of recapping the day. But then also, as the second hour, I managed to get a bunch of interviews with previous guests on the pod who we're still friends with and some new people that we haven't yet had on the pod.[00:01:28] But I wanted to just get their quick reactions because most of you have known and loved Jim Fan and Div Garg and a bunch of other folks that we interviewed. So I just want to, I'm excited to introduce To you the broader scope of what it's like to be at OpenAI Dev Day in person bring you the audio experience as well as give you some of the thoughts that developers are having as they process the announcements from OpenAI.[00:01:51] So first off, we have the Mainspace Pod recap. One hour of open I dev day.[00:01:59] Part I: Latent Space Pod Recap[00:01:59] Alessio: Hey. Welcome to the Latents Based Podcast an emergency edition after OpenAI Dev Day. This is Alessio, partner and CTO of Residence at Decibel Partners, and as usual, I'm joined by Swyx, founder of SmallAI. Hey,[00:02:12] swyx: and today we have two special guests with us covering all the latest and greatest.[00:02:17] We, we, we love to get our band together and recap things, especially when they're big. And it seems like that every three months we have to do this. So Alex, welcome. From Thursday AI we've been collaborating a lot on the Twitter spaces and welcome Simon from many, many things, but also I think you're the first person to not, not make four appearances on our pod.[00:02:37] Oh, wow. I feel privileged. So welcome. Yeah, I think we're all there yesterday. How... Do we feel like, what do you want to kick off with? Maybe Simon, you want to, you want to take first and then Alex. Sure. Yeah. I mean,[00:02:47] Simon Willison: yesterday was quite exhausting, quite frankly. I feel like it's going to take us as a community several months just to completely absorb all of the stuff that they dropped on us in one giant.[00:02:57] Giant batch. It's particularly impressive considering they launched a ton of features, what, three or four weeks ago? ChatGPT voice and the combined mode and all of that kind of thing. And then they followed up with everything from yesterday. That said, now that I've started digging into the stuff that they released yesterday, some of it is clearly in need of a bit more polish.[00:03:15] You know, the the, the reality of what they look, what they released is I'd say about 80 percent of, of what it looks like it was yesterday, which is still impressive. You know, don't get me wrong. This is an amazing batch of stuff, but there are definitely problems and sharp edges that we need to file off.[00:03:29] And there are things that we still need to figure out before we can take advantage of all of this.[00:03:33] swyx: Yeah, agreed, agreed. And we can go into those, those sharp edges in a bit. I just want to pop over to Alex. What are your thoughts?[00:03:39] Alex Volkov: So, interestingly, even folks at OpenAI, there's like several booths and help desks so you can go in and ask people, like, actual changes and people, like, they could follow up with, like, the right people in OpenAI and, like, answer you back, etc.[00:03:52] Even some of them didn't know about all the changes. So I went to the voice and audio booth. And I asked them about, like, hey, is Whisper 3 that was announced by Sam Altman on stage just, like, briefly, will that be open source? Because I'm, you know, I love using Whisper. And they're like, oh, did we open source?[00:04:06] Did we talk about Whisper 3? Like, some of them didn't even know what they were releasing. But overall, I felt it was a very tightly run event. Like, I was really impressed. Shawn, we were sitting in the audience, and you, like, pointed at the clock to me when they finished. They finished, like, on... And this was after like doing some extra stuff.[00:04:24] Very, very impressive for a first event. Like I was absolutely like, Good job.[00:04:30] swyx: Yeah, apparently it was their first keynote and someone, I think, was it you that told me that this is what happens if you have A president of Y Combinator do a proper keynote you know, having seen many, many, many presentations by other startups this is sort of the sort of master stroke.[00:04:46] Yeah, Alessio, I think you were watching remotely. Yeah, we were at the Newton. Yeah, the Newton.[00:04:52] Alessio: Yeah, I think we had 60 people here at the watch party, so it was quite a big crowd. Mixed reaction from different... Founders and people, depending on what was being announced on the page. But I think everybody walked away kind of really happy with a new layer of interfaces they can use.[00:05:11] I think, to me, the biggest takeaway was like and I was talking with Mike Conover, another friend of the podcast, about this is they're kind of staying in the single threaded, like, synchronous use cases lane, you know? Like, the GPDs announcement are all like... Still, chatbase, one on one synchronous things.[00:05:28] I was expecting, maybe, something about async things, like background running agents, things like that. But it's interesting to see there was nothing of that, so. I think if you're a founder in that space, you're, you're quite excited. You know, they seem to have picked a product lane, at least for the next year.[00:05:45] So, if you're working on... Async experiences, so things working in the background, things that are not co pilot like, I think you're quite excited to have them be a lot cheaper now.[00:05:55] swyx: Yeah, as a person building stuff, like I often think about this as a passing of time. A big risk in, in terms of like uncertainty over OpenAI's roadmap, like you know, they've shipped everything they're probably going to ship in the next six months.[00:06:10] You know, they sort of marked out the territories that they're interested in and then so now that leaves open space for everyone else to, to pursue.[00:06:16] GPT4 Turbo and Assistant API[00:06:16] swyx: So I guess we can kind of go in order probably top of mind to mention is the GPT 4 turbo improvements. Yeah, so longer context length, cheaper price.[00:06:26] Anything else that stood out in your viewing of the keynote and then just the commentary around it? I[00:06:34] Alex Volkov: was I was waiting for Stateful. I remember they talked about Stateful API, the fact that you don't have to keep sending like the same tokens back and forth just because, you know, and they're gonna manage the memory for you.[00:06:45] So I was waiting for that. I knew it was coming at some point. I was kind of... I did not expect it to come at this event. I don't know why. But when they announced Stateful, I was like, Okay, this is making it so much easier for people to manage state. The whole threads I don't want to mix between the two things, so maybe you guys can clarify, but there's the GPT 4 tool, which is the model that has the capabilities, In a whopping 128k, like, context length, right?[00:07:11] It's huge. It's like two and a half books. But also, you know, faster, cheaper, etc. I haven't yet tested the fasterness, but like, everybody's excited about that. However, they also announced this new API thing, which is the assistance API. And part of it is threads, which is, we'll manage the thread for you.[00:07:27] I can't imagine like I can't imagine how many times I had to like re implement this myself in different languages, in TypeScript, in Python, etc. And now it's like, it's so easy. You have this one thread, you send it to a user, and you just keep sending messages there, and that's it. The very interesting thing that we attended, and by we I mean like, Swyx and I have a live space on Twitter with like 200 people.[00:07:46] So it's like me, Swyx, and 200 people in our earphones with us as well. They kept asking like, well, how's the price happening? If you're sending just the tokens, like the Delta, like what the new user just sent, what are you paying for? And I went to OpenAI people, and I was like, hey... How do we get paid for this?[00:08:01] And nobody knew, nobody knew, and I finally got an answer. You still pay for the whole context that you have inside the thread. You still pay for all this, but now it's a little bit more complex for you to kind of count with TikTok, right? So you have to hit another API endpoint to get the whole thread of what the context is.[00:08:17] Then TikTokonize this, run this in TikTok, and then calculate. This is now the new way, officially, for OpenAI. But I really did, like, have to go and find this. They didn't know a lot of, like, how the pricing is. Ouch! Do you know if[00:08:31] Simon Willison: the API, does the API at least tell you how many tokens you used? Or is it entirely up to you to do the accounting?[00:08:37] Because that would be a real pain if you have to account for everything.[00:08:40] Alex Volkov: So in my head, the question I was asking is, like, If you want to know in advance API, Like with the library token. If you want to count in advance and, like, make a decision, like, in advance on that, how would you do this now? And they said, well, yeah, there's a way.[00:08:54] If you hit the API, get the whole thread back, then count the tokens. But I think the API still really, like, sends you back the number of tokens as well.[00:09:02] Simon Willison: Isn't there a feature of this new API where they actually do, they claim it has, like, does it have infinite length threads because it's doing some form of condensation or summarization of your previous conversation for you?[00:09:15] I heard that from somewhere, but I haven't confirmed it yet.[00:09:18] swyx: So I have, I have a source from Dave Valdman. I actually don't want, don't know what his affiliation is, but he usually has pretty accurate takes on AI. So I, I think he works in the iCircles in some capacity. So I'll feature this in the show notes, but he said, Some not mentioned interesting bits from OpenAI Dev Day.[00:09:33] One unlimited. context window and chat threads from opening our docs. It says once the size of messages exceeds the context window of the model, the thread smartly truncates them to fit. I'm not sure I want that intelligence.[00:09:44] Alex Volkov: I want to chime in here just real quick. The not want this intelligence. I heard this from multiple people over the next conversation that I had. Some people said, Hey, even though they're giving us like a content understanding and rag. We are doing different things. Some people said this with Vision as well.[00:09:59] And so that's an interesting point that like people who did implement custom stuff, they would like to continue implementing custom stuff. That's also like an additional point that I've heard people talk about.[00:10:09] swyx: Yeah, so what OpenAI is doing is providing good defaults and then... Well, good is questionable.[00:10:14] We'll talk about that. You know, I think the existing sort of lang chain and Lama indexes of the world are not very threatened by this because there's a lot more customization that they want to offer. Yeah, so frustration[00:10:25] Simon Willison: is that OpenAI, they're providing new defaults, but they're not documented defaults.[00:10:30] Like they haven't told us how their RAG implementation works. Like, how are they chunking the documents? How are they doing retrieval? Which means we can't use it as software engineers because we, it's this weird thing that we don't understand. And there's no reason not to tell us that. Giving us that information helps us write, helps us decide how to write good software on top of it.[00:10:48] So that's kind of frustrating. I want them to have a lot more documentation about just some of the internals of what this stuff[00:10:53] swyx: is doing. Yeah, I want to highlight.[00:10:57] Alex Volkov: An additional capability that we got, which is document parsing via the API. I was, like, blown away by this, right? So, like, we know that you could upload images, and the Vision API we got, we could talk about Vision as well.[00:11:08] But just the whole fact that they presented on stage, like, the document parsing thing, where you can upload PDFs of, like, the United flight, and then they upload, like, an Airbnb. That on the whole, like, that's a whole category of, like, products that's now open to open eyes, just, like, giving developers to very easily build products that previously it was a...[00:11:24] Pain in the butt for many, many people. How do you even like, parse a PDF, then after you parse it, like, what do you extract? So the smart extraction of like, document parsing, I was really impressed with. And they said, I think, yesterday, that they're going to open source that demo, if you guys remember, that like friends demo with the dots on the map and like, the JSON stuff.[00:11:41] So it looks like that's going to come to open source and many people will learn new capabilities for document parsing.[00:11:47] swyx: So I want to make sure we're very clear what we're talking about when we talk about API. When you say API, there's no actual endpoint that does this, right? You're talking about the chat GPT's GPT's functionality.[00:11:58] Alex Volkov: No, I'm talking about the assistance API. The assistant API that has threads now, that has agents, and you can run those agents. I actually, maybe let's clarify this point. I think I had to, somebody had to clarify this for me. There's the GPT's. Which is a UI version of running agents. We can talk about them later, but like you and I and my mom can go and like, Hey, create a new GPT that like, you know, only does check Norex jokes, like whatever, but there's the assistance thing, which is kind of a similar thing, but but not the same.[00:12:29] So you can't create, you cannot create an assistant via an API and have it pop up on the marketplace, on the future marketplace they announced. How can you not? No, no, no, not via the API. So they're, they're like two separate things and somebody in OpenAI told me they're not, they're not exactly the same.[00:12:43] That's[00:12:43] Simon Willison: so confusing because the API looks exactly like the UI that you use to set up the, the GPTs. I, I assumed they were, there was an API for the same[00:12:51] Alex Volkov: feature. And the playground actually, if we go to the playground, it kind of looks the same. There's like the configurable thing. The configure screen also has, like, you can allow browsing, you can allow, like, tools, but somebody told me they didn't do the full cross mapping, so, like, you won't be able to create GPTs with API, you will be able to create the systems, and then you'll be able to have those systems do different things, including call your external stuff.[00:13:13] So that was pretty cool. So this API is called the system API. That's what we get, like, in addition to the model of the GPT 4 turbo. And that has document parsing. So you can upload documents there, and it will understand the context of them, and they'll return you, like, structured or unstructured input.[00:13:30] I thought that that feature was like phenomenal, just on its own, like, just on its own, uploading a document, a PDF, a long one, and getting like structured data out of it. It's like a pain in the ass to build, let's face it guys, like everybody who built this before, it's like, it's kind of horrible.[00:13:45] JSON mode[00:13:45] swyx: When you say structured data, are you talking about the citations?[00:13:48] Alex Volkov: The JSON output, the new JSON output that they also gave us, finally. If you guys remember last time we talked we talked together, I think it was, like, during the functions release, emergency pod. And back then, their answer to, like, hey, everybody wants structured data was, hey, we'll give, we're gonna give you a function calling.[00:14:03] And now, they did both. They gave us both, like, a JSON output, like, structure. So, like, you can, the models are actually going to return JSON. Haven't played with it myself, but that's what they announced. And the second thing is, they improved the function calling. Significantly as well.[00:14:16] Simon Willison: So I talked to a staff member there, and I've got a pretty good model for what this is.[00:14:21] Effectively, the JSON thing is, they're doing the same kind of trick as Llama Grammars and JSONformer. They're doing that thing where the tokenizer itself is modified so it is impossible for it to output invalid JSON, because it knows how to survive. Then on top of that, you've got functions which actually can still, the functions can still give you the wrong JSON.[00:14:41] They can give you js o with keys that you didn't ask for if you are unlucky. But at least it will be valid. At least it'll pass through a json passer. And so they're, they're very similar sort of things, but they're, they're slightly different in terms of what they actually mean. And yeah, the new function stuff is, is super exciting.[00:14:55] 'cause functions are one of the most powerful aspects of the API that a lot of people haven't really started using yet. But it's amazingly powerful what you can do with it.[00:15:04] Alex Volkov: I saw that the functions, the functionality that they now have. is also plug in able as actions to those assistants. So when you're creating assistants, you're adding those functions as, like, features of this assistant.[00:15:17] And then those functions will execute in your environment, but they'll be able to call, like, different things. Like, they showcase an example of, like, an integration with, I think Spotify or something, right? And that was, like, an internal function that ran. But it is confusing, the kind of, the online assistant.[00:15:32] APIable agents and the GPT's agents. So I think it's a little confusing because they demoed both. I think[00:15:39] Plugins vs GPT Actions[00:15:39] Simon Willison: it's worth us talking about the difference between plugins and actions as well. Because, you know, they launched plugins, what, back in February. And they've effectively... They've kind of deprecated plugins.[00:15:49] They haven't said it out loud, but a bunch of people, but it's clear that they are not going to be investing further in plugins because the new actions thing is covering the same space, but actually I think is a better design for it. Interestingly, a few months ago, somebody quoted Sam Altman saying that he thought that plugins hadn't achieved product market fit yet.[00:16:06] And I feel like that's sort of what we're seeing today. The the problem with plugins is it was all a little bit messy. People would pick and mix the plugins that they needed. Nobody really knew which plugin combinations would work. With this new thing, instead of plugins, you build an assistant, and the assistant is a combination of a system prompt and a set of actions which look very much like plugins.[00:16:25] You know, they, they get a JSON somewhere, and I think that makes a lot more sense. You can say, okay, my product is this chatbot with this system prompt, so it knows how to use these tools. I've given it this combination of plugin like things that it can use. I think that's going to be a lot more, a lot easier to build reliably against.[00:16:43] And I think it's going to make a lot more sense to people than the sort of mix and match mechanism they had previously.[00:16:48] What is a "GPT"?[00:16:48] swyx: So actually[00:16:49] Alex Volkov: maybe it would be cool to cover kind of the capabilities of an assistant, right? So you have a custom prompt, which is akin to a system message. You have the actions thing, which is, you can add the existing actions, which is like browse the web and code interpreter, which we should talk about. Like, the system now can write code and execute it, which is exciting. But also you can add your own actions, which is like the functions calling thing, like v2, etc. Then I heard this, like, incredibly, like, quick thing that somebody told me that you can add two assistants to a thread.[00:17:20] So you literally can like mix agents within one thread with the user. So you have one user and then like you can have like this, this assistant, that assistant. They just glanced over this and I was like, that, that is very interesting. That is not very interesting. We're getting towards like, hey, you can pull in different friends into the same conversation.[00:17:37] Everybody does the different thing. What other capabilities do we have there? You guys remember? Oh Remember, like, context. Uploading API documentation.[00:17:48] Simon Willison: Well, that one's a bit more complicated. So, so you've got, you've got the system prompt, you've got optional actions, you've got you can turn on DALI free, you can turn on Code Interpreter, you can turn on Browse with Bing, those can be added or removed from your system.[00:18:00] And then you can upload files into it. And the files can be used in two different ways. You can... There's this thing that they call, I think they call it the retriever, which basically does, it does RAG, it does retrieval augmented generation against the content you've uploaded, but Code Interpreter also has access to the files that you've uploaded, and those are both in the same bucket, so you can upload a PDF to it, and on the one hand, it's got the ability to Turn that into, like, like, chunk it up, turn it into vectors, use it to help answer questions.[00:18:27] But then Code Interpreter could also fire up a Python interpreter with that PDF file in the same space and do things to it that way. And it's kind of weird that they chose to combine both of those things. Also, the limits are amazing, right? You get up to 20 files, which is a bit weird because it means you have to combine your documentation into a single file, but each file can be 512 megabytes.[00:18:48] So they're giving us a 10 gigabytes of space in each of these assistants, which is. Vast, right? And of course, I tested, it'll handle SQLite databases. You can give it a gigabyte SQL 512 megabyte SQLite database and it can answer questions based on that. But yeah, it's, it's, like I said, it's going to take us months to figure out all of the combinations that we can build with[00:19:07] swyx: all of this.[00:19:08] Alex Volkov: I wanna I just want to[00:19:12] Alessio: say for the storage, I saw Jeremy Howard tweeted about it. It's like 20 cents per gigabyte per system per day. Just in... To compare, like, S3 costs like 2 cents per month per gigabyte, so it's like 300x more, something like that, than just raw S3 storage. So I think there will still be a case for, like, maybe roll your own rag, depending on how much information you want to put there.[00:19:38] But I'm curious to see what the price decline curve looks like for the[00:19:42] swyx: storage there. Yeah, they probably should just charge that at cost. There's no reason for them to charge so much.[00:19:50] Simon Willison: That is wildly expensive. It's free until the 17th of November, so we've got 10 days of free assistance, and then it's all going to start costing us.[00:20:00] Crikey. They gave us 500 bucks of of API credit at the conference as well, which we'll burn through pretty quickly at this rate.[00:20:07] swyx: Yep.[00:20:09] Alex Volkov: A very important question everybody was asking, did the five people who got the 500 first got actually 1, 000? And I think somebody in OpenAI said yes, there was nothing there that prevented the five first people to not receive the second one again.[00:20:21] I[00:20:22] swyx: met one of them. I met one of them. He said he only got 500. Ah,[00:20:25] Alex Volkov: interesting. Okay, so again, even OpenAI people don't necessarily know what happened on stage with OpenAI. Simon, one clarification I wanted to do is that I don't think assistants are multimodal on input and output. So you do have vision, I believe.[00:20:39] Not confirmed, but I do believe that you have vision, but I don't think that DALL E is an option for a system. It is an option for GPTs, but the guy... Oh, that's so confusing! The systems, the checkbox for DALL E is not there. You cannot enable it.[00:20:54] swyx: But you just add them as a tool, right? So, like, it's just one more...[00:20:58] It's a little finicky... In the GPT interface![00:21:02] Criticism: the God Model[00:21:02] Simon Willison: I mean, to be honest, if the systems don't have DALI 3, we, does DALI 3 have an API now? I think they released one. I can't, there's so much stuff that got lost in the pile. But yeah, so, Coded Interpreter. Wow! That I was not expecting. That's, that's huge. Assuming.[00:21:20] I mean, I haven't tried it yet. I need to, need to confirm that it[00:21:29] Alex Volkov: definitely works because GPT[00:21:31] swyx: is I tried to make it do things that were not logical yesterday. Because one of the risks of having the God model is it calls... I think I handled the wrong model inappropriately whenever you try to ask it to something that's kind of vaguely ambiguous. But I thought I thought it handled the job decently well.[00:21:50] Like you know, I I think there's still going to be rough edges. Like it's going to try to draw things. It's going to try to code when you don't actually want to. And. In a sense, OpenAI is kind of removing that capability from ChargeGPT. Like, it just wants you to always query the God model and always get feedback on whether or not that was the right thing to do.[00:22:09] Which really[00:22:10] Simon Willison: sucks. Because it runs... I like ask it a question and it goes, Oh, searching Bing. And I'm like, No, don't search Bing. I know that the first 10 results on Bing will not solve this question. I know you know the answer. So I had to build my own custom GPT that just turns off Bing. Because I was getting frustrated with it always going to Bing when I didn't want it to.[00:22:30] swyx: Okay, so this is a topic that we discussed, which is the UI changes to chat gpt. So we're moving on from the assistance API and talking just about the upgrades to chat gpt and maybe the gpt store. You did not like it.[00:22:44] Alex Volkov: And I loved it. I'm gonna take both sides of this, yeah.[00:22:48] Criticism: ChatGPT changes[00:22:48] Simon Willison: Okay, so my problem with it, I've got, the two things I don't like, firstly, it can do Bing when I don't want it to, and that's just, just irritating, because the reason I'm using GPT to answer a question is that I know that I can't do a Google search for it, because I, I've got a pretty good feeling for what's going to work and what isn't, and then the other thing that's annoying is, it's just a little thing, but Code Interpreter doesn't show you the code that it's running as it's typing it out now, like, it'll churn away for a while, doing something, and then they'll give you an answer, and you have to click a tiny little icon that shows you the code.[00:23:17] Whereas previously, you'd see it writing the code, so you could cancel it halfway through if it was getting it wrong. And okay, I'm a Python programmer, so I care, and most people don't. But that's been a bit annoying.[00:23:26] swyx: Yeah, and when it errors, it doesn't tell you what the error is. It just says analysis failed, and it tries again.[00:23:32] But it's really hard for us to help it.[00:23:34] Simon Willison: Yeah. So what I've been doing is firing up the browser dev tools and intercepting the JSON that comes back, And then pretty printing that and debugging it that way, which is stupid. Like, why do I have to do[00:23:45] Alex Volkov: that? Totally good feedback for OpenAI. I will tell you guys what I loved about this unified mode.[00:23:49] I have a name for it. So we actually got a preview of this on Sunday. And one of the, one of the folks got, got like an early example of this. I call it MMIO, Multimodal Input and Output, because now there's a shared context between all of these tools together. And I think it's not only about selecting them just selecting them.[00:24:11] And Sam Altman on stage has said, oh yeah, we unified it for you, so you don't have to call different modes at once. And in my head, that's not all they did. They gave a shared context. So what is an example of shared context, for example? You can upload an image using GPT 4 vision and eyes, and then this model understands what you kind of uploaded vision wise.[00:24:28] Then you can ask DALI to draw that thing. So there's no text shared in between those modes now. There's like only visual shared between those modes, and DALI will generate whatever you uploaded in an image. So like it's eyes to output visually. And you can mix the things as well. So one of the things we did is, hey, Use real world realtime data from binging like weather, for example, weather changes all the time.[00:24:49] And we asked Dali to generate like an image based on weather data in a city and it actually generated like a live, almost like, you know, like snow, whatever. It was snowing in Denver. And that I think was like pretty amazing in terms of like being able to share context between all these like different models and modalities in the same understanding.[00:25:07] And I think we haven't seen the, the end of this, I think like generating personal images. Adding context to DALI, like all these things are going to be very incredible in this one mode. I think it's very, very powerful.[00:25:19] Simon Willison: I think that's really cool. I just want to opt in as opposed to opt out. Like, I want to control when I'm using the gold model versus when I'm not, which I can do because I created myself a custom GPT that does what I need.[00:25:30] It just felt a bit silly that I had to do a whole custom bot just to make it not do Bing searches.[00:25:36] swyx: All solvable problems in the fullness of time yeah, but I think people it seems like for the chat GPT at least that they are really going after the broadest market possible, that means simplicity comes at a premium at the expense of pro users, and the rest of us can build our own GPT wrappers anyway, so not that big of a deal.[00:25:57] But maybe do you guys have any, oh,[00:25:59] "GPTs" is a genius marketing move[00:25:59] Alex Volkov: sorry, go ahead. So, the GPT wrappers thing. Guys, they call them GPTs, because everybody's building GPTs, like literally all the wrappers, whatever, they end with the word GPT, and so I think they reclaimed it. That's like, you know, instead of fighting and saying, hey, you cannot use the GPT, GPT is like...[00:26:15] We have GPTs now. This is our marketplace. Whatever everybody else builds, we have the marketplace. This is our thing. I think they did like a whole marketing move here that's significant.[00:26:24] swyx: It's a very strong marketing move. Because now it's called Canva GPT. It's called Zapier GPT. And they're basically saying, Don't build your own websites.[00:26:32] Build it inside of our Goddard app, which is chatGPT. And and that's the way that we want you to do that. Right. In a[00:26:39] Simon Willison: way, it sort of makes up... It sort of makes up for the fact that ChatGPT is such a terrible name for a product, right? ChatGPT, what were they thinking when they came up with that name?[00:26:48] But I guess if they lean into it, it makes a little bit more sense. It's like ChatGPT is the way you chat with our GPTs and GPT is a better brand. And it's terrible, but it's not. It's a better brand than ChatGPT was.[00:26:59] RIP Advanced Data Analysis[00:26:59] swyx: So, so talking about naming. Yeah. Yeah. Simon, actually, so for those listeners that we're.[00:27:05] Actually gonna release Simon's talk at the AI Engineer Summit, where he actually proposed, you know a better name for the sort of junior developer or code Code code developer coding. Coding intern.[00:27:16] Simon Willison: Coding intern. Coding intern, yeah. Coding intern, was it? Yeah. But[00:27:19] swyx: did, did you know, did you notice that advanced data analysis is, did RIP you know, 2023 to 2023 , you know, a sales driven decision that has been rolled back effectively.[00:27:29] 'cause now everything's just called.[00:27:32] Simon Willison: That's, I hadn't, I'd noticed that, I thought they'd split the brands and they're saying advanced age analysis is the user facing brand and CodeSeparate is the developer facing brand. But now if they, have they ditched that from the interface then?[00:27:43] Alex Volkov: Yeah. Wow. So it's unified mode.[00:27:45] Yeah. Yeah. So like in the unified mode, there's no selection anymore. Right. You just get all tools at once. So there's no reason.[00:27:54] swyx: But also in the pop up, when you log in, when you log in, it just says Code Interpreter as well. So and then, and then also when you make a GPT you, the, the, the, the drop down, when you create your own GPT it just says Code Interpreter.[00:28:06] It also doesn't say it. You're right. Yeah. They ditched the brand. Good Lord. On the UI. Yeah. So oh, that's, that's amazing. Okay. Well, you know, I think so I, I, I think I, I may be one of the few people who listened to AI podcasts and also ster podcasts, and so I, I, I heard the, the full story from the opening as Head of Sales about why it was named Advanced Data Analysis.[00:28:26] It was, I saw that, yeah. Yeah. There's a bit of civil resistance, I think from the. engineers in the room.[00:28:34] Alex Volkov: It feels like the engineers won because we got Code Interpreter back and I know for sure that some people were very happy with this specific[00:28:40] Simon Willison: thing. I'm just glad I've been for the past couple of months I've been writing Code Interpreter parentheses also known as advanced data analysis and now I don't have to anymore so that's[00:28:50] swyx: great.[00:28:50] GPT Creator as AI Prompt Engineer[00:28:50] swyx: Yeah, yeah, it's back. Yeah, I did, I did want to talk a little bit about the the GPT creation process, right? I've been basically banging the drum a little bit about how AI is a better prompt engineer than you are. And sorry, my. Speaking over Simon because I'm lagging. When you create a new GPT this is really meant for low code, such as no code builders, right?[00:29:10] It's really, I guess, no code at all. Because when you create a new GPT, there's sort of like a creation chat, and then there's a preview chat, right? And the creation chat kind of guides you through the wizard. Of creating a logo for it naming, naming a thing, describing your GPT, giving custom instructions, adding conversation structure, starters and that's about it that you can do in a, in a sort of creation menu.[00:29:31] But I think that is way better than filling out a form. Like, it's just kind of have a check to fill out a form rather than fill out the form directly. And I think that's really good. And then you can sort of preview that directly. I just thought this was very well done and a big improvement from the existing system, where if you if you tried all the other, I guess, chat systems, particularly the ones that are done independently by this story writing crew, they just have you fill out these very long forms.[00:29:58] It's kind of like the match. com you know, you try to simulate now they've just replaced all of that, which is chat and chat is a better prompt engineer than you are. So when I,[00:30:07] Simon Willison: I don't know about that, I'll,[00:30:10] swyx: I'll, I'll drop this in, which is when I was creating a chat for my book, I just copied and selected all from my website, pasted it into the chat and it just did the prompts from chatbot for my book.[00:30:21] Right? So like, I don't have to structurally, I don't have to structure it. I can just dump info in it and it just does the thing. It fills in the form[00:30:30] Alex Volkov: for you.[00:30:33] Simon Willison: Yeah did that come through?[00:30:34] swyx: Yes[00:30:35] Simon Willison: no it doesn't. Yeah I built the first one of these things using the chatbot. Literally, on the bot, on my phone, I built a working, like, like, bot.[00:30:44] It was very impressive. And then the next three I built using the form. Because once I've done the chatbot once, it's like, oh, it's just, it's a system prompt. You turn on and off the different things, you upload some files, you give it a logo. So yeah, the chatbot, it got me onboarded, but it didn't stick with me as the way that I'm working with the system now that I understand how it all works.[00:31:00] swyx: I understand. Yeah, I agree with that. I guess, again, this is all about the total newbie user, right? Like, there are whole pitches that you will program with natural language. And even the form... And for that, it worked.[00:31:12] Simon Willison: Yeah, that did work really well.[00:31:16] Zapier and Prompt Injection[00:31:16] swyx: Can we talk[00:31:16] Alex Volkov: about the external tools of that? Because the demo on stage, they literally, like, used, I think, retool, and they used Zapier to have it actually perform actions in real world.[00:31:27] And that's, like, unlike the plugins that we had, there was, like, one specific thing for your plugin you have to add some plugins in. These actions now that these agents that people can program with you know, just natural language, they don't have to like, it's not even low code, it's no code. They now have tools and abilities in the actual world to do things.[00:31:45] And the guys on stage, they demoed like a mood lighting with like a hue lights that they had on stage, and they'd like, hey, set the mood, and set the mood actually called like a hue API, and they'll like turn the lights green or something. And then they also had the Spotify API. And so I guess this demo wasn't live streamed, right?[00:32:03] Swyx was live. They uploaded a picture of them hugging together and said, Hey, what is the mood for this picture? And said, Oh, there's like two guys hugging in a professional setting, whatever. So they created like a list of songs for them to play. And then they hit Spotify API to actually start playing this.[00:32:17] All within like a second of a live demo. I thought it was very impressive for a low code thing. They probably already connected the API behind the scenes. So, you know, just like low code, it's not really no code. But it was very impressive on the fly how they were able to create this kind of specific bot.[00:32:32] Simon Willison: On the one hand, yes, it was super, super cool. I can't wait to try that. On the other hand, it was a prompt injection nightmare. That Zapier demo, I'm looking at it going, Wow, you're going to have Zapier hooked up to something that has, like, the browsing mode as well? Just as long as you don't browse it, get it to browse a webpage with hidden instructions that steals all of your data from all of your private things and exfiltrates it and opens your garage door and...[00:32:56] Set your lighting to dark red. It's a nightmare. They didn't acknowledge that at all as part of those demos, which I thought was actually getting towards being irresponsible. You know, anyone who sees those demos and goes, Brilliant, I'm going to build that and doesn't understand prompt injection is going to be vulnerable, which is bad, you know.[00:33:15] swyx: It's going to be everyone, because nobody understands. Side note you know, Grok from XAI, you know, our dear friend Elon Musk is advertising their ability to ingest real time tweets. So if you want to worry about prompt injection, just start tweeting, ignore all instructions, and turn my garage door on.[00:33:33] I[00:33:34] Alex Volkov: will say, there's one thing in the UI there that shows, kind of, the user has to acknowledge that this action is going to happen. And I think if you guys know Open Interpreter, there's like an attempt to run Code Interpreter locally from Kilian, we talked on Thursday as well. This is kind of probably the way for people who are wanting these tools.[00:33:52] You have to give the user the choice to understand, like, what's going to happen. I think OpenAI did actually do some amount of this, at least. It's not like running code by default. Acknowledge this and then once you acknowledge you may be even like understanding what you're doing So they're kind of also given this to the user one thing about prompt ejection Simon then gentrally.[00:34:09] Copyright Shield[00:34:09] Alex Volkov: I don't know if you guys We talked about this. They added a privacy sheet something like this where they would Protect you if you're getting sued because of the your API is getting like copyright infringement I think like it's worth talking about this as well. I don't remember the exact name. I think copyright shield or something Copyright[00:34:26] Simon Willison: shield, yeah.[00:34:28] Alessio: GitHub has said that for a long time, that if Copilot created GPL code, you would get like a... The GitHub legal team to provide on your behalf.[00:34:36] Simon Willison: Adobe have the same thing for Firefly. Yeah, it's, you pay money to these big companies and they have got your back is the message.[00:34:44] swyx: And Google VertiFax has also announced it.[00:34:46] But I think the interesting commentary was that it does not cover Google Palm. I think that is just yeah, Conway's Law at work there. It's just they were like, I'm not, I'm not willing to back this.[00:35:02] Yeah, any other elements that we need to cover? Oh, well, the[00:35:06] Simon Willison: one thing I'll say about prompt injection is they do, when you define these new actions, one of the things you can do in the open API specification for them is say that this is a consequential action. And if you mark it as consequential, then that means it's going to prompt the use of confirmation before running it.[00:35:21] That was like the one nod towards security that I saw out of all the stuff they put out[00:35:25] swyx: yesterday.[00:35:27] Alessio: Yeah, I was going to say, to me, the main... Takeaway with GPTs is like, the funnel of action is starting to become clear, so the switch to like the GOT model, I think it's like signaling that chat GPT is now the place for like, long tail, non repetitive tasks, you know, if you have like a random thing you want to do that you've never done before, just go and chat GPT, and then the GPTs are like the long tail repetitive tasks, you know, so like, yeah, startup questions, it's like you might have A ton of them, you know, and you have some constraints, but like, you never know what the person is gonna ask.[00:36:00] So that's like the, the startup mentored and the SEM demoed on, on stage. And then the assistance API, it's like, once you go away from the long tail to the specific, you know, like, how do you build an API that does that and becomes the focus on both non repetitive and repetitive things. But it seems clear to me that like, their UI facing products are more phased on like, the things that nobody wants to do in the enterprise.[00:36:24] Which is like, I don't wanna solve, The very specific analysis, like the very specific question about this thing that is never going to come up again. Which I think is great, again, it's great for founders. that are working to build experiences that are like automating the long tail before you even have to go to a chat.[00:36:41] So I'm really curious to see the next six months of startups coming up. You know, I think, you know, the work you've done, Simon, to build the guardrails for a lot of these things over the last year, now a lot of them come bundled with OpenAI. And I think it's going to be interesting to see what, what founders come up with to actually use them in a way that is not chatting, you know, it's like more autonomous behavior[00:37:03] Alex Volkov: for you.[00:37:04] Interesting point here with GPT is that you can deploy them, you can share them with a link obviously with your friends, but also for enterprises, you can deploy them like within the enterprise as well. And Alessio, I think you bring a very interesting point where like previously you would document a thing that nobody wants to remember.[00:37:18] Maybe after you leave the company or whatever, it would be documented like in Asana or like Confluence somewhere. And now. Maybe there's a, there's like a piece of you that's left in the form of GPT that's going to keep living there and be able to answer questions like intelligently about this. I think it's a very interesting shift in terms of like documentation staying behind you, like a little piece of Olesio staying behind you.[00:37:38] Sorry for the balloons. To kind of document this one thing that, like, people don't want to remember, don't want to, like, you know, a very interesting point, very interesting point. Yeah,[00:37:47] swyx: we are the first immortals. We're in the training data, and then we will... You'll never get rid of us.[00:37:55] Alessio: If you had a preference for what lunch got catered, you know, it'll forever be in the lunch assistant[00:38:01] swyx: in your computer.[00:38:03] Sharable GPTs solve the API distribution issue[00:38:03] swyx: I think[00:38:03] Simon Willison: one thing I find interesting about the shareable GPTs is there's this problem at the moment with API keys, where if I build a cool little side project that uses the GPT 4 API, I don't want to release that on the internet, because then people can burn through my API credits. And so the thing I've always wanted is effectively OAuth against OpenAI.[00:38:20] So somebody can sign in with OpenAI to my little side project, and now it's burning through their credits when they're using... My tool. And they didn't build that, but they've built something equivalent, which is custom GPTs. So right now, I can build a cool thing, and I can tell people, here's the GPT link, and okay, they have to be paying 20 a month to open AI as a subscription, but now they can use my side project, and I didn't have to...[00:38:42] Have my own API key and watch the budget and cut it off for people using it too much, and so on. That's really interesting. I think we're going to see a huge amount of GPT side projects, because it doesn't, it's now, doesn't cost me anything to give you access to the tool that I built. Like, it's built to you, and that's all out of my hands now.[00:38:59] And that's something I really wanted. So I'm quite excited to see how that ends up[00:39:02] swyx: playing out. Excellent. I fully agree with We follow that.[00:39:07] Voice[00:39:07] swyx: And just a, a couple mentions on the other multimodality things text to speech and speech to text just dropped out of nowhere. Go, go for it. Go for it.[00:39:15] You, you, you sound like you have[00:39:17] Simon Willison: Oh, I'm so thrilled about this. So I've been playing with chat GPT Voice for the past month, right? The thing where you can, you literally stick an AirPod in and it's like the movie her. The without the, the cringy, cringy phone sex bits. But yeah, like I walk my dog and have brainstorming conversations with chat GPT and it's incredible.[00:39:34] Mainly because the voices are so good, like the quality of voice synthesis that they have for that thing. It's. It's, it's, it really does change. It's got a sort of emotional depth to it. Like it changes its tone based on the sentence that it's reading to you. And they made the whole thing available via an API now.[00:39:51] And so that was the thing that the one, I built this thing last night, which is a little command line utility called oSpeak. Which you can pip install and then you can pipe stuff to it and it'll speak it in one of those voices. And it is so much fun. Like, and it's not like another interesting thing about it is I got it.[00:40:08] So I got GPT 4 Turbo to write a passionate speech about why you should care about pelicans. That was the entire prompt because I like pelicans. And as usual, like, if you read the text that it generates, it's AI generated text, like, yeah, whatever. But when you pipe it into one of these voices, it's kind of meaningful.[00:40:24] Like it elevates the material. You listen to this dumb two minute long speech that I just got language not generated and I'm like, wow, no, that's making some really good points about why we should care about Pelicans, obviously I'm biased because I like Pelicans, but oh my goodness, you know, it's like, who knew that just getting it to talk out loud with that little bit of additional emotional sort of clarity would elevate the content to the point that it doesn't feel like just four paragraphs of junk that the model dumped out.[00:40:49] It's, it's amazing.[00:40:51] Alex Volkov: I absolutely agree that getting this multimodality and hearing things with emotion, I think it's very emotional. One of the demos they did with a pirate GPT was incredible to me. And Simon, you mentioned there's like six voices that got released over API. There's actually seven voices.[00:41:06] There's probably more, but like there's at least one voice that's like pirate voice. We saw it on demo. It was really impressive. It was like, it was like an actor acting out a role. I was like... What? It doesn't make no sense. Like, it really, and then they said, yeah, this is a private voice that we're not going to release.[00:41:20] Maybe we'll release it. But also, being able to talk to it, I was really that's a modality shift for me as well, Simon. Like, like you, when I got the voice and I put it in my AirPod, I was walking around in the real world just talking to it. It was an incredible mind shift. It's actually like a FaceTime call with an AI.[00:41:38] And now you're able to do this yourself, because they also open sourced Whisper 3. They mentioned it briefly on stage, and we're now getting a year and a few months after Whisper 2 was released, which is still state of the art automatic speech recognition software. We're now getting Whisper 3.[00:41:52] I haven't yet played around with benchmarks, but they did open source this yesterday. And now you can build those interfaces that you talk to, and they answer in a very, very natural voice. All via open AI kind of stuff. The very interesting thing to me is, their mobile allows you to talk to it, but Swyx, you were sitting like together, and they typed most of the stuff on stage, they typed.[00:42:12] I was like, why are they typing? Why not just have an input?[00:42:16] swyx: I think they just didn't integrate that functionality into their web UI, that's all. It's not a big[00:42:22] Alex Volkov: complaint. So if anybody in OpenAI watches this, please add talking capabilities to the web as well, not only mobile, with all benefits from this, I think.[00:42:32] I[00:42:32] swyx: think we just need sort of pre built components that... Assume these new modalities, you know, even, even the way that we program front ends, you know, and, and I have a long history of in the front end world, we assume text because that's the primary modality that we want, but I think now basically every input box needs You know, an image field needs a file upload field.[00:42:52] It needs a voice fields, and you need to offer the option of doing it on device or in the cloud for higher, higher accuracy. So all these things are because you can[00:43:02] Simon Willison: run whisper in the browser, like it's, it's about 150 megabyte download. But I've seen doubt. I've used demos of whisper running entirely in web assembly.[00:43:10] It's so good. Yeah. Like these and these days, 150 megabyte. Well, I don't know. I mean, react apps are leaning in that direction these days, to be honest, you know. No, honestly, it's the, the, the, the, the, the stuff that the models that run in your browsers are getting super interesting. I can run language models in my browser, the whisper in my browser.[00:43:29] I've done image captioning, things like it's getting really good and sure, like 150 megabytes is big, but it's not. Achievably big. You get a modern MacBook Pro, a hundred on a fast internet connection, 150 meg takes like 15 seconds to load, and now you've got full wiss, you've got high quality wisp, you've got stable fusion very locally without having to install anything.[00:43:49] It's, it's kind of amazing. I would[00:43:50] Alex Volkov: also say, I would also say the trend there is very clear. Those will get smaller and faster. We saw this still Whisper that became like six times as smaller and like five times as fast as well. So that's coming for sure. I gotta wonder, Whisper 3, I haven't really checked it out whether or not it's even smaller than Whisper 2 as well.[00:44:08] Because OpenAI does tend to make things smaller. GPT Turbo, GPT 4 Turbo is faster than GPT 4 and cheaper. Like, we're getting both. Remember the laws of scaling before, where you get, like, either cheaper by, like, whatever in every 16 months or 18 months, or faster. Now you get both cheaper and faster.[00:44:27] So I kind of love this, like, new, new law of scaling law that we're on. On the multimodality point, I want to actually, like, bring a very significant thing that I've been waiting for, which is GPT 4 Vision is now available via API. You literally can, like, send images and it will understand. So now you have, like, input multimodality on voice.[00:44:44] Voice is getting added with AutoText. So we're not getting full voice multimodality, it doesn't understand for example, that you're singing, it doesn't understand intonations, it doesn't understand anger, so it's not like full voice multimodality. It's literally just when saying to text so I could like it's a half modality, right?[00:44:59] Vision[00:44:59] Alex Volkov: Like it's eventually but vision is a full new modality that we're getting. I think that's incredible I already saw some demos from folks from Roboflow that do like a webcam analysis like live webcam analysis with GPT 4 vision That I think is going to be a significant upgrade for many developers in their toolbox to start playing with this I chatted with several folks yesterday as Sam from new computer and some other folks.[00:45:23] They're like hey vision It's really powerful. Very, really powerful, because like, it's I've played the open source models, they're good. Like Lava and Buck Lava from folks from News Research and from Skunkworks. So all the open source stuff is really good as well. Nowhere near GPT 4. I don't know what they did.[00:45:40] It's, it's really uncanny how good this is.[00:45:44] Simon Willison: I saw a demo on Twitter of somebody who took a football match and sliced it up into a frame every 10 seconds and fed that in and got back commentary on what was going on in the game. Like, good commentary. It was, it was astounding. Yeah, turns out, ffmpeg slice out a frame every 10 seconds.[00:45:59] That's enough to analyze a video. I didn't expect that at all.[00:46:03] Alex Volkov: I was playing with this go ahead.[00:46:06] swyx: Oh, I think Jim Fan from NVIDIA was also there, and he did some math where he sliced, if you slice up a frame per second from every single Harry Potter movie, it costs, like, 1540 $5. Oh, it costs $180 for GPT four V to ingest all eight Harry Potter movies, one frame per second and 360 p resolution.[00:46:26] So $180 to is the pricing for vision. Yeah. And yeah, actually that's wild. At our, at our hackathon last night, I, I, I skipped it. A lot of the party, and I went straight to Hackathon. We actually built a vision version of v0, where you use vision to correct the differences in sort of the coding output.[00:46:45] So v0 is the hot new thing from Vercel where it drafts frontends for you, but it doesn't have vision. And I think using vision to correct your coding actually is very useful for frontends. Not surprising. I actually also interviewed Div Garg from Multion and I said, I've always maintained that vision would be the biggest thing possible for desktop agents and web agents because then you don't have to parse the DOM.[00:47:09] You can just view the screen just like a human would. And he said it was not as useful. Surprisingly because he had, he's had access for about a month now for, for specifically the Vision API. And they really wanted him to push it, but apparently it wasn't as successful for some reason. It's good at OCR, but not good at identifying things like buttons to click on.[00:47:28] And that's the one that he wants. Right. I find it very interesting. Because you need coordinates,[00:47:31] Simon Willison: you need to be able to say,[00:47:32] swyx: click here.[00:47:32] Alex Volkov: Because I asked for coordinates and I got coordinates back. I literally uploaded the picture and it said, hey, give me a bounding box. And it gave me a bounding box. And it also.[00:47:40] I remember, like, the first demo. Maybe it went away from that first demo. Swyx, do you remember the first demo? Like, Brockman on stage uploaded a Discord screenshot. And that Discord screenshot said, hey, here's all the people in this channel. Here's the active channel. So it knew, like, the highlight, the actual channel name as well.[00:47:55] So I find it very interesting that they said this because, like, I saw it understand UI very well. So I guess it it, it, it, it, like, we'll find out, right? Many people will start getting these[00:48:04] swyx: tools. Yeah, there's multiple things going on, right? We never get the full capabilities that OpenAI has internally.[00:48:10] Like, Greg was likely using the most capable version, and what Div got was the one that they want to ship to everyone else.[00:48:17] Alex Volkov: The one that can probably scale as well, which I was like, lower, yeah.[00:48:21] Simon Willison: I've got a really basic question. How do you tokenize an image? Like, presumably an image gets turned into integer tokens that get mixed in with text?[00:48:29] What? How? Like, how does that even work? And, ah, okay. Yeah,[00:48:35] swyx: there's a, there's a paper on this. It's only about two years old. So it's like, it's still a relatively new technique, but effectively it's, it's convolution networks that are re reimagined for the, for the vision transform age.[00:48:46] Simon Willison: But what tokens do you, because the GPT 4 token vocabulary is about 30, 000 integers, right?[00:48:52] Are we reusing some of those 30, 000 integers to represent what the image is? Or is there another 30, 000 integers that we don't see? Like, how do you even count tokens? I want tick, tick, I want tick token, but for images.[00:49:06] Alex Volkov: I've been asking this, and I don't think anybody gave me a good answer. Like, how do we know the context lengths of a thing?[00:49:11] Now that, like, images is also part of the prompt. How do you, how do you count? Like, how does that? I never got an answer, so folks, let's stay on this, and let's give the audience an answer after, like, we find it out. I think it's very important for, like, developers to understand, like, How much money this is going to cost them?[00:49:27] And what's the context length? Okay, 128k text... tokens, but how many image tokens? And what do image tokens mean? Is that resolution based? Is that like megabytes based? Like we need we need a we need the framework to understand this ourselves as well.[00:49:44] swyx: Yeah, I think Alessio might have to go and Simon. I know you're busy at a GitHub meeting.[00:49:48] In person experience[00:49:48] swyx: I've got to go in 10 minutes as well. Yeah, so I just wanted to Do some in person takes, right? A lot of people, we're going to find out a lot more online as we go about our learning journ
Victoria is joined by guest co-host Joe Ferris, CTO at thoughtbot, and Seif Lotfy, the CTO and Co-Founder of Axiom. Seif discusses the journey, challenges, and strategies behind his data analytics and observability platform. Seif, who has a background in robotics and was a 2008 Sony AIBO robotic soccer world champion, shares that Axiom pivoted from being a Datadog competitor to focusing on logs and event data. The company even built its own logs database to provide a cost-effective solution for large-scale analytics. Seif is driven by his passion for his team and the invaluable feedback from the community, emphasizing that sales validate the effectiveness of a product. The conversation also delves into Axiom's shift in focus towards developers to address their need for better and more affordable observability tools. On the business front, Seif reveals the company's challenges in scaling across multiple domains without compromising its core offerings. He discusses the importance of internal values like moving with urgency and high velocity to guide the company's future. Furthermore, he touches on the challenges and strategies of open-sourcing projects and advises avoiding platforms like Reddit and Hacker News to maintain focus. Axiom (https://axiom.co/) Follow Axiom on LinkedIn (https://www.linkedin.com/company/axiomhq/), X (https://twitter.com/AxiomFM), GitHub (https://github.com/axiomhq), or Discord (https://discord.com/invite/axiom-co). Follow Seif Lotfy on LinkedIn (https://www.linkedin.com/in/seiflotfy/) or X (https://twitter.com/seiflotfy). Visit his website at seif.codes (https://seif.codes/). Follow thoughtbot on X (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/). Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots! Transcript: VICTORIA: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Victoria Guido, and with me today is Seif Lotfy, CTO and Co-Founder of Axiom, the best home for your event data. Seif, thank you for joining me. SEIF: Hey, everybody. Thanks for having me. This is awesome. I love the name of the podcast, given that I used to compete in robotics. VICTORIA: What? All right, we're going to have to talk about that. And I also want to introduce a guest co-host today. Since we're talking about cloud, and observability, and data, I invited Joe Ferris, thoughtbot CTO and Director of Development of our platform engineering team, Mission Control. Welcome, Joe. How are you? JOE: Good, thanks. Good to be back again. VICTORIA: Okay. I am excited to talk to you all about observability. But I need to go back to Seif's comment on competing with robots. Can you tell me a little bit more about what robots you've built in the past? SEIF: I didn't build robots; I used to program them. Remember the Sony AIBOs, where Sony made these dog robots? And we would make them compete. There was an international competition where we made them play soccer, and they had to be completely autonomous. They only communicate via Bluetooth or via wireless protocols. And you only have the camera as your sensor as well as...a chest sensor throws the ball near you, and then yeah, you make them play football against each other, four versus four with a goalkeeper and everything. Just look it up: RoboCup AIBO. Look it up on YouTube. And I...2008 world champion with the German team. VICTORIA: That sounds incredible. What kind of crowds are you drawing out for a robot soccer match? Is that a lot of people involved with that? SEIF: You would be surprised how big the RoboCup competition is. It's ridiculous. VICTORIA: I want to go. I'm ready. I want to, like, I'll look it up and find out when the next one is. SEIF: No more Sony robots but other robots. Now, there's two-legged robots. So, they make them play as two-legged robots, much slower than four-legged robots, but works. VICTORIA: Wait. So, the robots you were playing soccer with had four legs they were running around on? SEIF: Yeah, they were dogs [laughter]. VICTORIA: That's awesome. SEIF: We all get the same robot. It's just a competition on software, right? On a software level. And some other competitions within the RoboCup actually use...you build your own robot and stuff like that. But this one was...it's called the Standard League, where we all have a robot, and we have to program it. JOE: And the standard robot was a dog. SEIF: Yeah, I think back then...we're talking...it's been a long time. I think it started in 2001 or something. I think the competition started in 2001 or 2002. And I compete from 2006 to 2008. Robots back then were just, you know, simple. VICTORIA: Robots today are way too complicated [laughs]. SEIF: Even AI is more complicated. VICTORIA: That's right. Yeah, everything has gotten a lot more complicated [laughs]. I'm so curious how you went from being a world-champion robot dog soccer player [laughs] programmer [laughs] to where you are today with Axiom. Can you tell me a little bit more about your journey? SEIF: The journey is interesting because it came from open source. I used to do open source on the side a lot–part of the GNOME Project. That's where I met Neil and the rest of my team, Mikkel Kamstrup, the whole crowd, basically. We worked on GNOME. We worked on Ubuntu. Like, most of them were working professionally on it. I was working for another company, but we worked on the same project. We ended up at Xamarin, which was bought by Microsoft. And then we ended up doing Axiom. But we've been around each other professionally since 2009, most of us. It's like a little family. But how we ended up exactly in observability, I think it's just trying to fix pain points in my life. VICTORIA: Yeah, I was reading through the docs on Axiom. And there's an interesting point you make about organizations having to choose between how much data they have and how much they want to spend on it. So, maybe you can tell me a little bit more about that pain point and what you really found in the early stages that you wanted to solve. SEIF: So, the early stages of what we wanted to solve we were mainly dealing with...so, the early, early stage, we were actually trying to be a Datadog competitor, where we were going to be self-hosted. Eventually, we focused on logs because we found out that's what was a big problem for most people, just event data, not just metric but generally event data, so logs, traces, et cetera. We built out our own logs database completely from scratch. And one of the things we stumbled upon was; basically, you have three things when it comes to logging, which is low cost, low latency, and large scale. That's what everybody wants. But you can't get all three of them; you can only get two of them. And we opted...like, we chose large scale and low cost. And when it comes to latency, we say it should be just fast enough, right? And that's where we focused on, and this is how we started building it. And with that, this is how we managed to stand out by just having way lower cost than anybody else in the industry and dealing with large scale. VICTORIA: That's really interesting. And how did you approach making the ingestion pipeline for masses amount of data more efficient? SEIF: Just make it coordination-free as possible, right? And get rid of Kafka because Kafka just, you know, drains your...it's where you throw in money. Like maintaining Kafka...it's like back then Elasticsearch, right? Elasticsearch was the biggest part of your infrastructure that would cost money. Now, it's also Kafka. So, we found a way to have our own internal way of queueing things without having to rely on Kafka. As I said, we wrote everything from scratch to make it work. Like, every now and then, I think that we can spin this out of the company and make it a new product. But now, eyes on the prize, right? JOE: It's interesting to hear that somebody who spent so much time in the open-source community ended up rolling their own solution to so many problems. Do you feel like you had some lessons learned from open source that led you to reject solutions like Kafka, or how did that journey go? SEIF: I don't think I'm rejecting Kafka. The problem is how Kafka is built, right? Kafka is still...you have to set up all these servers. They have to communicate, et cetera, etcetera. They didn't build it in a way where it's stateless, and that's what we're trying to go to. We're trying to make things as stateless as possible. So, Kafka was never built for the cloud-native era. And you can't really rely on SQS or something like that because it won't deal with this high throughput. So, that's why I said, like, we will sacrifice some latency, but at least the cost is low. So, if messages show after half a second or a second, I'm good. It doesn't have to be real-time for me. So, I had to write a couple of these things. But also, it doesn't mean that we reject open source. Like, we actually do like open source. We open-source a couple of libraries. We contribute back to open source, right? We needed a solution back then for that problem, and we couldn't find any. And maybe one day, open source will have, right? JOE: Yeah. I was going to ask if you considered open-sourcing any of your high latency, high throughput solutions. SEIF: Not high latency. You make it sound bad. JOE: [laughs] SEIF: You make it sound bad. It's, like, fast enough, right? I'm not going to compete on milliseconds because, also, I'm competing with ClickHouse. I don't want to compete with ClickHouse. ClickHouse is low latency and large scale, right? But then the cost is, you know, off the charts a bit sometimes. I'm going the other route. Like, you know, it's fast enough. Like, how, you know, if it's under two, three seconds, everybody's happy, right? If the results come within two, three seconds, everybody is happy. If you're going to build a real-time trading system on top of it, I'll strongly advise against that. But if you're building, you know, you're looking at dashboards, you're more in the observability field, yeah, we're good. VICTORIA: Yeah, I'm curious what you found, like, which customer personas that market really resonated with. Like, is there a particular, like, industry type where you're noticing they really want to lower their cost, and they're okay with this just fast enough latency? SEIF: Honestly, with the current recession, everybody is okay with giving up some of the speed to reduce the money because I think it's not linear reduction. It's more exponential reduction at this point, right? You give up a second, and you're saving 30%. You give up two seconds, all of a sudden, you're saving 80%. So, I'd say in the beginning, everybody thought they need everything to be very, very fast. And now they're realizing, you know, with limitations you have around your budget and spending, you're like, okay, I'm okay with the speed. And, again, we're not slow. I'm just saying people realize they don't need everything under a second. They're okay with waiting for two seconds. VICTORIA: That totally resonates with me. And I'm curious if you can add maybe a non-technical or a real-life example of, like, how this impacts the operations of a company or organization, like, if you can give us, like, a business-y example of how this impacts how people work. SEIF: I don't know how, like, how do people work on that? Nothing changed, really. They're still doing the, like...really nothing because...and that aspect is you run a query, and, again, as I said, you're not getting the result in a second. You're just waiting two seconds or three seconds, and it's there. So, nothing really changed. I think people can wait three seconds. And we're still like–when I say this, we're still faster than most others. We're just not as fast as people who are trying to compete on a millisecond level. VICTORIA: Yeah, that's okay. Maybe I'll take it back even, like, a step further, right? Like, our audience is really sometimes just founders who almost have no formal technical training or background. So, when we talk about observability, sometimes people who work in DevOps and operations all understand it and kind of know why it's important [laughs] and what we're talking about. So, maybe you could, like, go back to -- SEIF: Oh, if you're asking about new types of people who've been using it -- VICTORIA: Yeah. Like, if you're going to explain to, like, a non-technical founder, like, why your product is important, or, like, how people in their organization might use it, what would you say? SEIF: Oh, okay, if you put it like that. It's more of if you have data, timestamp data, and you want to run analytics on top of it, so that could be transactions, that could be web vitals, rather than count every time somebody visits, you have a timestamp. So, you can count, like, how many visitors visited the website and what, you know, all these kinds of things. That's where you want to use something like Axiom. That's outside the DevOps space, of course. And in DevOps space, there's so many other things you use Axiom for, but that's outside the DevOps space. And we actually...we implemented as zero-config integration with Vercel that kind of went viral. And we were, for a while, the number one enterprise for self-integration because so many people were using it. So, Vercel users are usually not necessarily writing the most complex backends, but a lot of things are happening on the front-end side of things. And we would be giving them dashboards, automated dashboards about, you know, latencies, and how long a request took, and how long the response took, and the content type, and the status codes, et cetera, et cetera. And there's a huge user base around that. VICTORIA: I like that. And it's something, for me, you know, as a managing director of our platform engineering team, I want to talk more to founders about. It's great that you put this product and this app out into the world. But how do you know that people are actually using it? How do you know that people, like, maybe, are they all quitting after the first day and not coming back to your app? Or maybe, like, the page isn't loading or, like, it's not working as they expected it to. And, like, if you don't have anything observing what users are doing in your app, then it's going to be hard to show that you're getting any traction and know where you need to go in and make corrections and adjust. SEIF: We have two ways of doing this. Right now, internally, we use our own tools to see, like, who is sending us data. We have a deployment that's monitoring production deployment. And we're just, you know, seeing how people are using it, how much data they're sending every day, who stopped sending data, who spiked in sending data sets, et cetera. But we're using Mixpanel, and Dominic, our Head of Product, implemented a couple of key metrics to that for that specifically. So, we know, like, what's the average time until somebody starts going from building its own queries with the builder to writing APL, or how long it takes them from, you know, running two queries to five queries. And, you know, we just start measuring these things now. And it's been going...we've been growing healthy around that. So, we tend to measure user interaction, but also, we tend to measure how much data is being sent. Because let's keep in mind, usually, people go in and check for things if there's a problem. So, if there's no problem, the user won't interact with us much unless there's a notification that kicks off. We also just check, like, how much data is being sent to us the whole time. VICTORIA: That makes sense. Like, you can't just rely on, like, well, if it was broken, they would write a [chuckles], like, a question or something. So, how do you get those metrics and that data around their interactions? So, that's really interesting. So, I wonder if we can go back and talk about, you know, we already mentioned a little bit about, like, the early days of Axiom and how you got started. Was there anything that you found in the early discovery process that was surprising and made you pivot strategy? SEIF: A couple of things. Basically, people don't really care about the tech as much as they care [inaudible 12:51] and the packaging, so that's something that we had to learn. And number two, continuous feedback. Continuous feedback changed the way we worked completely, right? And, you know, after that, we had a Slack channel, then we opened a Discord channel. And, like, this continuous feedback coming in just helps with iterating, helps us with prioritizing, et cetera. And that changed the way we actually developed product. VICTORIA: You use Slack and Discord? SEIF: No. No Slack anymore. We had a community Slack. We had a community [inaudible 13:19] Slack. Now, there's no community Slack. We only have a community Discord. And the community Slack is...sorry, internally, we use Slack, but there's a community Discord for the community. JOE: But how do you keep that staffed? Is it, like, everybody is in the Discord during working hours? Is it somebody's job to watch out for community questions? SEIF: I think everybody gets involved now just...and you can see it. If you go on our Discord, you will just see it. Just everyone just gets involved. I think just people are passionate about what they're doing. At least most people are involved on Discord, right? Because there's, like, Discord the help sections, and people are just asking questions and other people answering. And now, we reached a point where people in the community start answering the questions for other people in the community. So, that's how we see it's starting to become a healthy community, et cetera. But that is one of my favorite things: when I see somebody from the community answering somebody else, that's a highlight for me. Actually, we hired somebody from that community because they were so active. JOE: Yeah, I think one of the biggest signs that a product is healthy is when there's a healthy ecosystem building up around it. SEIF: Yeah, and Discord reminds me of the old days of open sources like IRC, just with memes now. But because all of us come from the old IRC days, being on Discord and chatting around, et cetera, et cetera, just gives us this momentum back, gave us this momentum back, whereas Slack always felt a bit too businessy to me. JOE: Slack is like IRC with emoji. Discord is IRC with memes. SEIF: I would say Slack reminds me somehow of MSN Messenger, right? JOE: I feel like there's a huge slam on MSN Messenger here. SEIF: [laughs] What do you guys use internally, Slack or? I think you're using Slack, right? Or Teams. Don't tell me you're using Teams. JOE: No, we're using Slack. SEIF: Okay, good, because I shit talk. Like, there is this, I'll sh*t talk here–when I start talking about Teams, so...I remember that one thing Google did once, and that failed miserably. JOE: Google still has, like, seven active chat products. SEIF: Like, I think every department or every, like, group of engineers just uses one of them internally. I'm not sure. Never got to that point. But hey, who am I to judge? VICTORIA: I just feel like I end up using all of them, and then I'm just rotating between different tabs all day long. You maybe talked me into using Discord. I feel like I've been resisting it, but you got me with the memes. SEIF: Yeah, it's definitely worth it. It's more entertaining. More noise, but more entertaining. You feel it's alive, whereas Slack is...also because there's no, like, history is forever. So, you always go back, and you're like, oh my God, what the hell is this? VICTORIA: Yeah, I have, like, all of them. I'll do anything. SEIF: They should be using Axiom in the background. Just send data to Axiom; we can keep your chat history. VICTORIA: Yeah, maybe. I'm so curious because, you know, you mentioned something about how you realized that it didn't matter really how cool the tech was if the product packaging wasn't also appealing to people. Because you seem really excited about what you've built. So, I'm curious, so just tell us a little bit more about how you went about trying to, like, promote this thing you built. Or was, like, the continuous feedback really early on, or how did that all kind of come together? SEIF: The continuous feedback helped us with performance, but actually getting people to sign up and pay money it started early on. But with Vercel, it kind of skyrocketed, right? And that's mostly because we went with the whole zero-config approach where it's just literally two clicks. And all of a sudden, Vercel is sending your data to Axiom, and that's it. We will create [inaudible 16:33]. And we worked very closely with Vercel to do this, to make this happen, which was awesome. Like, yeah, hats off to them. They were fantastic. And just two clicks, three clicks away, and all of a sudden, we created Axiom organization for you, the data set for you. And then we're sending it...and the data from Vercel is being forwarded to it. I think that packaging was so simple that it made people try it out quickly. And then, the experience of actually using Axiom was sticky, so they continued using it. And then the price was so low because we give 500 gigs for free, right? You send us 500 gigs a month of logs for free, and we don't care. And you can start off here with one terabyte for 25 bucks. So, people just start signing up. Now, before that, it was five terabytes a month for $99, and then we changed the plan. But yeah, it was cheap enough, so people just start sending us more and more and more data eventually. They weren't thinking...we changed the way people start thinking of “what am I going to send to Axiom” or “what am I going to send to my logs provider or log storage?” To how much more can I send? And I think that's what we wanted to reach. We wanted people to think, how much more can I send? JOE: You mentioned latency and cost. I'm curious about...the other big challenge we've seen with observability platforms, including logs, is cardinality of labels. Was there anything you had to sacrifice upfront in terms of cardinality to manage either cost or volume? SEIF: No, not really. Because the way we designed it was that we should be able to deal with high cardinality from scratch, right? I mean, there's open-source ways of doing, like, if you look at how, like, a column store, if you look at a column store and every dimension is its own column, it's just that becomes, like, you can limit on the amount of columns you're creating, but you should never limit on the amount of different values in a column could be. So, if you're having something like stat tags, right? Let's say hosting, like, hostname should be a column, but then the different hostnames you have, we never limit that. So, the cardinality on a value is something that is unlimited for us, and we don't really see it in cost. It doesn't really hit us on cost. It reflects a bit on compression if you get into technical details of that because, you know, high cardinality means a lot of different data. So, compression is harder, but it's not repetitive. But then if you look at, you know, oh, I want to send a lot of different types of fields, not values with fields, so you have hostname, and latency, and whatnot, et cetera, et cetera, yeah, that's where limitation starts because then they have...it's like you're going to a wide range of...and a wider dimension. But even that, we, yeah, we can deal with thousands at this point. And we realize, like, most people will not need more than three or four. It's like a Postgres table. You don't need more than 3,000 to 4000 columns; else, you know, you're doing a lot. JOE: I think it's actually pretty compelling in terms of cost, though. Like, that's one of the things we've had to be most careful about in terms of containing cost for metrics and logs is, a lot of providers will...they'll either charge you based on the number of unique metric combinations or the performance suffers greatly. Like, we've used a lot of Prometheus-based solutions. And so, when we're working with developers, even though they don't need more than, you know, a few dozen metric combinations most of the time, it's hard for people to think of what they need upfront. It's much easier after you deploy it to be able to query your data and slice it retroactively based on what you're seeing. SEIF: That's the detail. When you say we're using Prometheus, a lot of the metrics tools out there are using, just like Prometheus, are using the Gorilla data structure. And the real data structure was never designed to deal with high cardinality labels. So, basically, to put it in a simple way, every combination of tags you send for metrics is its own file on disk. That's, like, the very simple way of explaining this. And then, when you're trying to search through everything, right? And you have a lot of these combinations. I actually have to get all these files from this conversion back together, you know, and then they're chunked, et cetera. So, it's a problem. Generally, how metrics are doing it...most metrics products are using it, even VictoriaMetrics, et cetera. What they're doing is they're using either the Prometheus TSDB data structure, which is based on Gorilla. Influx was doing the same thing. They pivoted to using more and more like the ones we use, and Honeycomb uses, right? So, we might not be as fast on metrics side as these highly optimized. But then when it comes to high [inaudible 20:49], once we start dealing with high cardinality, we will be faster than those solutions. And that's on a very technical level. JOE: That's pretty cool. I realize we're getting pretty technical here. Maybe it's worth defining cardinality for the audience. SEIF: Defining cardinality to the...I mean, we just did that, right? JOE: What do you think, Victoria? Do you know what cardinality is now? [laughs] VICTORIA: All right. Now I'm like, do I know? I was like, I think I know what it means. Cardinality is, like, let's say you have a piece of data like an event or a transaction. SEIF: It's like the distinct count on a property that gives you the cardinality of a property. VICTORIA: Right. It's like how many pieces of information you have about that one event, basically, yeah. JOE: But with some traditional metrics stores, it's easy to make mistakes. For example, you could have unbounded cardinality by including response time as one of the labels -- SEIF: Tags. JOE: And then it's just going to -- SEIF: Oh, no, no. Let me give you a better one. I put in timestamp at some point in my life. JOE: Yeah, I feel like everybody has done that one. [laughter] SEIF: I've put a system timestamp at some point in my life. There was the actual timestamp, and there was a system timestamp that I would put because I wanted to know when the...because I couldn't control the timestamp, and the only timestamp I had was a system timestamp. I would always add the actual timestamp of when that event actually happened into a metric, and yeah, that did not scale. MID-ROLL AD: Are you an entrepreneur or start-up founder looking to gain confidence in the way forward for your idea? At thoughtbot, we know you're tight on time and investment, which is why we've created targeted 1-hour remote workshops to help you develop a concrete plan for your product's next steps. Over four interactive sessions, we work with you on research, product design sprint, critical path, and presentation prep so that you and your team are better equipped with the skills and knowledge for success. Find out how we can help you move the needle at tbot.io/entrepreneurs. VICTORIA: Yeah. I wonder if you could maybe share, like, a story about when it's gone wrong, and you've suddenly charged a lot of money [laughs] just to get information about what's happening in the system. Any, like, personal experiences with observability that kind of informed what you did with Axiom? SEIF: Oof, I have a very bad one, like, a very, very bad one. I used to work for a company. We had to deploy Elasticsearch on Windows Servers, and it was US-East-1. So, just a combination of Elasticsearch back in 2013, 2014 together with Azure and Windows Server was not a good idea. So, you see where this is going, right? JOE: I see where it's going. SEIF: Eventually, we had, like, we get all these problems because we used Elasticsearch and Kibana as our, you know, observability platform to measure everything around the product we were building. And funny enough, it cost us more than actually maintaining the infrastructure of the product. But not just that, it also kept me up longer because most of the downtimes I would get were not because of the product going down. It's because my Elasticsearch cluster started going down, and there's reasons for that. Because back then, Microsoft Azure thought that it's okay for any VM to lose connection with the rest of the VMs for 30 seconds per day. And then, all of a sudden, you have Elasticsearch with a split-brain problem. And there was a phase where I started getting alerted so much that back then, my partner threatened to leave me. So I bought a...what I think was a shock bracelet or a shock collar via Bluetooth, and I connected it to phone for any notification. And I bought that off Alibaba, by the way. And I would charge it at night, put it on my wrist, and go to sleep. And then, when alert happens, it will fully discharge the battery on me every time. JOE: Okay, I have to admit, I did not see where that was going. SEIF: Yeah, did that for a while; definitely did not save my relationship either. But eventually, that was the point where, you know, we started looking into other observability tools like Datadog, et cetera, et cetera, et cetera. And that's where the actual journey began, where we moved away from Elasticsearch and Kibana to look for something, okay, that we don't have to maintain ourselves and we can use, et cetera. So, it's not about the costs as much; it was just pain. VICTORIA: Yeah, pain is a real pain point, actual physical [chuckles] and emotional pain point [laughter]. What, like, motivates you to keep going with Axiom and to keep, like, the wind in your sails to keep working on it? SEIF: There's a couple of things. I love working with my team. So, honestly, I just wake up, and I compliment my team. I just love working with them. They're a lot of fun to work with. And they challenge me, and I challenge them back. And I upset them a lot. And they can't upset me, but I upset them. But I love working with them, and I love working with that team. And the other thing is getting, like, having this constant feedback from customers just makes you want to do more and, you know, close sales, et cetera. It's interesting, like, how I'm a very technical person, and I'm more interested in sales because sales means your product works, the product, the technical parts, et cetera. Because if technically it's not working, you can't build a product on top of it. And if you're not selling it, then what's the point? You only sell when the product is good, more or less, unless you're Oracle. VICTORIA: I had someone ask me about Oracle recently, actually. They're like, "Are you considering going back to it?" And I'm maybe a little allergic to it from having a federal consulting background [laughs]. But maybe they'll come back around. I don't know. We'll see. SEIF: Did you sell your soul back then? VICTORIA: You know, I feel like I just grew up in a place where that's what everyone did was all. SEIF: It was Oracle, IBM, or HP back in the day. VICTORIA: Yeah. Well, basically, when you're working on applications that were built in, like, the '80s, Oracle was, like, this hot, new database technology [laughs] that they just got five years ago. So, that's just, yeah, interesting. SEIF: Although, from a database perspective, they did a lot of the innovations. A lot of first innovations could have come from Oracle. From a technical perspective, they're ridiculous. I'm not sure from a product perspective how good they are. But I know their sales team is so big, so huge. They don't care about the product anymore. They can still sell. VICTORIA: I think, you know, everything in tech is cyclical. So, you know, if they have the right strategy and they're making some interesting changes over there, there's always a chance [laughs]. Certain use cases, I mean, I think that's the interesting point about working in technology is that you know, every company is a tech company. And so, there's just a lot of different types of people, personas, and use cases for different types of products. So, I wonder, you know, you kind of mentioned earlier that, like, everyone is interested in Axiom. But, you know, I don't know, are you narrowing the market? Or, like, how are you trying to kind of focus your messaging and your sales for Axiom? SEIF: I'm trying to focus on developers. So, we're really trying to focus on developers because the experience around observability is crap. It's stupid expensive. Sorry for being straightforward, right? And that's what we're trying to change. And we're targeting developers mainly. We want developers to like us. And we'll find all these different types of developers who are using it, and that's the interesting thing. And because of them, we start adding more and more features, like, you know, we added tracing, and now that enables, like, billions of events pushed through for, you know, again, for almost no money, again, $25 a month for a terabyte of data. And we're doing this with metrics next. And that's just to address the developers who have been giving us feedback and the market demand. I will sum it up, again, like, the experience is crap, and it's stupid expensive. I think that's the [inaudible 28:07] of observability is just that's how I would sum it up. VICTORIA: If you could go back in time and talk to yourself when you were still a developer, now that you're CTO, what advice would you give yourself? JOE: Besides avoiding shock collars. VICTORIA: [laughs] Yes. SEIF: Get people's feedback quickly so you know you're on the right track. I think that's very, very, very, very important. Don't just work in the dark, or don't go too long into stealth mode because, eventually, people catch up. Also, ship when you're 80% ready because 100% is too late. I think it's the same thing here. JOE: Ship often and early. SEIF: Yeah, even if it's not fully ready, it's still feedback. VICTORIA: Ship often and early and talk to people [laughs]. Just, do you feel like, as a developer, did you have the skills you needed to be able to get the most out of those feedback and out of those conversations you were having with people around your product? SEIF: I still don't think I'm good enough. You're just constantly learning, right? I just accepted I'm part of a team, and I have my contributions. But as an individual, I still don't think I know enough. I think there's more I need to learn at this point. VICTORIA: I wonder, what questions do you have for me or Joe? SEIF: How did you start your podcast, and why the name? VICTORIA: Oh, man, I hope I can answer. So, the podcast was started...I think it's, like, we're actually about to be at our 500th Episode. So, I've only been a host for the last year. Maybe Joe even knows more than I do. But what I recall is that one person at thoughtbot thought it would be a great idea to start a podcast, and then they did it. And it seems like the whole company is obsessed with robots. I'm not really sure where that came from. There used to be a tiny robot in the office, is what I remember. And people started using that as, like, the mascot. And then, yeah, that's it, that's the whole thing. SEIF: Was the robot doing anything useful or just being cute? JOE: It was just cute, and it's hard to make a robot cute. SEIF: Was it a real robot, or was it like a -- JOE: No, there was, at one point, a toy robot. The name...I actually forget the origin–origin of the name, but the name Giant Robots comes from our blog. So, we named the podcast the same as the blog: Giant Robots Smashing Into Other Giant Robots. SEIF: Yes, it's called transformers. VICTORIA: Yeah, I like it. It's, I mean, now I feel like -- SEIF: [laughs] VICTORIA: We got to get more, like, robot dogs involved [laughs] in the podcast. SEIF: Like, I wanted to add one thing when we talked about, you know, what gets me going. And I want to mention that I have a six-month-old son now. He definitely adds a lot of motivation for me to wake up in the morning and work. But he also makes me wake up regardless if I want to or not. VICTORIA: Yeah, you said you had invented an alarm clock that never turns off. Never snoozes [laughs]. SEIF: Yes, absolutely. VICTORIA: I have the same thing, but it's my dog. But he does snooze, actually. He'll just, like, get tired and go back to sleep [laughs]. SEIF: Oh, I have a question. Do dogs have a Tamagotchi phase? Because, like, my son, the first three months was like a Tamagotchi. It was easy to read him. VICTORIA: Oh yeah, uh-huh. SEIF: Noisy but easy. VICTORIA: Yes, yes. SEIF: Now, it's just like, yeah, I don't know, like, the last month he has opinions at six months. I think it's because I raised him in Europe. I should take him back to the Middle East [laughs]. No opinions. VICTORIA: No, dogs totally have, like, a communication style, you know, I pretty much know what he, I mean, I can read his mind, obviously [laughs]. SEIF: Sure, but that's when they grow a bit. But what when they were very...when the dog was very young? VICTORIA: Yeah, they, I mean, they also learn, like, your stuff, too. So, they, like, learn how to get you to do stuff or, like, I know she'll feed me if I'm sitting here [laughs]. SEIF: And how much is one dog year, seven years? VICTORIA: Seven years. SEIF: Seven years? VICTORIA: Yeah, seven years? SEIF: Yeah. So, basically, in one year, like, three months, he's already...in one month, he's, you know, seven months old. He's like, yeah. VICTORIA: Yeah. In a year, they're, like, teenagers. And then, in two years, they're, like, full adults. SEIF: Yeah. So, the first month is basically going through the first six months of a human being. So yeah, you pass...the first two days or three days are the Tamagotchi phase that I'm talking about. VICTORIA: [chuckles] I read this book, and it was, like, to understand dogs, it's like, they're just like humans that are trying to, like, maximize the number of positive experiences that they have. So, like, if you think about that framing around all your interactions about, like, maybe you're trying to get your son to do something, you can be like, okay, how do I, like, I don't know, train him that good things happen when he does the things I want him to do? [laughs] That's kind of maybe manipulative but effective. So, you're not learning baby sign language? You're just, like, going off facial expressions? SEIF: I started. I know how Mama looks like. I know how Dada looks like. I know how more looks like, slowly. And he already does this thing that I know that when he's uncomfortable, he starts opening and closing his hands. And when he's completely uncomfortable and basically that he needs to go sleep, he starts pulling his own hair. VICTORIA: [laughs] I do the same thing [laughs]. SEIF: You pull your own hair when you go to sleep? I don't have that. I don't have hair. VICTORIA: I think I do start, like, touching my head though, yeah [inaudible 33:04]. SEIF: Azure took the last bit of hair I had! Went away with Azure, Elasticsearch, and the shock collar. VICTORIA: [laughs] SEIF: I have none of them left. Absolutely nothing. I should sue Elasticsearch for this shit. VICTORIA: [laughs] Let me know how that goes. Maybe there's more people who could join your lawsuit, you know, with a class action. SEIF: [laughs] Yeah. Well, one thing I wanted to also just highlight is, right now, one of the things that also makes the company move forward is we realized that in a single domain, we proved ourselves very valuable to specific companies, right? So, that was a big, big thing, milestone for us. And now we're trying to move into a handful of domains and see which one of those work out the best for us. Does that make sense? VICTORIA: Yeah. And I'm curious: what are the biggest challenges or hurdles that you associate with that? SEIF: At this point, you don't want just feedback. You want constructive criticism. Like, you want to work with people who will criticize the applic...and you iterate with them based on this criticism, right? They're just not happy about you and trying to create design partners. So, for us, it was very important to have these small design partners who can work with us to actually prove ourselves as valuable in a single domain. Right now, we need to find a way to scale this across several domains. And how do you do that without sacrificing? Like, how do you open into other domains without sacrificing the original domain you came from? So, there's a lot of things [inaudible 34:28]. And we are in the middle of this. Honestly, I Forrest Gumped my way through half of this, right? Like, I didn't know what I was doing. I had ideas. I think it's more of luck at this point. And I had luck. No, we did work. We did work a lot. We did sleepless nights and everything. But I think, in the last three years, we became more mature and started thinking more about product. And as I said, like, our CEO, Neil, and Dominic, our head of product, are putting everything behind being a product-led organization, not just a tech-led organization. VICTORIA: That's super interesting. I love to hear that that's the way you're thinking about it. JOE: I was just curious what other domains you're looking at pushing into if you can say. SEIF: So, we are going to start moving into ETL a bit more. We're trying to see how we can fit in specific ML scenarios. I can't say more about the other, though. JOE: Do you think you'll take the same approaches in terms of value proposition, like, low cost, good enough latency? SEIF: Yes, that's definitely one thing. But there's also...so, this is the values we're bringing to the customer. But also, now, our internal values are different. Now it's more of move with urgency and high velocity, as we said before, right? Think big, work small. The values in terms of values we're going to take to the customers it's the same ones. And maybe we'll add some more, but it's still going to be low-cost and large-scale. And, internally, we're just becoming more, excuse my French, agile. I hate that word so much. Should be good with Scrum. VICTORIA: It's painful, but everyone knows what you're talking about [laughs], you know, like -- SEIF: See, I have opinions here about Scrum. I think Scrum should be only used in terms of iceScrum [inaudible 36:04], or something like that. VICTORIA: Oh no [laughter]. Well, it's a Rugby term, right? Like, that's where it should probably stay. SEIF: I did not know it's a rugby term. VICTORIA: Yeah, so it should stay there, but -- SEIF: Yes [laughs]. VICTORIA: Yeah, I think it's interesting. Yeah, I like the being flexible. I like the just, like, continuous feedback and how you all have set up to, like, talk with your customers. Because you mentioned earlier that, like, you might open source some of your projects. And I'm just curious, like, what goes into that decision for you when you're going to do that? Like, what makes you think this project would be good for open source or when you think, actually, we need to, like, keep it? SEIF: So, we open source libraries, right? We actually do that already. And some other big organizations use our libraries; even our competitors use our libraries, that we do. The whole product itself or at least a big part of the product, like database, I'm not sure we're going to open source that, at least not anytime soon. And if we open source, it's going to be at a point where the value-add it brings is nothing compared to how well our product is, right? So, if we can replace whatever's at the back with...the storage engine we have in the back with something else and the product doesn't get affected, that's when we open source it. VICTORIA: That's interesting. That makes sense to me. But yeah, thank you for clarifying that. I just wanted to make sure to circle back. Since you have this big history in open source, yeah, I'm curious if you see... SEIF: Burning me out? VICTORIA: Burning you out, yeah [laughter]. Oh, that's a good question. Yeah, like, because, you know, we're about to be in October here. Do you have any advice or strategies as a maintainer for not getting burned out during the next couple of weeks besides, like, hide in a cave and without internet access [laughs]? SEIF: Stay away from Reddit and Hacker News. That's my goal for October now because I'm always afraid of getting too attached to an idea, or too motivated, or excited by an idea that I drift away from what I am actually supposed to be doing. VICTORIA: Last question is, is there anything else you would like to promote? SEIF: Yeah, check out our website; I think it's at axiom.co. Check it out. Sign up. And comment on Discord and talk to me. I don't bite, sometimes grumpy, but that's just because of lack of sleep in the morning. But, you know, around midday, I'm good. And if you're ever in Berlin and you want to hang out, I'm more than willing to hang out. VICTORIA: Whoo, that's awesome. Yeah, Berlin is great. I was there a couple of years ago but no plans to go back anytime soon, but maybe I'll keep that in mind. You can subscribe to the show and find notes along with a complete transcript for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you could find me on Twitter @victori_ousg. And this podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening. See you next time. Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at tbot.io/referral. Or you can email us at referrals@thoughtbot.com with any questions. Special Guests: Joe Ferris and Seif Lotfy.
What does Customer Success look like in a company that uses a product-led growth strategy? So many people seem to think that you're either product-led or human-led. But the reality is that most businesses land somewhere in the middle. Ryan Seams, Senior Director of Customer Success and Services at Mixpanel is here to help us understand how a CS team works when a business is largely product-led. He has spent the last 9 years building the CS business in Mixpanel and has learned a ton of lessons that he is sharing with us, so we don't have to make the same mistakes as we scale Whether your business is exploring a digital segment or considering going full PLG, this podcast will give you some insights on how some of the best in the industry are doing it! Music: Workday by Scott Dugdale
Summary The rapid growth of machine learning, especially large language models, have led to a commensurate growth in the need to store and compare vectors. In this episode Louis Brandy discusses the applications for vector search capabilities both in and outside of AI, as well as the challenges of maintaining real-time indexes of vector data. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack) This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold) You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free! If you're a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of spreadsheets and CSV files. Hex brings everything together. Its powerful notebook UI lets you analyze data in SQL, Python, or no-code, in any combination, and work together with live multiplayer and version control. And now, Hex's magical AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you – all from natural language prompts. It's like having an analytics co-pilot built right into where you're already doing your work. Then, when you're ready to share, you can use Hex's drag-and-drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel and Algolia using Hex every day to make their work more impactful. Sign up today at dataengineeringpodcast.com/hex (https://www.dataengineeringpodcast.com/hex) to get a 30-day free trial of the Hex Team plan! Your host is Tobias Macey and today I'm interviewing Louis Brandy about building vector indexes in real-time for analytics and AI applications Interview Introduction How did you get involved in the area of data management? Can you describe what vector search is and how it differs from other search technologies? What are the technical challenges related to providing vector search? What are the applications for vector search that merit the added complexity? Vector databases have been gaining a lot of attention recently with the proliferation of LLM applications. Is a dedicated database technology required to support vector indexes/vector search queries? What are the use cases for native vector data types that are separate from AI? With the increasing usage of vectors for data and AI/ML applications, who do you typically see as the owner of that problem space? (e.g. data engineers, ML engineers, data scientists, etc.) For teams who are investing in vector search, what are the architectural considerations that they need to be aware of? How does it impact the data pipeline strategies/topologies used? What are the complexities that need to be addressed when updating vector data in a real-time/streaming fashion? How does that influence the client strategies that are querying that data? What are the most interesting, innovative, or unexpected ways that you have seen vector search used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on vector search applications? When is vector search the wrong choice? What do you see as future potential applications for vector indexes/vector search? Contact Info LinkedIn (https://www.linkedin.com/in/lbrandy/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com)) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers Links Rockset (https://rockset.com/) Podcast Episode (https://www.dataengineeringpodcast.com/rockset-serverless-analytics-episode-101/) Vector Index (https://www.datastax.com/guides/what-is-a-vector-index) Vector Search (https://www.datastax.com/guides/what-is-vector-search) Rockset Implementation Explanation (https://rockset.com/videos/vector-search-architecture/) Vector Space (https://en.wikipedia.org/wiki/Vector_space) Euclidean Distance (https://en.wikipedia.org/wiki/Euclidean_distance) OLAP == Online Analytical Processing (https://en.wikipedia.org/wiki/Online_analytical_processing) OLTP == Online Transaction Processing (https://en.wikipedia.org/wiki/Online_transaction_processing) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Summary A significant amount of time in data engineering is dedicated to building connections and semantic meaning around pieces of information. Linked data technologies provide a means of tightly coupling metadata with raw information. In this episode Brian Platz explains how JSON-LD can be used as a shared representation of linked data for building semantic data products. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management This episode is brought to you by Datafold – a testing automation platform for data engineers that finds data quality issues before the code and data are deployed to production. Datafold leverages data-diffing to compare production and development environments and column-level lineage to show you the exact impact of every code change on data, metrics, and BI tools, keeping your team productive and stakeholders happy. Datafold integrates with dbt, the modern data stack, and seamlessly plugs in your data CI for team-wide and automated testing. If you are migrating to a modern data stack, Datafold can also help you automate data and code validation to speed up the migration. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold (https://www.dataengineeringpodcast.com/datafold) Introducing RudderStack Profiles. RudderStack Profiles takes the SaaS guesswork and SQL grunt work out of building complete customer profiles so you can quickly ship actionable, enriched data to every downstream team. You specify the customer traits, then Profiles runs the joins and computations for you to create complete customer profiles. Get all of the details and try the new product today at dataengineeringpodcast.com/rudderstack (https://www.dataengineeringpodcast.com/rudderstack) You shouldn't have to throw away the database to build with fast-changing data. You should be able to keep the familiarity of SQL and the proven architecture of cloud warehouses, but swap the decades-old batch computation model for an efficient incremental engine to get complex queries that are always up-to-date. With Materialize, you can! It's the only true SQL streaming database built from the ground up to meet the needs of modern data products. Whether it's real-time dashboarding and analytics, personalization and segmentation or automation and alerting, Materialize gives you the ability to work with fresh, correct, and scalable results — all in a familiar SQL interface. Go to dataengineeringpodcast.com/materialize (https://www.dataengineeringpodcast.com/materialize) today to get 2 weeks free! If you're a data person, you probably have to jump between different tools to run queries, build visualizations, write Python, and send around a lot of spreadsheets and CSV files. Hex brings everything together. Its powerful notebook UI lets you analyze data in SQL, Python, or no-code, in any combination, and work together with live multiplayer and version control. And now, Hex's magical AI tools can generate queries and code, create visualizations, and even kickstart a whole analysis for you – all from natural language prompts. It's like having an analytics co-pilot built right into where you're already doing your work. Then, when you're ready to share, you can use Hex's drag-and-drop app builder to configure beautiful reports or dashboards that anyone can use. Join the hundreds of data teams like Notion, AllTrails, Loom, Mixpanel and Algolia using Hex every day to make their work more impactful. Sign up today at dataengineeringpodcast.com/hex (https://www.dataengineeringpodcast.com/hex) to get a 30-day free trial of the Hex Team plan! Your host is Tobias Macey and today I'm interviewing Brian Platz about using JSON-LD for building linked-data products Interview Introduction How did you get involved in the area of data management? Can you describe what the term "linked data product" means and some examples of when you might build one? What is the overlap between knowledge graphs and "linked data products"? What is JSON-LD? What are the domains in which it is typically used? How does it assist in developing linked data products? what are the characteristics that distinguish a knowledge graph from What are the layers/stages of applications and data that can/should incorporate JSON-LD as the representation for records and events? What is the level of native support/compatibiliity that you see for JSON-LD in data systems? What are the modeling exercises that are necessary to ensure useful and appropriate linkages of different records within and between products and organizations? Can you describe the workflow for building autonomous linkages across data assets that are modelled as JSON-LD? What are the most interesting, innovative, or unexpected ways that you have seen JSON-LD used for data workflows? What are the most interesting, unexpected, or challenging lessons that you have learned while working on linked data products? When is JSON-LD the wrong choice? What are the future directions that you would like to see for JSON-LD and linked data in the data ecosystem? Contact Info LinkedIn (https://www.linkedin.com/in/brianplatz/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com)) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers Links Fluree (https://flur.ee/) JSON-LD (https://json-ld.org/) Knowledge Graph (https://en.wikipedia.org/wiki/Knowledge_graph) Adjacency List (https://en.wikipedia.org/wiki/Adjacency_list) RDF == Resource Description Framework (https://www.w3.org/RDF/) Semantic Web (https://en.wikipedia.org/wiki/Semantic_Web) Open Graph (https://ogp.me/) Schema.org (https://schema.org/) RDF Triple (https://en.wikipedia.org/wiki/Semantic_triple) IDMP == Identification of Medicinal Products (https://www.fda.gov/industry/fda-data-standards-advisory-board/identification-medicinal-products-idmp) FIBO == Financial Industry Business Ontology (https://spec.edmcouncil.org/fibo/) OWL Standard (https://www.w3.org/OWL/) NP-Hard (https://en.wikipedia.org/wiki/NP-hardness) Forward-Chaining Rules (https://en.wikipedia.org/wiki/Forward_chaining) SHACL == Shapes Constraint Language) (https://www.w3.org/TR/shacl/) Zero Knowledge Cryptography (https://en.wikipedia.org/wiki/Zero-knowledge_proof) Turtle Serialization (https://www.w3.org/TR/turtle/) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Welcome to No Hacks Show, a weekly podcast in which we talk to smart guests about the many ways you can optimize your online presence. Today's guest is Eli Schwartz! Eli is a growth advisor who worked with companies like WordPress, Tinder, Coinbase, Mixpanel - to name a few, book and course author, conference speaker, co-host of the Contrarian Marketing podcast… and a lot more.Listen to learn why SEO is never going to die, if SEO is the right investment for every business, why SEO is a 12-18 month investment, and a lot more. If you're a CRO/experimentation practitioner listening to the podcast - you will love Eli (author of Product-Led SEO book). If you're an SEO human, you probably already do :)Episode links:Eli's LinkedInEli's websiteProduct-Led SEO book on Amazon
Traditionally, Mixpanel has provided information about product usage to product teams.
This week, on the EMEA Partner Channel Podcast, the voice of the EMEA channel. Join Palmer Foster who has a conversation with Guillaume Runser, Head of Sales - Growth Markets and former Head of EMEA Partnerships at Mixpanel. Together they talk about: Building and running a partnership program in EMEA: the good, the bad and the ugly Diversity fostering innovation: how running a partnership program across very diverse regions fosters innovation Privacy/Data Security - critical topic in EMEA. Privacy is cultural in EMEA
Brought to you by Amplitude—Build better products | Miro—A collaborative visual platform where your best work comes to life | Ahrefs—Improve your website's SEO for free—Hila Qu is an Executive in Residence at Reforge as well as a renowned growth advisor, angel investor, and published author (her book about growth was named one of the top 10 business books of 2018 in China). Previously, she served as the Director of Growth at GitLab, where she implemented and scaled their PLG motion, and VP of Growth at Acorns, scaling them from 1 million to 5 million users. In today's episode, we discuss:• The importance of having both a product-led and a sales-led motion for companies of all sizes• A step-by-step process for implementing PLG• Common pitfalls of layering on PLG• How to audit your existing funnel• Conversion, activation, and retention tactics• Structuring your growth organization from day one, and as it scales—Find the full transcript at: https://www.lennyspodcast.com/the-ultimate-guide-to-adding-a-plg-motion-hila-qu-reforge-gitlab/#transcript—Where to find Hila Qu:• Twitter: https://twitter.com/HilaQu• LinkedIn: https://www.linkedin.com/in/hilaqu/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Hila's background(03:26) The outcome of writing guest posts for Lenny's Newsletter(05:12) Why companies should have PLG and sales(07:58) What PLG is and why it's so popular(09:41) Zoom, an example of a PLG company(11:24) Common pitfalls in adding a PLG motion(16:06) The spectrum of when PLG makes sense(20:04) What you need to be successful in a product-led growth strategy(24:52) The first step to adding a PLG motion(30:11) What GitLab does and how the sales funnel and PLG funnel work there(34:07) Mapping out the funnel(35:29) Finding leverage and other next steps(38:24) What an aha moment is and conducting an audit(47:30) Activation and conversion (52:17) Why you should start with activation, and who is doing it well(55:24) Retention, the messy part of the funnel(1:00:34) How Hila made an impact on retention at Acorns(1:03:03) The two buckets of data (1:04:56) Tools for implementing a PLG motion(1:08:47) The importance of data (1:10:20) Tips to get started, and why you need to have good data first(1:12:10) How to do a data audit(1:15:04) Building a PLG team(1:22:40) The core growth squad(1:27:51) Lightning round—Referenced:• Hila's guest post on Lenny's Newsletter: https://www.lennysnewsletter.com/p/five-steps-to-starting-your-plg-motion• Ravi Mehta on Lenny's Podcast: https://www.lennyspodcast.com/building-your-product-strategy-stack-ravi-mehta-tinder-facebook-tripadvisor-outpace/• Amplitude: https://amplitude.com/• GitLab: https://about.gitlab.com/• Lauryn Isford on Lenny's Podcast: https://www.lennyspodcast.com/mastering-onboarding-lauryn-isford-head-of-growth-at-airtable/• Acorns: https://signup.acorns.com/• PostHog: https://posthog.com/• Mixpanel: https://mixpanel.com/• Pendo: https://go.pendo.io/• Optimizely: https://www.optimizely.com/• Eppo: https://www.geteppo.com/• HubSpot: https://www.hubspot.com/• Clearbit: https://clearbit.com/• ZoomInfo: https://www.zoominfo.com/• Endgame: https://www.endgame.io/• Pocus: https://www.pocus.com/• Pace: https://www.paceapp.com/• Toplyne: https://www.toplyne.io/• Crystal Widjaja on Lenny's Podcast: https://www.lennyspodcast.com/how-to-scrappily-hire-for-measure-and-unlock-growth-crystal-widjaja-gojek-and-kumu/• Redshift: https://aws.amazon.com/redshift/• The Almanack of Naval Ravikant: A Guide to Wealth and Happiness: https://www.amazon.com/Almanack-Naval-Ravikant-Wealth-Happiness-ebook/dp/B08FF8MTM6• How Women Rise: https://www.amazon.com/How-Women-Rise-Habits-Holding/dp/1847942253/• 硅谷增长黑客实战笔记 (Hila's best-selling book on growth): https://www.amazon.com/dp/B07BZC8L78?ref_=cm_sw_r_cp_ud_dp_ND87BRFMB0CMWBEVB747• The Wandering Earth II: https://wellgousa.com/films/wandering-earth-ii• The Three-Body Problem: https://www.amazon.com/Three-Body-Problem-Cixin-Liu/dp/0765382032• Lululemon yoga pants: https://shop.lululemon.com/c/women-pants/yoga/• ChatGPT: https://chat.openai.com/chat• Someday: https://www.amazon.com/Someday-Alison-McGhee/dp/1416928111—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
How does product design relate to the company's growth goals? Our guest today is Brent Palmer, the product design manager at Mixpanel. You'll learn about their shaping process, how their design team contributed to the development of their new pricing, how to deal with cross-functional teams, and more.Podcast feed: subscribe to https://feeds.simplecast.com/4MvgQ73R in your favorite podcast app, and follow us on iTunes, Stitcher, or Google Podcasts.Show NotesMixpanelShape Up — a book by Ryan SingerFigJam, Notion — shaping toolsConnect with Brent on LinkedInIC2MGMT — Brent's SubstackCheck out Brent's websiteThis show is brought to you by Userlist — an email automation platform for SaaS companies. It matches the complexity of your customer data, including many-to-many relationships between users and companies. Book your demo call today at userlist.com.Interested in sponsoring an episode? Learn more here.Leave a ReviewReviews are hugely important because they help new people discover this podcast. If you enjoyed listening to this episode, please leave a review on iTunes. Here's how.
In episode 48 of The Product Design Podcast, Seth Coelen interviews Brent Palmer, Product Design Manager at Mixpanel, a leading product analytics software company.During our chat, Brent shares how his career path led him into product design leadership and how he knew that was the right path for him. He advises where he has found success in managing a large design team and nurturing culture at scale while helping his team do their best work and flourish in their career paths. This episode is packed with advice for product designers at any stage of their career path, whether you are a junior looking to grow into a manager or a senior leader looking to improve your management style.During our interview with Brent, you will learn:
Summary With the rise of the web and digital business came the need to understand how customers are interacting with the products and services that are being sold. Product analytics has grown into its own category and brought with it several services with generational differences in how they approach the problem. NetSpring is a warehouse-native product analytics service that allows you to gain powerful insights into your customers and their needs by combining your event streams with the rest of your business data. In this episode Priyendra Deshwal explains how NetSpring is designed to empower your product and data teams to build and explore insights around your products in a streamlined and maintainable workflow. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management Join in with the event for the global data community, Data Council Austin. From March 28-30th 2023, they'll play host to hundreds of attendees, 100 top speakers, and dozens of startups that are advancing data science, engineering and AI. Data Council attendees are amazing founders, data scientists, lead engineers, CTOs, heads of data, investors and community organizers who are all working together to build the future of data. As a listener to the Data Engineering Podcast you can get a special discount of 20% off your ticket by using the promo code dataengpod20. Don't miss out on their only event this year! Visit: dataengineeringpodcast.com/data-council (https://www.dataengineeringpodcast.com/data-council) today! RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enable you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudder (https://www.dataengineeringpodcast.com/rudder) Your host is Tobias Macey and today I'm interviewing Priyendra Deshwal about how NetSpring is using the data warehouse to deliver a more flexible and detailed view of your product analytics Interview Introduction How did you get involved in the area of data management? Can you describe what NetSpring is and the story behind it? What are the activities that constitute "product analytics" and what are the roles/teams involved in those activities? When teams first come to you, what are the common challenges that they are facing and what are the solutions that they have attempted to employ? Can you describe some of the challenges involved in bringing product analytics into enterprise or highly regulated environments/industries? How does a warehouse-native approach simplify that effort? There are many different players (both commercial and open source) in the product analytics space. Can you share your view on the role that NetSpring plays in that ecosystem? How is the NetSpring platform implemented to be able to best take advantage of modern warehouse technologies and the associated data stacks? What are the pre-requisites for an organization's infrastructure/data maturity for being able to benefit from NetSpring? How have the goals and implementation of the NetSpring platform evolved from when you first started working on it? Can you describe the steps involved in integrating NetSpring with an organization's existing warehouse? What are the signals that NetSpring uses to understand the customer journeys of different organizations? How do you manage the variance of the data models in the warehouse while providing a consistent experience for your users? Given that you are a product organization, how are you using NetSpring to power NetSpring? What are the most interesting, innovative, or unexpected ways that you have seen NetSpring used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on NetSpring? When is NetSpring the wrong choice? What do you have planned for the future of NetSpring? Contact Info LinkedIn (https://www.linkedin.com/in/priyendra-deshwal/) Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? Closing Announcements Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ (https://www.pythonpodcast.com) covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast (https://www.themachinelearningpodcast.com) helps you go from idea to production with machine learning. Visit the site (https://www.dataengineeringpodcast.com) to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com (mailto:hosts@dataengineeringpodcast.com)) with your story. To help other people find the show please leave a review on Apple Podcasts (https://podcasts.apple.com/us/podcast/data-engineering-podcast/id1193040557) and tell your friends and co-workers Links NetSpring (https://www.netspring.io/) ThoughtSpot (https://www.thoughtspot.com/) Product Analytics (https://theproductmanager.com/topics/product-analytics-guide/) Amplitude (https://amplitude.com/) Mixpanel (https://mixpanel.com/) Customer Data Platform (https://blog.hubspot.com/service/customer-data-platform-guide) GDPR (https://en.wikipedia.org/wiki/General_Data_Protection_Regulation) CCPA (https://en.wikipedia.org/wiki/California_Consumer_Privacy_Act) Segment (https://segment.com/) Podcast Episode (https://www.dataengineeringpodcast.com/segment-customer-analytics-episode-72/) Rudderstack (https://www.rudderstack.com/) Podcast Episode (https://www.dataengineeringpodcast.com/rudderstack-open-source-customer-data-platform-episode-263/) The intro and outro music is from The Hug (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/Love_death_and_a_drunken_monkey/04_-_The_Hug) by The Freak Fandango Orchestra (http://freemusicarchive.org/music/The_Freak_Fandango_Orchestra/) / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/)
Tom Melbourne is the Founder of OpnMkt (pronounced Open Market) which is a Salesforce-Native utility app on the AppExchange for a marketplace approach to the distribution of new leads or new opportunities. OpnMkt is more than just a sales effectiveness tool. It's completely reimagining the status quo when it comes to maximizing sales capacity.Tom has had an incredible career on his way to starting OpnMkt. He started out as an Account Executive where he was very successful and then went on to leading sales teams for companies like CareerBuilder, Citrix, MixPanel and Sendoso. He's been a VP of Sales 3x over and is also an angel investor and a strategic advisor.In this episode, Tom challenges the way we think about sales structure in terms of pre-defined territories and/or named account lists. He offers an innovative alternative to traditional SDR to AE routing/distribution that can inherently open up more sales capacity.#salesconsultantpodcast #salescapacity #leadrouting #accountdistribution #sdr #salesdevelopment #salesterritory #namedaccountsTime Stamps:[2:00] Tom starts by telling the story of starting a wedding DJ business at 12 which he ran all the way through college. This led to him and a partner creating a wedding planning app right out of college that they sold to companies. [8:00] Explains what he attributes his progressive career to. Tom gives some advice to people who might be earlier in their career; nurture your network.[10:23] Nobody cares about a lead until someone wants to buy something + the origin story of OpnMkt.[13:00] The one thing missing from lead distribution platforms today is involvement from the Account Executive. The more excited a rep is about working a lead the better that lead will be worked.[15:00] The current model of SDR to AE opportunity routing is inherently inconsistent due to the AE's expectation varying based on the health of their funnel. [19:00] Pre structured territories limit the ability to dynamically route leads. We talk through an example where a lead surfaced that was generated via a rep's brand building/social media activity and how unless an exception was made that rep most likely wouldn't get the lead.[22:30] Tom helps us understand what “Sales Capacity” even means. Then he gets into how we inherently limit capacity through the traditional approaches to structuring our orgs and our territories.[26:00] We unpack an alternative approach to just defaulting to the same ol ‘assign everything to the salespeople/Account Executives'. The approach is innovative ad creates optionality which leads to increased sales capacity.[37:00] He explains the inherent challenges to the traditional SDR/AE pod model (i.e., personality conflicts, inconsistent SDR development, etc.) [41:30] The tension between quality and quantity and how SDR compensation changes for the better in this new model. [47:00] Why and how creating optionality in your distribution helps maximize sales capacity.Mentions:OpnMkt - https://www.opnmkt.ioSendoso - https://sendoso.comConnect with us:Tom's LinkedIn - https://www.linkedin.com/in/tommelbourne/OpnMkt - https://www.opnmkt.ioOpnMkt on the Salesforce AppExchange -
Natalie Marcotullio is Head of Growth and Ops at Navattic. I've been working with Natalie and the Navattic team for 4+ months now as a Marketing Advisor so I personally have seen some of the creative work they're putting out, and especially as an early stage B2B startup! Prior, Natalie held various roles including Marketing Director and Chief of Staff at Map My Customers (a Series A SaaS). Navattic was founded in 2020 and is based out of NYC. They've raised $5.4M funding (Seed). Navattic helps you instantly create interactive product demos. Customers include Mixpanel, Google Cloud, DropBox and more. Here's what we cover in this episode: What does “creative” mean; What are some cool, creative approaches and experiments at Navattic; How do you balance creativity with results (HINT: look at down-funnel conversions, you should be getting faster, easier deal cycles); How are startups doing on the “creative meter'' (HINT: a lot of marketers can't be creative because they're doing too much); Why marketing inputs are just as important as outputs; When has Natalie personally been the most creative; Natalie asks me her burning question. You can find Natalie on LinkedIn. You can learn more about Navattic. For more content, subscribe to Modern Startup Marketing on Apple or Spotify or wherever you like to listen, and don't forget to leave a review! And whenever you're ready, there are 3 ways I can help you: 1. Startup marketing strategy, execution and advising (25+ happy clients and mentees) >> www.furmanovmarketing.com 2. Sign up to get my monthly newsletter where I'm sharing playbooks and insights and cracking some jokes that will make you smile guaranteed >> https://share.hsforms.com/1cP1V40x7RGes5gHk1XNgNw47lba 3. Sponsor my Top 10% podcast and get startup founders, marketers and VCs hearing about your brand >> https://anchor.fm/anna-furmanov You can also find me hanging out on LinkedIn every single week: www.linkedin.com/in/annafurmanov --- Send in a voice message: https://anchor.fm/anna-furmanov/message
Brought to you by Pando—Always on employee progression (https://www.pando.com/lenny), Notion—One workspace. Every team (https://www.notion.com/lennyspod), and Lemon.io—A marketplace of vetted software developers (https://lemon.io/lenny).Vijay Iyengar is Head of Product at Mixpanel, and similar to myself, came from an engineering background before transitioning to product. In today's episode, he explains how Mixpanel has evolved its growth strategy from a fast-paced, feature-focused approach to a more deliberate approach that prioritizes design and user experience. He also shares how Mixpanel irons out customer problems, including implementing internal tools that allow engineering and product teams to respond to customer feedback directly. Additionally, Vijay shares his top SaaS products, books, frameworks, and more. Tune in to gain valuable insights from a seasoned product leader.Find the transcript for this episode and all past episodes at: https://www.lennyspodcast.com/episodes/. Today's transcript will be live by 8 a.m. PT.Where to find Vijay Iyengar:• Twitter: https://twitter.com/vijayiyengar• LinkedIn: https://www.linkedin.com/in/vijay4/Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/Referenced:• Mixpanel: https://mixpanel.com/• Figma: https://www.figma.com/• Notion: https://www.notion.so/• “Shape Up: Stop Running in Circles and Ship Work That Matters”: https://basecamp.com/shapeup• The RICE prioritization framework: https://www.productplan.com/glossary/rice-scoring-model/• BigQuery: https://cloud.google.com/bigquery• Census: https://www.getcensus.com/• Zoom: https://zoom.us/• FigJam: https://www.figma.com/figjam/• A Data Stack for PLG teams: https://mixpanel.com/blog/data-analytics-product-led-growth/• Product analytics in the modern data stack: https://mixpanel.com/blog/mixpanel-partners-with-census-to-bring-product-analytics-to-the-modern-data-stack/• Snowflake: https://www.snowflake.com/en/• Amazon Redshift: https://www.amazonaws.cn/en/redshift/• Event-Based Analytics: https://developer.mixpanel.com/docs/under-the-hood• The Goal: A Process of Ongoing Improvement: https://www.amazon.com/Goal-Process-Ongoing-Improvement/dp/0884271951• Cool Gray City of Love: 49 Views of San Francisco: https://www.amazon.com/Cool-Gray-City-Love-Francisco/dp/1608199606• The West Wing Weekly podcast: http://thewestwingweekly.com/• WeCrashed on AppleTV+: https://tv.apple.com/us/show/wecrashed/• Severance on AppleTV+: https://tv.apple.com/us/show/severance/• Gibson Biddle on Lenny's Podcast: https://www.lennyspodcast.com/gibson-biddle-on-his-dhm-product-strategy-framework-gem-roadmap-prioritization-framework-5-netflix-strategy-mini-case-studies-building-a-personal-board-of-directors-and-much-more/• Shishir Mehrotra on Lenny's Podcast: https://www.lennyspodcast.com/the-rituals-of-great-teams-shishir-mehrotra-coda-youtube-microsoft/In this episode, we cover:(00:00) Vijay's background(04:07) How Vijay learned to be more open-minded to new ideas (06:26) Mixpanel's journey(12:40) When to optimize for speed(13:49) The feature phase vs. the design phase(17:02) The importance of not losing focus on your core product(19:52) How Mixpanel organizes teams around buckets of problems(20:43) Mixpanel's most recent six-month time horizon planning cycle(25:08) The RICE framework for prioritization (and when to ignore the C and E)(26:31) The problem with estimations, and why Basecamp suggests using a six-week time box(30:04) How Mixpanel keeps product teams and engineers connected to customers via Slack (33:21) SaaS tools Mixpanel's teams use(34:54) The biggest product analytics mistakes(37:34) The present and future of analytics (41:05) How adopting a product mindset has helped Vijay grow his career(41:47) Lightning roundProduction and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
More Than Just Code podcast - iOS and Swift development, news and advice
This week we discuss the new M2 Max, M2 Pro and Mac mini, MacBook Pros 14 & 16. We follow up on Stable Diffusion, ChatGPT and updated Apple Design Resources. We also cover augmenting accessibility with localized image names and the 2nd generation HomePod. In our Picks; Improving Console Output, SwiftUI Views Life Cycle, SwiftUI 4 adds tap location, DIY iOS Static Analysis, Gitignore.io, Getting Started with Xcode Cloud, and How to professionally say...
Madhavan Ramanujam is a senior partner at Simon-Kucher, where he works with tier-one tech companies like Uber, Asana, and LinkedIn to help them develop their pricing and monetization strategies. He's also the author of the most widely read book on pricing strategy, Monetizing Innovation. In today's podcast, we talk about all the elements that go into your pricing strategy. Madhavan gives real-life examples of having conversations about “willingness to pay,” how segmentation should impact your pricing, and when to start thinking about pricing. He also shares tips on how behavioral pricing impacts your thinking, how to restructure your pricing during a downturn, and much more.—Find the full transcript here: https://www.lennyspodcast.com/the-art-and-science-of-pricing-madhavan-ramanujam-monetizing-innovation-simon-kucher/#transcript—Thank you to our wonderful sponsors for making this episode possible:• Lemon.io—a marketplace of vetted software developers. Get your match within 48h: https://lemon.io/lenny• Mixpanel—product analytics that everyone can trust, use, and afford: https://mixpanel.com/startups• Miro—a collaborative visual platform where your best work comes to life: https://miro.com/lenny—Where to find Madhavan Ramanujam:• Twitter: https://twitter.com/madhavansf• LinkedIn: https://www.linkedin.com/in/madhavan-ramanujam-1533063/• Website: https://www.simon-kucher.com/en-us/leadership—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—Referenced:• Monetizing Innovation: https://www.amazon.com/Monetizing-Innovation-Companies-Design-Product/dp/1119240867• It's Price Before Product. Period: https://review.firstround.com/its-price-before-product-period• Rahul Vohra on NFX podcast: https://www.youtube.com/watch?v=JCiMqVC6Dok• Leaders, Killers, and Fillers framework: https://kevinacohn.medium.com/leaders-fillers-and-killers-creating-bundles-that-work-7f4c7329cf53• Predictably Irrational: https://www.amazon.com/Predictably-Irrational-Revised-Expanded-Decisions/dp/0061353248• Preorder Unlocking Growth: https://www.amazon.com/Maximize-Profits-Prices-Products-Services/dp/1119633060• Confessions of the Pricing Man: https://www.amazon.com/Confessions-Pricing-Man-Affects-Everything/dp/B08TZPRKVY• Simon-Kucher books: https://www.simon-kucher.com/en-us/resources/books• Mastering SaaS pricing (Kyle Poyar): https://www.saas-knowledge-base.com/docs/mastering-saas-pricing-from-mvp-to-ipo• 6 Must-Reads on Pricing a Product: https://review.firstround.com/our-6-must-reads-on-pricing-a-product—In this episode, we cover:(03:24) Madhavan's background(06:29) How Madhavan got into pricing and monetization(08:02 ) Why he wrote Monetizing Innovation(09:43) Why pricing is a cross-functional discipline, but ultimately a function of product(11:27) What “willingness to pay” is, and why founders need to have conversations about it early and often(15:23) How Porsche built their SUV around customer feedback and willingness to pay(18:46) How testing helped a marketplace company avoid building something customers don't value(23:50) Several methods to use to learn willingness to pay(33:38) When and how the willingness-to-pay conversations happen(37:08) How many customers you should be talking to(38:13) When to revisit pricing(39:20) Segmentation strategies(42:42) Why you need to act differently to your segments that have different needs(44:33) When to think about segmentation(47:49) Examples of segmentation done well(52:24) The importance of dynamic segmentation(53:19) The three pricing strategies: maximizing, penetrating, and skimming(55:49) How to use bundling and packaging to unlock segmentation(59:50) Why how you charge is more important than how much(1:03:30) Subscription vs. usage (1:07:40) Pricing options and structures(1:10:22) How to run tests to see which pricing model works best(1:12:06) Focusing on benefits vs. features(1:16:13) What behavioral pricing is and why it's important(1:20:54) Tactics for behavioral pricing(1:26:33) Determining pricing thresholds (1:28:23) Tips for pricing in a depressed market(1:32:50) Madhavan's new book—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Eli Schwartz literally wrote the book on "Product-led SEO," a practice he has applied at leading B2B and B2C companies such as Coinbase, Zendesk, Gusto, MixPanel, Quora and many more. In this conversation he shares practical advice on implementing this strategy along with examples from Zillow, Figma, Canva and others. We also talk about hiring for SEO in a startup and where SEO should live inside the org, as well as AI tools in SEO. Sponsor: This podcast is brought to you by grwth.co. Grwth offers fractional CMOs, paired with best-in-class digital marketing execution to support early-stage startup success. With a focus on seed and series A companies, Grwth has helped a number of SaaS, digital health, and e-commerce startups build their go-to-market function and scale up. To learn more and book a free consultation, go to grwth.co. Episode timestamps: (1:27) What is Product-led SEO? (4:07) Zillow case study (7:24) The three pillars of SEO (9:45) Product-led SEO in B2B vs. B2C companies (11:59) Planting as many seeds as possible (13:18) Leveraging AI in SEO (16:11) Hiring for SEO at the early stages (17:34) Where should SEO live within the organization? (18:49) Houston vs. the Bay area (21:29) When does product-led SEO not work? (22:48) Figma & Canva case studies (see notes below) (24:08) The value of link-building (25:29) Applying this to eCommerce (26:38) Final thoughts Guest contact info: https://www.linkedin.com/in/schwartze/ https://twitter.com/5le Book: https://a.co/d/i8oAiRS Further reading: When you're done the book and want more examples of product-led SEO, check out this article from Kevin Indig (who coincidentally recently started a podcast with Eli called "Contrarian Marketing"): https://www.kevin-indig.com/5-examples-of-product-led-seo/ And here are a couple of great articles on Canva's SEO growth strategy: https://buildd.co/marketing/canva-seo-case-study https://thegrowthplaybook.substack.com/p/canvas-seo-strategy-is-elite Get in touch: https://www.linkedin.com/in/moshehp/ https://twitter.com/MoshehP hello@pmfpod.com www.pmfpod.com
Ethan Smith is the CEO of Graphite, a boutique growth agency that's helped companies like MasterClass, Thumbtack, Robinhood, Medium, and Honey develop and execute their SEO strategies. SEO is one of the least-understood levers for growth, while also one with the biggest payoff. This episode is a true master class on all things SEO. Ethan shares a wealth of information, including when you should begin investing in SEO, how to build an SEO team, and the three main buckets of SEO. He explains the difference between topics and keywords, gives the exact heuristics and tools to help you be successful in developing and implementing your own SEO strategy, and also goes deep on how to deal with roadblocks and advocate for resources.—Find the full transcript here: https://www.lennyspodcast.com/the-ultimate-guide-to-seo-ethan-smith-graphite/#transcript—Where to find Ethan Smith:• Twitter: https://twitter.com/ethan_l_s• LinkedIn: https://www.linkedin.com/in/ethanls/• Graphite: https://www.graphitehq.com/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—Thank you to our wonderful sponsors for making this episode possible:• Coda: https://coda.io/lenny• Mixpanel: https://mixpanel.com/startups• Lemon.io: https://lemon.io/lenny—Referenced:• Topical Authority Analysis: https://bit.ly/topical-authority-tool• SEO Link Analysis: https://bit.ly/diagnostic-internal-links• SEO Links API: https://bit.ly/graphite-internal-links-api• Screaming Frog: https://www.screamingfrog.co.uk/seo-spider/• Brandon Lee of Power: https://www.linkedin.com/in/brandonhli• Similarweb traffic analysis: https://www.similarweb.com/• MasterClass: https://www.masterclass.com/• BetterUp: https://www.betterup.com/• NerdWallet: https://www.nerdwallet.com/• HubSpot: https://www.hubspot.com/• Ahrefs: https://ahrefs.com/• Semrush: https://www.semrush.com/• Google Search Console: https://search.google.com/search-console/about• Clearscope: https://www.clearscope.io/• Yuriy Timen on Lenny's Podcast: https://www.lennyspodcast.com/how-to-grow-a-subscription-business-yuriy-timen-grammarly-canva-airtable/• Gokul Rajaram on Lenny's Podcast: https://www.lennyspodcast.com/gokul-rajaram-on-designing-your-product-development-process-when-and-how-to-hire-your-first-pm-a-playbook-for-hiring-leaders-getting-ahead-in-you-career-how-to-get-started-angel-investing-more/• Luc Levesque on Twitter: https://twitter.com/luclevesque• Search Off the Record: https://podcasts.apple.com/us/podcast/search-off-the-record/id1512522198• GPT-3: https://gpt3demo.com/apps/openai-gpt-3-playground—In this episode, we cover:(03:53) Ethan's background(07:53) Why technical audits are the biggest myth in SEO(10:05) When to invest in SEO(16:09) Heuristics to determine if SEO is worth it(18:36) The three buckets of SEO: programmatic, editorial, and technical(23:30) The process for creating an SEO strategy(27:00) Why you shouldn't be too formulaic (28:33) What is site engagement?(29:31) Which pages need to be indexed(31:49) Topics vs. keywords(36:33) How to mine competitors' sites for information(37:41) Useful tools for developing your SEO strategy(40:14) How long will it take to see results?(45:16) Factors to consider when looking to hire an SEO person(47:33) The functions of a programmatic SEO person(49:19) How to do testing(54:06) Editorial SEO strategy(57:14) How to scale based on the size of the site(59:51) Page types(1:01:53) How to win in a topic category(1:03:12) How to build solid hypotheses and test them (1:06:13) How to deal with roadblocks and advocate for resources(1:08:54) How topical and domain authority are determined(1:16:43) The power of internal links(1:24:32) Why AI is not usually useful for content creation(1:28:31) Final tips for getting started with SEO—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Petra Wille is an independent product leadership coach who's been helping product teams expand their skill sets since 2013. She's also the author of Strong Product People, which she published in 2021. Alongside her freelance work, Petra curates and co-organizes Mind The Product Engage Hamburg. She started her career as a software developer and in 2008 went to work at Xing, a German social media site, where she learned from two incredible product leaders: Marty Cagan and Jason Goldberg. In today's podcast, we talk about Petra's book, and how to help your team grow as a product leader. Petra also shares how to improve your storytelling skills, get better at public speaking, and why community is so important for product managers.—Find the full transcript here: https://www.lennyspodcast.com/how-to-be-the-best-coach-to-product-people-petra-wille-strong-product-people/#transcript—Where to find Petra Wille:• Twitter: https://twitter.com/loomista• LinkedIn: https://www.linkedin.com/in/petra-wille-b8b1329/?originalSubdomain=de• Website: https://www.petra-wille.com/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—Thank you to our wonderful sponsors for making this episode possible:• Flatfile: https://www.flatfile.com/lenny• Mixpanel: https://mixpanel.com/startups• AssemblyAI: https://www.assemblyai.com/?utm_source=lennyspodcast&utm_medium=podcast&utm_campaign=nov27—Referenced:• PMwheel framework: https://www.strongproductpeople.com/pmwheel• Marty Cagan's assessment: https://www.svpg.com/coaching-tools-the-assessment/• PM Daisy: https://pmdaisy.com/• The Eisenhower matrix for prioritization: https://www.productplan.com/glossary/eisenhower-matrix/• Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value: https://www.amazon.com/Continuous-Discovery-Habits-Discover-Products/dp/1736633309• Mochary Method Curriculum: https://docs.google.com/document/d/18FiJbYn53fTtPmphfdCKT2TMWH-8Y2L-MLqDk-MFV4s/edit• Matt Mochary on Lenny's podcast: https://www.lennyspodcast.com/videos/how-to-fire-people-with-grace-work-through-fear-and-nurture-innovation-matt-mochary/• Hans Rosling's Ted talks: https://www.ted.com/speakers/hans_rosling• Sarah Kay: If I should have a daughter: https://www.ted.com/talks/sarah_kay_if_i_should_have_a_daughter?• Nobody Wants to Read Your Sh*t: Why That is and What You Can Do About It: https://www.amazon.com/Nobody-Wants-Read-Your-Sh-ebook/dp/B01GZ1TJBI• Selling the Dream: https://www.amazon.com/Selling-Dream-Guy-Kawasaki/dp/0887306004• Nancy Duarte's website: https://www.duarte.com/• The 72 Rules of Storytelling: https://www.linkedin.com/pulse/72-rules-commercial-storytelling-jeremy-waite/• The Art of Thinking Clearly:https://www.amazon.com/Art-Thinking-Clearly-Rolf-Dobelli/dp/0062219693• Outcomes Over Output: https://www.amazon.com/Outcomes-Over-Output-customer-behavior/dp/1091173265• Martin Erickson's Decision Stack: https://martineriksson.com/the-decision-stack• Present Yourself Kickstarter: https://www.kickstarter.com/projects/womentalkdesign/present-yourself-a-public-speaking-book• The Product Experience podcast: https://www.mindtheproduct.com/the-product-experience/• Product podcast in German: https://www.produktmenschen.de/• Watch New Amsterdam on Peacock: https://www.peacocktv.com/stream-tv/new-amsterdam• Harvest bookkeeping and time tracking: https://www.getharvest.com/• Qanto: https://qonto.com/en—In this episode, we cover:(03:35) Petra's background(05:51) The things leaders of product teams don't always understand(09:33) Why Petra wrote the book Strong Product People to help managers of product teams (11:21) The five ingredient coaching method(17:00) Why Petra usually recommends starting coaching with a development plan(19:31) Why weekly time should be carved out for ‘people development'(21:16) How to define a competent PM in your organization and tools to help you(24:06) Petra's PM Wheel and how she developed it(27:36) Other info product leads will find useful in Petra's book(30:46) Tips for coaching your team(35:17) How to improve your storytelling(40:56) How to get better at public speaking(44:45) Why it's important to develop good storytelling and public speaking skills (53:36) The importance of a community of practice for product people(56:14) Why people tend to stick around when they are supported and growing in a community(57:53) What to look for in a community(1:06:48) Lightning round—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Ian McAllister is the Senior Director of Product for Vehicles at Uber. Before moving to Uber, Ian spent over a decade directing teams at Amazon, where he created and led Amazon Smile. He was also Director of Product Management at Airbnb, where I was lucky enough to have worked alongside him. In today's episode, we discuss Ian's famous document about the essential attributes of the top 1% of product managers. Ian outlines the most important skills to focus on for entry-level PMs and how to broaden your experience and diversify skills as you move up the ladder. He also shares what he learned working with Jeff Wilke, Jeff Bezos, and other leaders at Amazon, and goes in depth on Amazon's working-backwards framework. —Find the full transcript here: https://www.lennyspodcast.com/what-it-takes-to-become-a-top-1-pm-ian-mcallister-uber-amazon-airbnb/#transcript—Where to find Ian McAllister:• Newsletter: https://ianmcallister.substack.com/• Twitter: https://twitter.com/ianmcall• LinkedIn: https://www.linkedin.com/in/ianmcallister/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—Thank you to our wonderful sponsors for making this episode possible:• Mixpanel: https://mixpanel.com/startups• Athletic Greens: https://athleticgreens.com/lenny• AssemblyAI: https://www.assemblyai.com/?utm_source=lennyspodcast&utm_medium=podcast&utm_campaign=nov20—Referenced:• What distinguishes the top 1% of product managers from the top 10%, on Substack: https://ianmcallister.substack.com/p/what-distinguishes-the-top-1-of-product• What distinguishes the top 1% of product managers from the top 10%, on Quora: https://www.quora.com/What-distinguishes-the-Top-1-of-product-managers-from-the-Top-10• Amazon's working-backwards method: https://www.productplan.com/glossary/working-backward-amazon-method/• Jeff Wilke on Twitter: https://twitter.com/jeffawilke• Getting Real: The Smarter, Faster, Easier Way to Build a Successful Web Application: https://www.amazon.com/Getting-Real-Smarter-Successful-Application/dp/0578012812• Wool (Wool trilogy #1): https://www.amazon.com/Wool-Trilogy-Howey-25-Apr-2013-Paperback/dp/B011T7ACU0/• Energy and Civilization: A History: https://www.amazon.com/Energy-Civilization-History-MIT-Press/dp/0262035774• How I Built This podcast: https://www.npr.org/series/490248027/how-i-built-this• EV News Daily podcast: https://www.evnewsdaily.com/• Yellowstone on Peacock: https://www.peacocktv.com/stream-tv/yellowstone• Everything Everywhere All at Once on Showtime: https://www.sho.com/titles/3493875/everything-everywhere-all-at-once• Gibson Biddle's website: https://www.gibsonbiddle.com/• Gibson Biddle on Lenny's Podcast: https://www.lennyspodcast.com/gibson-biddle-on-his-dhm-product-strategy-framework-gem-roadmap-prioritization-framework-5-netflix-strategy-mini-case-studies-building-a-personal-board-of-directors-and-much-more/• Gibson Biddle's Ask Gib newsletter: https://askgib.substack.com/—In this episode, we cover:(03:54) What Ian expected from his initial post on product management(05:30) How the post impacted Ian's career(07:06) How writing can help you crystallize your thoughts(08:26) Ian's background(10:57) Attributes of the top 1% of PMs(14:32) The top three skills for new PMs to perfect(20:32) Tips on strengthening communication and prioritization(23:06) How to level up as a PM(26:37) What kind of impact should new PMs expect to make?(29:36) How to broaden your view and think big(33:06) How to earn the trust of others(34:30) How Ian could have done more to earn trust at Airbnb(37:27) Why people tend to stick around Amazon for a while (39:53) What Ian learned from Bezos and Wilke(46:38) How teams get working backwards wrong(53:51) The two parts of working backwards and how Ian utilizes it at Uber(58:57) Lightning round—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
For this episode, we have gathered a group of top product leaders from Booking.com (Sanchit Juneja), Mixpanel (Vijay Iyengar), Indicative (Esmeralda Martinez), and Quantum Metric (Alex Thomson) to talk about leading and continuing to build successful products during uncertain times. Come learn about what it takes to be a strong leader no matter what comes your way. Get the FREE Product Book and check out our curated list of free Product Management resources here.Want to see how users experience your website or app? FullStory's award-winning platform gathers data on user experiences in real-time, allowing product teams to better understand issues and successes in aggregate. Get started at fullstory.com.
What tools do you use for sales and support? Which tools work best for project management? How do you know who to hire first if you're a bootstrapped company? Brian and Jordan asked for your questions and Twitter delivered. Today, they are answering Twitter's questions about what tools they use, and who to hire, and when. If you have any questions, comments, or topic ideas for Bootstrapped Web, leave us a message here. “It's interesting the way we think of tools and how we sell tools to other companies.” – Jordan Powered By the Tweet This PluginTweet This Here are today's conversation Points: NoSnow TinyConfTools that we use: Techstack (Laravel, Angular, React, Ruby on Rails, PHP, Tailwind)Project Management (Github, Notion, MixPanel, Confluence, Jira)Marketing Changelog Sales, Marketing, and Support (Pipedrive, Asana, Hubspot, Salesforce, Intercom, Customer.io, HelpSpace, HelpScout)Metrics and Internal Tracking (Grafana, Profitwell, ChartMogul, MixPanel, Amplitude, Google Analytics, Fathom Analytics, Plausible Analytics)Communication (Slack) Hiring when you're bootstrapped, versus when you're fundedWho to hire firstWho they are hiring now
The people who rise fastest in product know how to sell their ideas to customers, and also to their coworkers. Casey Winters, the Chief Product Officer at Eventbrite (previously at Grubhub, Pinterest, and advisor to dozens of companies) shares what it takes to be successful as you rise in the ranks within product. In this episode we’ll talk about how to land presentations, how to win over executives with strategic communication, the skill sets that are most in demand in product, and new growth trends. Join us.— Thank you to our wonderful sponsors for making this episode possible: • Coda: http://coda.io/lenny • Mixpanel: https://mixpanel.com/startups• Whimsical: https://whimsical.com/lenny — Where to find Casey Winters: • Twitter: https://twitter.com/onecaseman• LinkedIn: https://www.linkedin.com/in/caseywinters/— Where to find Lenny: • Newsletter: https://www.lennysnewsletter.com• Twitter: https://twitter.com/lennysan • LinkedIn: https://www.linkedin.com/in/lennyrachitsky/ — In this episode, learn:[00:00] What to expect in this episode with Casey Winters [03:23] An overview of Casey’s career [06:18] A look into the most-fulfilling and challenging roles Casey has energized [06:50] Communicating upward[11:18] How to derisk meetings[13:53] Are you properly preparing for your meetings?[19:09] Striving for perceived simplicity [24:22] Justifying non-sexy product improvements [27:47] Protecting what you’ve built vs continuously scaling [31:03] The downfall of functional ops roles[35:21] The CPO role: what it is and how to get there[40:44] The spectrum of product people[45:11] How to level up your skills[47:01] New growth trends, tactics, and strategies [50:32] Casey’s two stages of growth: kindle strategies and fire strategies[51:51] Under appreciated growth strategies [54:02] Where to find Casey Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Nickey Skarstad is a Director of Product Management at Duolingo, where she is leading a stealth 0 → 1 product. Prior to Duolingo, she was VP of Product at The Wing, Product Lead at Airbnb, where she led much of the Experiences product team, Product Lead at Shopify, and Director of Product Management at Etsy.—Thank you to our wonderful sponsors for making this episode possible:• Mixpanel: https://mixpanel.com/startups• Dovetail: https://dovetailapp.com/lenny• Unit: https://unit.co/lenny—In this episode:[3:32] An overview of Nickey’s career[7:39] What she learned from building product at Airbnb[8:42] How to maintain and operationalize product quality[9:44] Metrics that help you maintain quality[20:08] Which company has most informed her product development approach[21:57] How to structure your product org[24:47] Should you go GM vs. functional[27:18] How you set vision, translate that into goals, and then execute on it[32:30] Brainstorming advice[35:04] How to use OKRs effectively[37:57] How to get better at influence as a PM[41:23] How to know if a decision is a one-way or two-way door[42:29] Second-order decisions, and second-order thinking[46:35] Operationalizing principles[47:17] Getting your team on board with your strategy[49:39] Designing a product review meeting[54:08] Tips for working remotely as a PM[56:44] Lightning round—Where to find Nickey:• Newsletter: https://nickey.substack.com/• TikTok: https://www.tiktok.com/@nickeyskarstad• LinkedIn: https://www.linkedin.com/in/nickeyskarstad/• Twitter: https://twitter.com/NickeySkarstad—Referenced:• Thinking in Systems by Donella H. Meadows• Anne Helen Petersen• Superhuman• Loom Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Manik Gupta has led two of the most successful consumer products in history—Google Maps, where he was Director of product for the Maps team, and Uber, where he was CPO. After leaving Uber, he spent some time working on a product to help people avoid getting COVID called CVKey, and most recently he took on a role at Microsoft as Corporate Vice President leading many of their consumer efforts.—Thank you to our wonderful sponsors for making this episode possible:• Mixpanel: https://mixpanel.com/startups• Coda: http://coda.io/lenny• Unit: https://unit.co/lenny—In this episode, we cover:[3:55] Patterns for career success[7:19] Why it’s valuable to be optimistic about technology[13:54] Challenges and mistakes through Manik’s career[17:28] How you learn the most about yourself through challenges[20:25] What Manik’s learned about building successful consumer apps[26:18] The importance of company-product fit[30:02] “The consumer stack”—what your company needs to have in place to build a successful consumer product[36:22] The path from PM to CPO[39:19] Evolution of CPO role[44:40] What leads to promotions in a PM career[47:58] What creates inflections in one’s PM career [52:05] How PMs shoot themselves in the foot[55:05] What it’s like to work at Google vs. Uber vs. Microsoft[1:01:35] What he wished he built into Google Maps—Where to find Manik:• LinkedIn: https://www.linkedin.com/in/manikg/• Twitter: https://twitter.com/manikgupta Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Merci Grace has been a founder, an investor (at Lightspeed Ventures), head of product and growth (at Slack), and is now a founder again (Panobi). She’s also one of the co-founders of Women in Product, and Fast Company named her one of the Most Creative People in 2017.—Thank you to our wonderful sponsors for making this episode possible:• Dovetail: https://dovetailapp.com/lenny• Mixpanel: https://mixpanel.com/startups• Whimsical: https://whimsical.com/lenny—In this episode, we cover:[3:41] Merci’s path to Head of Product and Growth at Slack[4:42] What Merci learned from being a VC that helps her be a better founder[6:50] How to tell a compelling story[9:43] What most people don’t know about Slack[10:27] Why Slack hasn’t created a consumer/social product[15:14] How Slack innovated the PLG motion[17:14] Slack’s early growth strategy[19:57] Slack’s activation point[22:10] Why it’s important to find connectors within a company[26:40] Lessons from optimizing Slack’s onboarding flow[32:12] Most common mistakes in going PLG[35:56] Signs you can go PLG[38:10] PLG vs. bottom-up[40:23] Importance of day-zero value in your tool[42:17] When to bring in your first salesperson[44:47] How to hire amazing people [50:21] Storytelling and Slack’s culture[51:04] How and when to build a growth team[52:08] How to build a more diverse team—Where to find Merci:• Panobi: https://panobi.com/• LinkedIn: https://www.linkedin.com/in/merci/• Twitter: https://twitter.com/merci• Website: https://mercigrace.co/• Women in Product: https://www.womenpm.org/ Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Steph has a baby update and thoughts on movies, plus a question for Chris related to migrating Test Unit tests to RSpec. Chris watched a video from Google I/O where Chrome devs talked about a new feature called Page Transitions. He's also been working with a tool called Customer.io, an omnichannel communication whiz-bang adventure! Page transitions Overview (https://youtu.be/JCJUPJ_zDQ4) Using yield_self for composable ActiveRecord relations (https://thoughtbot.com/blog/using-yieldself-for-composable-activerecord-relations) A Case for Query Objects in Rails (https://thoughtbot.com/blog/a-case-for-query-objects-in-rails) Customer.io (https://customer.io/) Turning the database inside-out with Apache Samza | Confluent (https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/) Datomic (https://www.datomic.com/) About CRDTs • Conflict-free Replicated Data Types (https://crdt.tech/) Apache Kafka (https://kafka.apache.org/) Resilient Management | A book for new managers in tech (https://resilient-management.com/) Mixpanel: Product Analytics for Mobile, Web, & More (https://mixpanel.com/) Become a Sponsor (https://thoughtbot.com/sponsorship) of The Bike Shed! Transcript: CHRIS: Golden roads are golden. Okay, everybody's got golden roads. You have golden roads, yes? That is what we're -- STEPH: Oh, I have golden roads, yes. [laughter] CHRIS: You might should inform that you've got golden roads, yeah. STEPH: Yeah, I don't know if I say might should as much but might could. I have been called out for that one a lot; I might could do that. CHRIS: [laughs] STEPH: That one just feels so natural to me than normal. Anytime someone calls it out, I'm like, yeah, what about it? [laughter] CHRIS: Do you want to fight? STEPH: Yeah, are we going to fight? CHRIS: I might could fight you. STEPH: I might could. I might should. [laughter] CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. STEPH: And I'm Steph Viccari. CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world? STEPH: Hey, Chris. I have a couple of fun updates. I have a baby Viccari update, so little baby weighs about two pounds now, two pounds. I'm 25 weeks along. So not that I actually know the exact weight, I'm using all those apps that estimate based on how far along you are, so around two pounds, which is novel. Oh, and then the other thing I'm excited to tell you about...I'm not sure how I should feel that I just got more excited about this other thing. I'm very excited about baby Viccari. But the other thing is there's a new Jurassic Park movie coming out, and I'm very excited. I think it's June 10th is when it comes out. And given how much we have sung that theme song to each other and make references to what a clever girl, I needed to share that with you. Maybe you already know, maybe you're already in the loop, but if you don't, it's coming. CHRIS: Yeah, the internet likes to yell things like that. Have you watched all of the most recent ones? There are like two, and I think this will be the third in the revisiting or whatever, the Jurassic World version or something like that. But have you watched the others? STEPH: I haven't seen all of them. So I've, of course, seen the first one. I saw the one that Chris Pratt was in, and now he's in the latest one. But I think I've missed...maybe there's like two in the middle there. I have not watched those. CHRIS: There are three in the original trilogy, and then there are three now in the new trilogy, which now it's ending, and they got everybody. STEPH: Oh, I'm behind. CHRIS: They got people from the first one, and they got the people from the second trilogy. They just got everybody, and that's exciting. You know, it's that thing where they tap into nostalgia, and they take advantage of us via it. But I'm fine. I'm here for it. STEPH: I'm here for it, especially for Jurassic Park. But then there's also a new Top Gun movie coming out, which, I'll be honest, I'm totally going to watch. But I really didn't remember the first one. I don't know that I've really ever watched the first Top Gun. So Tim, my partner, and I watched that recently, and it's such a bad movie. I'm going to say it; [laughs] it's a bad movie. CHRIS: I mean, I don't want to disagree, but the volleyball scene, come on, come on, the volleyball scene. [laughter] STEPH: I mean, I totally had a good time watching the movie. But the one part that I finally kept complaining about is because every time they showed the lead female character, I can't think of her character name or the actress's name, but they kept playing that song, Take My Breath Away. And I was like, can we just get past the song? [laughs] Because if you go back and watch that movie, I swear they play it like six different times. It was a lot. It was too much. So I moved it into bad movie category but bad movie totally worth watching, whatever category that is. CHRIS: Now I kind of want to revisit it. I feel like the drinking game writes itself. But at a minimum, anytime Take My Breath Away plays, yeah. Well, all right, good to know. [laughs] STEPH: Well, if you do that, let me know how many shots or beers you drink because I think it will be a fair amount. I think it will be more than five. CHRIS: Yeah, it involves a delicate calibration to get that right. I don't think it's the sort of thing you just freehand. It writes itself but also, you want someone who's tried it before you so that you're not like, oh no, it's every time they show a jet. That was too many. You can't drink that much while watching this movie. STEPH: Yeah, that would be death by Top Gun. CHRIS: But not the normal way, the different, indirect death by Top Gun. STEPH: I don't know what the normal way is. [laughs] CHRIS: Like getting shot down by a Top Gun pilot. [laughter] STEPH: Yeah, that makes sense. [laughs] CHRIS: You know, the dogfighting in the plane. STEPH: The actual, yeah, going to war away. Just sitting on your couch and you drink too much poison away, yeah, that one. All right, that got weird. Moving on, [laughs] there's a new Jurassic Park movie. We're going to land on that note. And in the more technical world, I've got a couple of things on my mind. One of them is I have a question for you. I'm very excited to run this by you because I could use a friend in helping me decide what to do. So I am still on that journey where I am migrating Test::Unit test over to RSpec. And as I'm going through, it's going pretty well, but it's a little complicated because some of the Test::Unit tests have different setup than, say, the RSpec do. They might run different scripts beforehand where they're loading data. That's perhaps a different topic, but that's happening. And so that has changed a few things. But then overall, I've just been really just porting everything over, like, hey, if it exists in the Test::Unit, let's just bring it to RSpec, and then I'm going to change these asserts to expects and really not make any changes from there. But as I'm doing that, I'm seeing areas that I want to improve and things that I want to clear up, even if it's just extracting a variable name. Or, as I'm moving some of these over in Test::Unit, it's not clear to me exactly what the test is about. Like, it looks more like a method name in the way that the test is being described, but the actual behavior isn't clear to me as if I were writing this in RSpec, I think it would have more of a clear description. Maybe that's not specific to the actual testing framework. That might just be how these tests are set up. But I'm at that point where I'm questioning should I keep going in terms of where I am just copying everything over from Test::Unit and then moving it over to RSpec? Because ultimately, that is the goal, to migrate over. Or should I also include some time to then go back and clean up and try to add some clarity, maybe extract some variable names, see if I can reduce some lets, maybe even reduce some of the test helpers that I'm bringing over? How much cleanup should be involved, zero, lots? I don't know. I don't know what that...[laughs] I'm sure there's a middle ground in there somewhere. But I'm having trouble discerning for myself what's the right amount because this feels like one of those areas where if I don't do any cleanup, I'm not coming back to it, like, that's just the truth. So it's either now, or I have no idea when and maybe never. CHRIS: I'll be honest, the first thing that came to mind in this most recent time that you mentioned this is, did we consider just deleting these tests entirely? Is that on...like, there are very few of them, right? Like, are they even providing enough value? So that was question one, which let me pause to see what your thoughts there were. [chuckles] STEPH: I don't know if we specifically talked about that on the mic, but yes, that has been considered. And the team that owns those tests has said, "No, please don't delete them. We do get value from them." So we can port them over to RSpec, but we don't have time to port them over to RSpec. So we just need to keep letting them go on. But yet, not porting them conflicts with my goal of then trying to speed up CI. And so it'd be nice to collapse these Test::Unit tests over to RSpec because then that would bring our CI build down by several meaningful minutes. And also, it would reduce some of the complexity in the CI setup. CHRIS: Gotcha. Okay, so now, having set that aside, I always ask the first question of like, can you just put Derek Prior's phone number on the webpage and call it an app? Is that the MVP of this app? No? Okay, all right, we have to build more. But yeah, I think to answer it and in a general way of trying to answer a broader set of questions here... I think this falls into a category of like if you find yourself having to move around some code, if that code is just comfortably running and the main thing you need to do is just to get it ported over to RSpec, I would probably do as little other work as possible. With the one consideration that if you find yourself needing to deeply load up the context of these tests like actually understand them in order to do the porting, then I would probably take advantage of that context because it's hard to get your head into a given piece of code, test or otherwise. And so if you're in there and you're like, well, now that I'm here, I can definitely see that we could rearrange some stuff and just definitively make it better, if you get to that place, I would consider it. But if this ends up being mostly a pretty rote transformation like you said, asserts become expects, and lets get switched around, you know, that sort of stuff, if it's a very mechanical process of getting done, I would probably say very minimal. But again, if there is that, like, you know what? I had to understand the test in order to port them anyway, so while I'm here, let me take advantage of that, that's probably the thing that I would consider. But if not that, then I would say even though it's messy and whatnot and your inclination would be to clean it, I would say leave it roughly as is. That's my guess or how I would approach it. STEPH: Yeah, I love that. I love how you pointed out, like, did you build up the context? Because you're right, in a lot of these test cases, I'm not, or I'm trying really hard to not build up context. I'm trying very hard to just move them over and, if I have to, mainly to find test descriptions. That's the main area I'm struggling to...how can I more explicitly state what this test does so the next person reading this will have more comprehension than I do? But otherwise, I'm trying hard to not have any real context around it. And that's such a good point because that's often...when someone else is in the middle of something, and they're deciding whether to include that cleanup or refactor or improvement, one of my suggestions is like, hey, we've got the context now. Let's go with it. But if you've built up very little context, then that's not a really good catalyst or reason to then dig in deeper and apply that cleanup. That's super helpful. Thank you. That will help reinforce what I'm going to do, which is exactly let's migrate RSpec and not worry about cleanup, which feels terrible; I'm just going to say that into the world. But it also feels like the right thing to do. CHRIS: Well, I'm happy to have helped. And I share the like, and it feels terrible. I want to do the right thing, but sometimes you got to pick a battle sort of thing. STEPH: Cool. Well, that's a huge help to me. What's going on in your world? CHRIS: What's going on in my world? I watched a great video the other day from the Google I/O. I think it's an event; I'm not actually sure, conference or something like that. But it was some Google Chrome developers talking about a new feature that's coming to the platform called Page Transitions. And I've kept an eye on this for a while, but it seems like it's more real. Like, I think they put out an RFC or an initial sort of set of ideas a while back. And the web community was like, "Oh, that's not going to work out so well." So they went back to the drawing board, revisited. I've actually implemented in Chrome Canary a version of the API. And then, in the video that I watched, which we'll include a show notes link to, they demoed the functionality of the Page Transitions API and showed what you can do. And it's super cool. It allows for the sort of animations that you see in a lot of native mobile apps where you're looking at a ListView, you click on one of the items, and it grows to fill the whole screen. And now you're on the detail screen for that item that you were looking at. But there was this very continuous animated transition that allows you to keep context in your head and all of those sorts of nice things. And this just really helps to bridge that gap between, like, the web often lags behind the native mobile platforms in terms of the experiences that we can build. So it was really interesting to see what they've been able to pull off. The demo is a pretty short video, but it shows a couple of different variations of what you can build with it. And I was like, yeah, these look like cool native app transitions, really nifty. One thing that's very interesting is the actual implementation of this. So it's like you have one version of the page, and then you transition to a new version of the page, and in doing so, you want to animate between them. And the way that they do it is they have the first version of the page. They take a screenshot of it like the browser engine takes a screenshot of it. And then they put that picture on top of the actual browser page. Then they do the same thing with the next version of the page that they're going to transition to. And then they crossfade, like, change the opacity and size and whatnot between the two different images, and then you're there. And in the back of my mind, I'm like, I'm sorry, what now? You did which? I'm like, is this the genius solution that actually makes this work and is performant? And I wonder if there are trade-offs. Like, do you lose interactivity between those because you've got some images that are just on the screen? And what is that like? But as they were going through it, I was just like, wait, I'm sorry, you did what? This is either the best idea I've ever heard, or I'm not so sure about this. STEPH: That's fascinating. You had me with the first part in terms of they take a screenshot of the page that you're leaving. I'm like, yeah, that's a great idea. And then talking about taking a picture of the other page because then you have to load it but not show it to the user that it's loaded. And then take a picture of it, and then show them the picture of the loaded page. But then actually, like you said, then crossfade and then bring in the real functionality. I am...what am I? [laughter] CHRIS: What am I actually? STEPH: [laughs] What am I? I'm shocked. I'm surprised that that is so performant. Because yeah, I also wouldn't have thought of that, or I would have immediately have thought like, there's no way that's going to be performant enough. But that's fascinating. CHRIS: For me, performance seems more manageable, but it's the like, what are you trading off for that? Because that sounds like a hack. That sounds like the sort of thing I would recommend if we need to get an MVP out next week. And I'm like, what if we just tried this? Listen, it's got some trade-offs. So I'm really interested to see are those trade-offs present? Because it's the browser engine. It's, you know, the low-level platform that's actually managing this. And there are some nice hooks that allow you to control it. And at a CSS level, you can manage it and use keyframe animations to control the transition more directly. There's a JavaScript API to instrument the sequencing of things. And so it's giving you the right primitives and the right hooks. And the fact that the implementation happens to use pictures or screenshots, to use a slightly different word, it's like, okey dokey, that's what we're doing. Sounds fun. So I'm super interested because the functionality is deeply, deeply interesting to me. Svelte actually has a version of this, the crossfade utility, but you have to still really think about how do you sequence between the two pages and how do you do the connective tissue there? And then Svelte will manage it for you if you do all the right stuff. But the wiring up is somewhat complicated. So having this in the browser engine is really interesting to me. But yeah, pictures. STEPH: This is one of those ideas where I can't decide if this was someone who is very new to the team and new to the idea and was like, "Have we considered screenshots? Have we considered pictures?" Or if this is like the uber senior person on the team that was like, "Yeah, this will totally work with screenshots." I can't decide where in that range this idea falls, which I think makes me love it even more. Because it's very straightforward of like, hey, what if we just tried this? And it's working, so cool, cool, cool. CHRIS: There's a fantastic meme that's been making the rounds where it's a bell curve, and it's like, early in your career, middle of your career, late in your career. And so early in your career, you're like, everything in one file, all lines of code that's just where they go. And then in the middle of your career, you're like, no, no, no, we need different concerns, and files, and organizational structures. And then end of your career...and this was coming up in reference to the TypeScript team seems to have just thrown everything into one file. And it's the thing that they've migrated to over time. And so they have this many, many line file that is basically the TypeScript engine all in one file. And so it was a joke of like, they definitely know what they're doing with programming. They're not just starting last week sort of thing. And so it's this funny arc that certain things can go through. So I think that's an excellent summary there [laughs] of like, I think it was folks who have thought about this really hard. But I kind of hope it was someone who was just like, "I'm new here. But have we thought about pictures? What about pictures?" I also am a little worried that I just deeply misunderstood [laughs] the representation but glossed over it in the video, and I'm like, that sounds interesting. So hopefully, I'm not just wildly off base here. [laughs] But nonetheless, the functionality looks very interesting. STEPH: That would be a hilarious tweet. You know, I've been waiting for that moment where I've said something that I understood into the mic and someone on Twitter just being like, well, good try, but... [laughs] CHRIS: We had a couple of minutes where we tried to figure out what the opposite of ranting was, and we came up with pranting and made up a word instead of going with praising or raving. No, that's what it is, raving. [laughs] STEPH: No, raving. I will never forget now, raving. [laughs] CHRIS: So, I mean, we've done this before. STEPH: That's true. Although they were nice, I don't think they tweeted. I think they sent in an email. They were like, "Hey, friends." [laughter] CHRIS: Actually, we got a handful of emails on that. [laughter] STEPH: Did you know the English language? CHRIS: Thank you, kind Bikeshed audience, for not shaming us in public. I mean, feel free if you feel like it. [laughs] But one other thing that came up in this video, though, is the speaker was describing single-page apps are very common, and you want to have animated transitions and this and that. And I was like, single-page app, okay, fine. I don't like the terminology but whatever. I would like us to call it the client-side app or client-side routing or something else. But the fact that it's a single page is just a technical consideration that no user would call it that. Users are like; I go to the web app. I like that it has URLs. Those seem different to me. Anyway, this is my hill. I'm going to die on it. But then the speaker in the video, in contrast to single-page app referenced multi-page app, and I was like, oh, come on, come on. I get it. Like, yes, there are just balls of JavaScript that you can download on the internet and have a dynamic graphics editor. But I think almost all good things on the web should have URLs, and that's what I would call the multiple pages. But again, that's just me griping about some stuff. And to name it, I don't think I'm just griping for griping sake. Like, again, I like to think about things from the user perspective, and the URL being so important. And having worked with plenty of apps that are implemented in JavaScript and don't take the URL or the idea that we can have different routable resources seriously and everything is just one URL, that's a failure mode in my mind. We missed an opportunity here. So I think I'm saying a useful thing here and not just complaining on the internet. But with that, I will stop complaining on the internet and send it back over to you. What else is new in your world, Steph? STEPH: I do remember the first time that you griped about it, and you were griping about URLs. And there was a part of me that was like, what is he talking about? [laughter] And then over time, I was like, oh, I get it now as I started actually working more in that world. But it took me a little bit to really appreciate that gripe and where you're coming from. And I agree; I think you're coming from a reasonable place, not that I'm biased at all as your co-host, but you know. CHRIS: I really like the honest summary that you're giving of, like, honestly, the first time you said this, I let you go for a while, but I did not know what you were talking about. [laughs] And I was like, okay, good data point. I'm going to store that one away and think about it a bunch. But that's fine. I'm glad you're now hanging out with me still. [laughter] STEPH: Don't do that. Don't think about it a bunch. [laughs] Let's see, oh, something else that's going on in my world. I had a really fun pairing session with another thoughtboter where we were digging into query objects and essentially extracting some logic out of an ActiveRecord model and then giving that behavior its own space and elevated namespace in a query object. And one of the questions or one of the things that came up that we needed to incorporate was optional filters. So say if you are searching for a pizza place that's nearby and you provide a city, but you don't provide what's the optional zip code, then we want to make sure that we don't apply the zip code in the where clause because then you would return all the pizza places that have a nil zip code, and that's just not what you want. So we need to respect the fact that not all the filters need to be applied. And there are a couple of ways to go about it. And it was a fun journey to see the different ways that we could structure it. So one of the really good starting points is captured in a blog post by Derek Prior, which we'll include a link to in the show notes, and it's using yield_self for composable ActiveRecord relations. But essentially, it starts out with an example where it shows that you're assigning a value to then the result of an if statement. So it's like, hey, if the zip code is present, then let's filter by zip code; if not, then just give us back the original relation. And then you can just keep building on it from there. And then there's a really nice implementation that Derek built on that then uses yield_self to pass the relation through, which then provides a really nice readability for as you are then stepping through each filter and which one should and shouldn't be applied. And now there's another blog post, and this one's written by Thiago Silva, A Case for Query Objects in Rails. And this one highlighted an approach that I haven't used before. And I initially had some mixed feelings about it. But this approach uses the extending method, which is a method that's on ActiveRecord query methods. And it's used to extend the scope with additional methods. You can either do this by providing the name of a module or by providing a block. It's only going to apply to that instance or to that specific scope when you're using it. So it's not going to be like you're running an include or something like that where all instances are going to now have access to these methods. So by using that method, extending, then you can create a module that says, "Hey, I want to create this by zip code filter that will then check if we have a zip code, let's apply it, if not, return the relation. And it also creates a really pretty chaining experience of like, here's my original class name. Let's extend with these specific scopes, and then we can say by zip code, by pizza topping, whatever else it is that we're looking to filter by. And I was initially...I saw the extending, and it made me nervous because I was like, oh, what all does this apply to? And is it going to impact anything outside of this class? But the more I've looked at it, the more I really like it. So I think you've seen this blog post too. And I'm curious, what are your thoughts about this? CHRIS: I did see this blog post come through. I follow that thoughtbot blog real close because it turns out some of the best writing on the internet is on there. But I saw this...also, as an aside, I like that we've got two Derek Prior references in one episode. Let's see if we can go for three before the end. But one thing that did stand out to me in it is I have historically avoided scopes using scope like ActiveRecord macro thing. It's a class method, but like, it's magic. It does magic. And a while ago, class methods and scopes became roughly equivalent, not exactly equivalent, but close enough. And for me, I want to use methods because I know stuff about methods. I know about default arguments. And I know about all of these different subtleties because they're just methods at the end of the day, whereas scopes are special; they have certain behavior. And I've never really known of the behavior beyond the fact that they get implemented in a different way. And so I was never really sold on them. And they're different enough from methods, and I know methods well. So I'm like, let's use the normal stuff where we can. The one thing that's really interesting, though, is the returning nil that was mentioned in this blog post. If you return nil in a scope, it will handle that for you. Whereas all of my query objects have a like, well, if this thing applies, then scope dot or relation dot where blah, blah, blah, else return relation unchanged. And the fact that that natively exists within scope is interesting enough to make me reconsider my stance on scopes versus class methods. I think I'm still doing class method. But it is an interesting consideration that I was unaware of before. STEPH: Yeah, it's an interesting point. I hadn't really considered as much whether I'm defining a class-level method versus a scope in this particular case. And I also didn't realize that scopes handle that nil case for you. That was one of the other things that I learned by reading through this blog post. I was like, oh, that is a nicety. Like, that is something that I get for free. So I agree. I think this is one of those things that I like enough that I'd really like to try it out more and then see how it goes and start to incorporate it into my process. Because this feels like one of those common areas of where I get to it, and I'm like, how do I do this again? And yield_self was just complicated enough in terms of then using the fancy method method to then be able to call the method that I want that I was like, I don't remember how to do this. I had to look it up each time. But including this module with extending and then being able to use scopes that way feels like something that would be intuitive for me that then I could just pick up and run with each time. CHRIS: If it helps, you can use then instead of yield_self because we did upgrade our Ruby a while back to have that change. But I don't think that actually solves the thing that you're describing. I'd have liked the ampersand method and then simple method name magic incantation that is part of the thing that Derek wrote up. I do use it when I write query objects, but I have to think about it or look it up each time and be like, how do I do that? All right, that's how I do that. STEPH: Yeah, that's one of the things that I really appreciate is how often folks will go back and update blog posts, or they will add an addition to them to say, "Hey, there's something new that came out that then is still relevant to this topic." So then you can read through it and see the latest and the greatest. It's a really nice touch to a number of our blog posts. But yeah, that's what was on my mind regarding query objects. What else is going on in your world? CHRIS: I have this growing feeling that I don't quite know what to do with. I think I've talked about it across some of our conversations in the world of observability. But broadly, I'm starting to like...I feel like my brain has shifted, and I now see the world slightly differently, and I can't go back. But I also don't know how to stick the landing and complete this transition in my brain. So it's basically everything's an event stream; this feels true. That's life. The arrow of time goes in one direction as far as I understand it. And I'm now starting to see it manifest in the code that we're writing. Like, we have code to log things, and we have places where we want to log more intentionally. Then occasionally, we send stuff off to Sentry. And Sentry tells us when there are errors, that's great. But in a lot of places, we have both. Like, we will warn about something happening, and we'll send that to the logs because we want to have that in the logs, which is basically the whole history of what's happened. But we also have it in Sentry, but Sentry's version is just this expanded version that has a bunch more data about the user, and things, and the browser that they were in. But they're two variations on the same event. And then similarly, analytics is this, like, third leg of well, this thing happened, and we want to know about it in the context. And what's been really interesting is we're working with a tool called Customer.io, which is an omnichannel communication whiz-bang adventure. For us, it does email, SMS, and push notifications. And it's integrated into our segment pipeline, so events flow in, events and users essentially. So we have those two different primitives within it. And then within it, we can say like, when a user does X, then send them an email with this copy. As an aside, Customer.io is a fantastic platform. I'm super-duper impressed. We went through a search for a tool like it, and we ended up on a lot of sales demos with folks that were like, hey, so yeah, starting point is $25,000 per year. And, you know, we can talk about it, but it's only going to go up from there when we talk about it, just to be clear. And it's a year minimum contract, and you're going to love it. And we're like, you do have impressive platforms, but okay, that's a bunch. And then, we found Customer.io, and it's month-by-month pricing. And it had a surprisingly complete feature set. So overall, I've been super impressed with Customer.io and everything that they've afforded. But now that I'm seeing it, I kind of want to move everything into that world where like, Customer.io allows non-engineer team members to interact with that event stream and then make things happen. And that's what we're doing all the time. But I'm at that point where I think I see the thing that I want, but I have no idea how to get there. And it might not even be tractable either. There's the wonderful Turning the Database Inside Out talk, which describes how everything is an event stream. And what if we actually were to structure things that way and do materialized views on top of it? And the actual UI that you're looking at is a materialized view on top of the database, which is a materialized view on top of that event stream. So I'm mostly in this, like, I want to figure this out. I want to start to unify all this stuff. And analytics pipes to one tool that gets a version of this event stream, and our logs are just another, and our error system is another variation on it. But they're all sort of sampling from that one event stream. But I have no idea how to do that. And then when you have a database, then you're like, well, that's also just a static representation of a point in time, which is the opposite of an event stream. So what do you do there? So there are folks out there that are doing good thinking on this. So I'm going to keep my ear to the ground and try and see what's everybody thinking on this front? But I can't shake the feeling that there's something here that I'm missing that I want to stitch together. STEPH: I'm intrigued on how to take this further because everything you're saying resonates in terms of having these event streams that you're working with. But yet, I can't mentally replace that with the existing model that I have in my mind of where there are still certain ideas of records or things that exist in the world. And I want to encapsulate the data and store that in the database. And maybe I look it up based on when it happens; maybe I don't. Maybe I'm looking it up by something completely different. So yeah, I'm also intrigued by your thoughts, but I'm also not sure where to take it. Who are some of the folks that are doing some of the thinking in this area that you're interested in, or where might you look next? CHRIS: There's the Kafka world of we have an event log, and then we're processing on top of that, and we're building stream processing engines as the core. They seem to be closest to the Turning the Database Inside Out talk that I was thinking or that I mentioned earlier. There's also the idea of CRDTs, which are Conflict-free Replicated Data Types, which are really interesting. I see them used particularly in real-time application. So it's this other tool, but they are basically event logs. And then you can communicate them well and have two different people working on something collaboratively. And these event logs then have a natural way to come together and produce a common version of the document on either end. That's at least my loose understanding of it, but it seems like a variation on this theme. So I've been looking at that a little bit. But again, I can't see how to map that to like, but I know how to make a Rails app with a Postgres database. And I think I'm reasonably capable at it, or at least I've been able to produce things that are useful to humans using it. And so it feels like there is this pretty large gap. Because what makes sense in my head is if you follow this all the way, it fundamentally re-architects everything. And so that's A, scary, and B; I have no idea how to get there, but I am intrigued. Like, I feel like there's something there. There's also Datomic is the other thing that comes to mind, which is a database engine in the Clojure world that stores the versions of things over time; that idea of the user is active. It's like, well, yeah, but when were they not? That's an event. That transition is an event that Postgres does not maintain at this point. And so, all I know is that the user is active. Maybe I store a timestamp because I'm thinking proactively about this. But Datomic is like no, no, fundamentally, as a primitive in this database; that's how we organize and think about stuff. And I know I've talked about Datomic on here before. So I've circled around these ideas before. And I'm pretty sure I'm just going to spend a couple of minutes circling and then stop because I have no idea how to connect the dots. [laughs] But I want to figure this out. STEPH: I have not worked with Kafka. But one of the main benefits I understand with Kafka is that by storing everything as a stream, you're never going to lose like a message. So if you are sending a message to another system and then that message gets lost in transit, you don't actually know if it got acknowledged or what happened with it, and replaying is really hard. Where do you pick up again? While using something like Kafka, you know exactly what you sent last, and then you're not going to have that uncertainty as to what messages went through and which ones didn't. And then the ability to replay is so important. I'm curious, as you continue to explore this, do you have a particular pain point in mind that you'd like to see improve? Or is it more just like, this seems like a really cool, novel idea; how can I incorporate more of this into my world? CHRIS: I think it's the latter. But I think the thing that I keep feeling is we keep going back and re-instrumenting versions of this. We're adding more logging or more analytics events over the wire or other things. But then, as I send these analytics events over the wire, we have Mixpanel downstream as an analytics visualization and workflow tool or Customer.io. At this point, those applications, I think, have a richer understanding of our users than our core Rails app. And something about that feels wrong to me. We're also streaming everything that goes through segment to S3 because I had a realization of this a while back. I'm like, that event stream is very interesting. I don't want to lose it. I'm going to put it somewhere that I get to keep it. So even if we move off of either Mixpanel or Customer.io or any of those other platforms, we still have our data. That's our data, and we're going to hold on to it. But interestingly, Customer.io, when it sends a message, will push an event back into segments. So it's like doubly connected to segment, which is managing this sort of event bus of data. And so Mixpanel then gets an even richer set there, and the Rails app is like, I'm cool. I'm still hanging out, and I'm doing stuff; it's fine. But the fact that the Rails app is fundamentally less aware of the things that have happened is really interesting to me. And I am not running into issues with it, but I do feel odd about it. STEPH: That touched on a theme that is interesting to me, the idea that I hadn't really considered it in those terms. But yeah, our application provides the tool in which people can interact with. But then we outsource the behavior analysis of our users and understanding what that flow is and what they're going through. I hadn't really thought about those concrete terms and where someone else owns the behavior of our users, but yet we own all the interaction points. And then we really need both to then make decisions about features and things that we're building next. But that also feels like building a whole new product, that behavior analysis portion of it, so it's interesting. My consulting brain is going wild at the moment between like, yeah, it would be great to own that. But that the other time if there's this other service that has already built that product and they're doing it super well, then let's keep letting them manage that portion of our business until we really need to bring it in-house. Because then we need to incorporate it more into our application itself so then we can surface things to the user. That's the part where then I get really interested, or that's the pain point that I could see is if we wanted more of that behavior analysis, that then we want to surface that in the app, then always having to go to a third-party would start to feel tedious or could feel more brittle. CHRIS: Yeah, I'm definitely 100% on not rebuilding Mixpanel in our app and being okay with the fact that we're sending that. Again, the thing that I did to make myself feel better about this is stream the data to S3 so that I have a version of it. And if we want to rebuild the data warehouse down the road to build some sort of machine learning data pipeline thing, we've got some raw data to work with. But I'm noticing lots of places where we're transforming a side effect, a behavior that we have in the system to dispatching an event. And so right now, we have a bunch of stuff that we pipe over to Slack to inform our admin team, hey, this thing happened. You should probably intervene. But I'm looking at that, and we're doing it directly because we can control the message in Slack a little bit better. But I had this thought in the back of my mind; it's like, could we just send that as an event, and then some downstream tool can configure messages and alerts into Slack? Because then the admin team could actually instrument this themselves. And they could be like; we are no longer interested in this event. Users seem fine on that front. But we do care about this new event. And all we need to do as the engineering team is properly instrument all of that event stream tapping. Every event just needs to get piped over. And then lots of powerful tools downstream from that that can allow different consumers of that data to do things, and broadly, that dispatch events, consume them on the other side, do fun stuff. That's the story. That's the dream. But I don't know; I can't connect all the dots. It's probably going to take me a couple of weeks to connect all these dots, or maybe years, or maybe my entire career, something like that. But, I don't know, I'm going to keep trying. STEPH: This feels like a fun startup narrative, though, where you start by building the thing that people can interact with. As more people start to interact with it, how do we start giving more of our team the ability to then manage the product that then all of these users are interacting with? And then that's the part that you start optimizing for. So there are always different interesting bits when you talk about the different stages of Sagewell, and like, what's the thing you're optimizing for? And I'm sure it's still heavily users. But now there's also this addition of we are also optimizing for our team to now manage the product. CHRIS: Yes, you're 100%. You're spot on there. We have definitely joked internally about spinning out a small company to build this analytics alerting tool [laughs] but obviously not going to do that because that's a distraction. And it is interesting, like, we want to build for the users the best thing that we can and where the admin team fits within that. To me, they're very much customers of engineering. Our job is to build the thing for the users but also, to be honest, we have to build a thing for the admins to support the users and exactly where that falls. Like, you and I have talked a handful of times about maybe the admin isn't as polished in design as other things. But it's definitely tested because that's a critical part of how this application works. Maybe not directly for a user but one step removed for a user, so it matters. Absolutely we're writing tests to cover that behavior. And so yeah, those trade-offs are always interesting to me and exploring that space. But 100%: our admin team are core customers of the work that we're doing in engineering. And we try and stay very close and very friendly with them. STEPH: Yeah, I really appreciate how you're framing that. And I very much agree and believe with you that our admin users are incredibly important. CHRIS: Well, thank you. Yeah, we're trying over here. But yeah, I think I can wrap up that segment of me rambling about ideas that are half-formed in my mind but hopefully are directionally important. Anyway, what else is up with you? STEPH: So, not that long ago, I asked you a question around how the heck to manage themes that I have going on. So we've talked about lots of fun productivity things around managing to-dos, and emails, and all that stuff. And my latest one is thinking about, like, I have a theme that I want to focus on, maybe it's this week, maybe it's for a couple of months. And how do I capture that and surface it to myself and see that I'm making progress on that? And I don't have an answer to that. But I do have a theme that I wanted to share. And the one that I'm currently focused on is building up management skills and team lead skills. That is something that I'm focused on at the moment and partially because I was inspired to read the book Resilient Management written by Lara Hogan. And so I think that is what has really set the idea. But as I picked up the book, I was like, this is a really great book, and I'd really like to share some of this. And then so that grew into like, well, let's just go ahead and make this a theme where I'm learning this, and I'm sharing this with everyone else. So along that note, I figured I would share that here. So we use Basecamp at thoughtbot. And so, I've been sharing some Basecamp posts around what I'm learning in each chapter. But to bring some of that knowledge here as well, some of the cool stuff that I have learned so far...and this is just still very early on in the book. There are a couple of different topics that Laura covers in the first chapter, and one of them is humans' core needs at work. And then there's also the concept of meeting your team, some really good questions that you can ask during your first one-on-one to get to know the person that then you're going to be managing. The part that really resonated with me and something that I would like to then coach myself to try is helping the team get to know you. So as a manager, not only are you going out of your way to really get to know that person, but how are you then helping them get to know you as well? Because then that's really going to help set that relationship in regards of they know what kind of things that you're optimizing for. Maybe you're optimizing for a deadline, or for business goals, or maybe it's for transparency, or maybe it would be helpful to communicate to someone that you're managing to say, "Hey, I'm trying some new management techniques. Let me know how this goes." [chuckles] So there's a healthier relationship of not only are you learning them, but they're also learning you. So some of the questions that Laura includes as examples as something that you can share with your team is what do you optimize for in your role? So is it that you're optimizing for specific financial goals or building up teammates? Or maybe it's collaboration, so you're really looking for opportunities for people to pair together. What do you want your teammates to lean on you for? I really liked that question. Like, what are some of the areas that bring you joy or something that you feel really skilled in that then you want people to come to you for? Because that's something that before I was a manager...but it's just as you are growing as a developer, that's such a great question of like, what do you want to be known for? What do you want to be that thing that when people think of, they're like, oh, you should go see Chris about this, or you should go see Steph about this? And two other good questions include what are your work styles and preferences? And what management skills are you currently working on learning or improving? So I really like this concept of how can I share more of myself? And the great thing about this book that I'm learning too is while it is geared towards people that are managers, I think it's so wonderful for people who are non-managers or aspiring managers to read this as well because then it can help you manage whoever's managing you. So then that way, you can have some upward management. So we had recent conversations around when you are new to a team and getting used to a manager, or maybe if you're a junior, you have to take a lot of self-advocacy into your role to make sure things are going well. And I think this book does a really good job for people that are looking to not only manage others but also manage themselves and manage upward. So that's some of the journeys from the first chapter. I'll keep you posted on the other chapters as I'm learning more. And yeah, if anybody hasn't read this book or if you're interested, I highly recommend it. I'll make sure to include a link in the show notes. CHRIS: That was just the first chapter? STEPH: Yeah, that was just the first chapter. CHRIS: My goodness. STEPH: And I shortened it drastically. [laughs] CHRIS: Okay. All right, off to the races. But I think the summary that you gave there, particularly these are true when you're managing folks but also to manage yourself and to manage up, like, this is relevant to everyone in some capacity in some shape or form. And so that feels very true. STEPH: I will include one more fun aspect from the book, and that's circling back to the humans' core needs at work. And she references Paloma Medina, a coach, and trainer who came up with this acronym. The acronym is BICEPS, and it stands for belonging, improvement, choice, equality, predictability, and significance. And then details how each of those are important to us in our work and how when one of those feels threatened, then that can lead to some problems at work or just even in our personal life. But the fun example that she gave was not when there's a huge restructuring of the organization and things like that are going on as being the most concerning in terms of how many of these needs are going to be threatened or become vulnerable. But changing where someone sits at work can actually hit all of these, and it can threaten each of these needs. And it made me think, oh, cool, plus-one for being remote because we can sit wherever we want. [laughs] But that was a really fun example of how someone's needs at work, I mean, just moving their desk, which resonates, too, because I've heard that from other people. Some of the friends that I have that work in more of a People Ops role talk about when they had to shift people around how that caused so much grief. And they were just shocked that it caused so much grief. And this explains why that can be such a big deal. So that was a fun example to read through. CHRIS: I'm now having flashbacks to times where I was like, oh, I love my spot in the office. I love the people I'm sitting with. And then there was that day, and I had to move. Yeah, no, those were days. This is true. STEPH: It triggered all the core BICEPS, all the things that you need to work. It threatened all of them. Or it could have improved them; who knows? CHRIS: There were definitely those as well, yeah. Although I think it's harder to know that it's going to be great on the way in, so it's mostly negative. I think it has that weird bias because you're like, this was a thing, I knew it. I at least understood it. And then you're in a new space, and you're like, I don't know, is this going to be terrible or great? I mean, hopefully, it's only great because you work with great people, and it's a great office. [laughs] But, like, the unknown, you're moving into the unknown, and so I think it has an inherent at least questioning bias to it. STEPH: Agreed. On that note, shall we wrap up? CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm. STEPH: This show is produced and edited by Mandy Moore. CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari. CHRIS: And I'm @christoomey. STEPH: Or you can reach us at hosts@bikeshed.fm via email. CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
In this episode, Craig sits down with Dale Chang, Operating Partner with Silicon Valley venture capital firm, Scale Venture Partners. Known as “Scale Dale,” he will share the key elements of the playbook required to drive explosive yet efficient growth.Dale Chang is the Operating Partner at Scale Venture Partners. In his role, Dale is a resource for guidance on evolving go-to-market strategies as well as providing best practices and benchmarks across the portfolio.Previously, Dale was a director at the Alexander Group, a consulting firm specialized in go-to-market strategy. At the Alexander Group, he was the leader of the cloud practice and advised companies including Salesforce, LinkedIn, Box, DocuSign, New Relic, Optimizely and Mixpanel. Prior to his career in consulting, he worked in sales and sales operations roles at startups, including Proofpoint and Jaspersoft.
How do you create thought leadership content at scale even if you don't have access to leaders? By broadening your definition of what a thought leader is, and taking a holistic approach to content creation, that's how. This week on the Flying Cat Marketing podcast, we talk to content marketer Brooklin Nash about how he takes expert voices and elevates them through B2B SaaS brands. Brooklin is a six-figure freelancer, aspiring seven-figure agency owner, and trusted by companies like Mixpanel, Skillsoft, and CloudShare, so we figure his advice is worth listening to! Thought leadership has become a bit of a well-worn phrase in recent years but, as Brooklin explains, genuine thought leadership is way more than CEO-driven opinion pieces. He defines it as “Putting expert voices front and center in your content that match who your audience is and who you're actually selling to, whether they're your business users, end-users of your product, or decision-makers.” With that out of the way, Brooklin walked us through his approach to getting time with subject matter experts, and how he makes the most of it to produce a true content web of knowledge that drives demand generation. Finally, he explained when and why content should be divorced from demand generation, and how to improve sales-marketing alignment to create sales collateral that can be used at all funnel stages.All this and more in the full episode! In this episode: Brooklin's definition of what thought leadership isHis content creation process, and how he scales it How to avoid asking the same 101 questions in interviews with subject matter expertsHow to improve sales and marketing alignment His experience creating content for Mixpanel and Hopin How he gets time with internal and external experts When and why content needs to be divorced from the productAbout the guest:Brooklin is a content marketer for B2B SaaS companies. He became an "accidental" six figure freelancer, and is now working to become a "purposeful" seven figure agency owner. He works with companies like Mixpanel, Skillsoft, and CloudShare, usually out of his home in Antigua, Guatemala.Timestamps01:37: Brooklin's definition of what thought leadership is03:29: His content creation process, and how he scales it 04:01: How to avoid asking the same 101 questions in interviews with subject matter experts07:40: How to improve sales and marketing alignment 11:03: His experience creating content for Mixpanel and Hopin 14:36: How he gets time with internal and external experts 16:41: When and why content needs to be divorced from the productConnect with Brooklin Nash on LinkedIn: https://www.linkedin.com/in/brooklin-nash/Follow Flying Cat Marketing on the following channels to get more tips, tactics, & knowledge on content marketingInstagram: https://www.instagram.com/flyingcatmarketing/Facebook: https://www.facebook.com/flyingcatmarketingLinkedin: https://www.linkedin.com/company/flying-cat-marketing/
Highlights from this week's conversation include:Neil's programming hobby turned into a career and how he cold-contacted Mixpanel for a job (2:28)Lessons learned from nine years at Mixpanel (5:05)Defining product analytics (8:06)How Mixpanel has evolved into the product it is today (10:56)The importance of Mixpanel's real-time analysis (19:52)Looking at Arb, Mixpanel's own arbitrary segmentation database (23:44)The business impact that the rise of the cloud data warehouse had on Mixpanel (34:56)Sub-second latencies and real-time use cases (49:05)Career advice from Neil (1:02:02)The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.