Podcasts about devrel

  • 283 PODCASTS
  • 1,127 EPISODES
  • 42m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • May 22, 2025 LATEST




Latest podcast episodes about devrel

web3 with a16z
Marketing 101 for Startups: Token Launches, Memes, Reaching Devs & More

May 22, 2025 · 65:15


with @kimbatronic @amandatylerj @clairekart

Welcome to web3 with a16z. Since our show covers both tech trends and company building, today's topic is all about marketing, including differences between marketing in crypto and traditional tech. The conversation shares a candid look at what works, and what doesn't, when it comes to building reputation and community, attracting developers, hiring teams and agencies, launching tokens, raising founder profiles, and more.

Our experts are:
- Amanda Tyler, who was most recently Head of Marketing at the Optimism Foundation (and was formerly at Polygon, Coinbase, and Google);
- Claire Kart, Chief Marketing Officer at Aztec (who previously was at Risc Zero and SoFi);
- in conversation with Kim Milosevich, CMO at a16z crypto (who was formerly VP of Comms at Coinbase, and who has spent decades in tech at a16z, Skype, Yahoo, and elsewhere).

Timestamps:
(0:00) Introduction
(1:41) The Role of Marketers
(4:52) Tech Marketing vs. Crypto Marketing
(6:34) Understanding the Core Audience
(10:56) Marketing for Ethereum and Layer 2 Projects
(16:09) The Role of Community Managers and Developer Relations
(25:21) Token Launch Strategies
(34:42) Building Founders' Profiles (Without Being Cringe)
(38:53) How to Support Founders
(40:55) When to Hire
(43:05) Consultants vs. Agencies
(46:08) Structuring a Marketing Team
(48:27) Finding and Hiring Talent
(50:36) Building an Editorial Content Operation
(53:39) International Marketing Strategies
(56:41) The Role of Events
(1:01:48) Memes and Crypto Culture
(1:04:57) Conclusion

As a reminder, none of the content should be taken as investment, business, legal, or tax advice; please see a16z.com/disclosures for more important information, including a link to a list of our investments.

Fireside with Voxgig
Episode 245 Thorsten Schaeff, Developer Experience Engineer at ElevenLabs

May 8, 2025 · 37:38


In this episode, we're drilling more into the vocational aspect of DevRel with our guest Thorsten Schaeff. Thor has recently become the Developer Experience Engineer at ElevenLabs, an AI audio research and deployment company, and he's based out of Singapore, having moved there with Stripe six years ago. Thor tells us that the common thread among his various roles has been the learning and teaching aspects, and he's been lucky enough to follow his interests for the majority of his career. We agree with him that if you are driven by wanting to help and teach people, then DevRel is the place for you. We also talk about his multi-modal approach to publishing content: not only do you need to target all platforms, with both long and short form content, these channels all need to interact with each other and present a cohesive front. He leaves us with a lovely reminder of just how rewarding it can be to see someone create something great with a technology you taught them to use.

Reach out to Thorsten here: https://www.linkedin.com/in/thorwebdev/
Check out ElevenLabs: https://elevenlabs.io/
Find out more and listen to previous podcasts here: https://www.voxgig.com/podcast
Subscribe to our newsletter for weekly updates and information about upcoming meetups: https://voxgig.substack.com/
Join the Dublin DevRel Meetup group here: www.devrelmeetup.com

Fireside with Voxgig
Episode 244 Karl Hughes Founder of Draft.dev and The Podcast Consultant

May 2, 2025 · 47:49


Some people spend their whole lives looking for their “superpower”, but if you work in DevRel, you've probably already found yours. Today we're speaking to Karl Hughes, founder of Draft.dev, about his path through the startup world and the ever-changing landscape it operates in. Draft.dev is a developer-focused content agency, and with Karl we learn about the epic highs and lows that he and the company have been through in this industry. Any small business owner or startup founder can relate to the realities of Karl's journey: one day your cash is flowing, the next it's at a dead stop, and only those with true skill and dedication can run a company that successfully traverses both eras. We discuss the effects that AI is having on DevRel, as well as the return to in-person events. Karl has been the very definition of hands-on with his projects, and it has allowed him to gain an extremely well-rounded understanding of the technical aspects of running a company. Be sure to give it a listen!

Reach out to Karl here: https://www.linkedin.com/in/karllhughes/
Check out Draft.dev: https://draft.dev/
Find out more and listen to previous podcasts here: https://www.voxgig.com/podcast
Subscribe to our newsletter for weekly updates and information about upcoming meetups: https://voxgig.substack.com/
Join the Dublin DevRel Meetup group here: www.devrelmeetup.com

MLOps.community
GraphBI: Expanding Analytics to All Data Through the Combination of GenAI, Graph, & Visual Analytics // Paco Nathan & Weidong Yang // #310

Apr 29, 2025 · 74:01


GraphBI: Expanding Analytics to All Data Through the Combination of GenAI, Graph, & Visual Analytics // MLOps Podcast #310 with Paco Nathan, Principal DevRel Engineer at Senzing, & Weidong Yang, CEO of Kineviz.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract
Existing BI and big data solutions depend largely on structured data, which makes up only about 20% of all available information, leaving the vast majority untapped. In this talk, we introduce GraphBI, which aims to address this challenge by combining GenAI, graph technology, and visual analytics to unlock the full potential of enterprise data. Recent technologies like RAG (Retrieval-Augmented Generation) and GraphRAG leverage GenAI for tasks such as summarization and Q&A, but they often function as black boxes, making verification challenging. In contrast, GraphBI uses GenAI for data pre-processing, converting unstructured data into a graph-based format, enabling a transparent, step-by-step analytics process that ensures reliability. We will walk through the GraphBI workflow, exploring best practices and challenges in each step of the process: managing both structured and unstructured data, data pre-processing with GenAI, iterative analytics using a BI-focused graph grammar, and final insight presentation. This approach uniquely surfaces business insights by effectively incorporating all types of data.

// Bio
Paco Nathan
Paco Nathan is a "player/coach" who excels in data science, machine learning, and natural language, with 40 years of industry experience. He leads DevRel for the Entity Resolved Knowledge Graph practice area at Senzing.com and advises Argilla.io, Kurve.ai, KungFu.ai, and DataSpartan.co.uk, and is lead committer for the pytextrank and kglab open source projects. Formerly: Director of Learning Group at O'Reilly Media; and Director of Community Evangelism at Databricks.

Weidong Yang
Weidong Yang, Ph.D., is the founder and CEO of Kineviz, a San Francisco-based company that develops interactive visual analytics based solutions to address complex big data problems. His expertise spans physics, computer science, and performing art, with significant contributions to the semiconductor industry and quantum dot research at UC Berkeley and in Silicon Valley. Yang also leads Kinetech Arts, a 501(c) non-profit blending dance, science, and technology. An eloquent public speaker and performer, he holds 11 US patents, including the groundbreaking diffraction-based overlay technology, vital for sub-10-nm semiconductor production.

// Related Links
Website: https://www.kineviz.com/
Blog: https://medium.com/kineviz
Website: https://derwen.ai/paco
https://huggingface.co/pacoid
https://github.com/ceteri
https://neo4j.com/developer-blog/entity-resolved-knowledge-graphs/

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter (@mlopscommunity: https://x.com/mlopscommunity) or LinkedIn: https://go.mlops.community/linkedin
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Weidong on LinkedIn: /yangweidong/
Connect with Paco on LinkedIn: /ceteri/
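The pre-processing step the abstract describes, using GenAI to turn unstructured text into a graph that can then be analyzed step by step, can be sketched in miniature. This is purely illustrative, not the Senzing or Kineviz implementation: the `extract_triples` function is a hypothetical stand-in for an LLM call that would return (subject, relation, object) triples, faked here with a trivial string rule so the pipeline is runnable.

```python
from collections import defaultdict

def extract_triples(text):
    """Stand-in for a GenAI extraction call.

    In a GraphBI-style workflow an LLM converts unstructured text into
    (subject, relation, object) triples that can be inspected and
    verified. Here we fake the extraction by splitting on two known
    relation phrases (purely illustrative).
    """
    relations = [" works at ", " acquired "]
    triples = []
    for sentence in text.split("."):
        for rel in relations:
            if rel in sentence:
                subj, obj = sentence.split(rel, 1)
                triples.append((subj.strip(), rel.strip(), obj.strip()))
    return triples

def build_graph(triples):
    """Accumulate triples into a labeled adjacency structure."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

docs = "Alice works at Acme. Acme acquired Widgetly."
graph = build_graph(extract_triples(docs))
print(graph["Alice"])  # [('works at', 'Acme')]
print(graph["Acme"])   # [('acquired', 'Widgetly')]
```

Because each triple is a plain, inspectable record rather than a black-box answer, every downstream analytics step can be traced back to the text it came from, which is the transparency argument the abstract makes.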

A Bootiful Podcast
Java Champion, Tessl Devrel head, friend, Virtual JUG co-founder Simon Maple

Apr 24, 2025 · 77:17


Hi, Spring fans! In this episode, we catch up with Java Champion, Tessl DevRel head, Virtual JUG co-founder, and friend Simon Maple! This episode was recorded at the amazing ArcOfAI conference held in Austin, TX!

Scaling DevTools
Sunil Pai on AI agents, Cloudflare and React

Apr 24, 2025 · 50:29 (transcript available)


This episode is with Sunil Pai. He works at Cloudflare after his startup PartyKit was acquired. Previously he was on the React core team at Meta. He's a great guy, and obsessed with AI agents.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.

Links:
- Sunil Pai on X
- Sunil Pai's site
- Building agents with Cloudflare
- PartyKit
- Durable objects

Scaling DevTools
Raycast founder Thomas Paul Mann - quality, YC and AI

Apr 17, 2025 · 45:08 (transcript available)


Thomas Paul Mann is the cofounder of Raycast. I use Raycast every day as a replacement for Spotlight. For me, shortcuts are the most useful feature: I save curl requests I commonly use, as well as random things like email snippets. It's a massive time saver.

Raycast is a genuinely well-built product, so Thomas talks about quality, getting feedback, and how they ship features. We also talk about their unique YC experience and how they've been building AI into Raycast.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.

Links:
- Raycast
- Raycast Extensions Store
- Terminal Coffee x Raycast
- Thomas on Twitter/X

The Joe Reis Show
Tim Berglund - The Art of Developer Relations, Hardware Hacking, and More

Apr 16, 2025 · 54:52


Tim Berglund is the OG of DevRel. We chat about the art and craft of developer relations, hacking on hardware, and much more.

Community Pulse
The Decline of Technical Influencers (Ep 96)

Apr 15, 2025 · 50:52


There has been a lot of chat about the decline of the tech influencer. Where have they all gone? Is tech influence too heavy or too light? PJ, Mary, Wesley, and Jason share their opinions about “capital I” Influencers and where DevRel falls into all of this.

Checkouts

PJ Hagerty
* Take vacations.
* Fyre Festival 2 is real and can hurt you (https://www.fyre.mx/).

Jason Hand
* Stitched video (https://www.tiktok.com/@javavvitch/video/7484337665979157806?_r=1&_t=ZP-8v4542WIxhP)
* Original video (https://www.tiktok.com/@_jenniferopal/video/7483187087668235542)
* AI “slop” article (https://www.404media.co/ai-slop-is-a-brute-force-attack-on-the-algorithms-that-control-reality/)
* AI Tools Lab (https://ai-tools-lab.com/)

Mary Thengvall
Awesome (fictional) books that have stuck with me lately:
* When Women Were Dragons (https://amzn.to/3XCdANJ) by Kelly Barnhill
* The Midnight Library (https://www.amazon.com/Midnight-Library-Novel-Matt-Haig/dp/0525559493?crid=2XC9NV2G9FSZ3&dib=eyJ2IjoiMSJ9.2X1VMX4VBN13gI1Fm3eUtvFfYDDrB1UgW6o8pimHCKMRsUdZljuYA8UPt0uNEWQpezPL4jgGeQOKhNUUDKDiZCL70hlev8QQoAFODLSCYYHRcGHaWH6c-SIUfl-9hlWwCg4pgNfLmAi4U-PiNz9mY8AjEtRk7A1DT94rKHkb_11rxAPhs7gjEfTKIrjryhjr4OwIkmpGCpN-Pb4zNCJO8TaRKWh3fUlWuTtpFangRA8.liV0Ba6DaeVkONNImws4TX39AMvsfGnTdjU8aGbGQkg&dib_tag=se&keywords=the+midnight+library&qid=1743186669&s=books&sprefix=the+midngith,stripbooks,257&sr=1-1&linkCode=sl1&tag=persea-20&linkId=4d6bfa9b106a788cfcdd7a6b09838212&language=en_US&ref_=as_li_ss_tl) by Matt Haig
* Station Eleven (https://amzn.to/3E1xUl1) by Emily St. John Mandel

Cover art photo by Diggity Marketing on Unsplash.

Enjoy the podcast? Please take a few moments to leave us a review on iTunes (https://itunes.apple.com/us/podcast/community-pulse/id1218368182?mt=2) and follow us on Spotify (https://open.spotify.com/show/3I7g5W9fMSgpWu38zZMjet?si=eb528c7de12b4d7a&nd=1&dlsi=b0c85248dabc48ce), or leave a review on one of the other many podcasting sites that we're on! Your support means a lot to us and helps us continue to produce episodes every month. Like all things Community, this too takes a village.

Scaling DevTools
The startup behind ChatGPT voice - Russ d'Sa from LiveKit

Apr 10, 2025 · 53:52 (transcript available)


Russ D'Sa is the founder of LiveKit, an open source tool for real-time audio and video for LLM applications; they power the voice chat for ChatGPT and Character AI.

We discuss:
- How lightning works (using ChatGPT/LiveKit)
- How LiveKit started working with OpenAI
- Why Russ turned down an early 20m acquisition offer
- What it's like to work with the fastest growing company (ever?)
- How to prepare for massive scale challenges
- Russ's 3-letter Twitter handle

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.

Links:
- LiveKit
- Russ's Twitter

COMPRESSEDfm
202 | Framework Trade-offs: What Laravel Offers JavaScript Developers

Apr 8, 2025 · 53:26


Josh Cirre joins us to discuss his transition from the JavaScript ecosystem to Laravel, revealing why PHP frameworks can offer a compelling alternative for full-stack development. We explore the "identity crisis" many frontend developers face when needing robust backend solutions, how Laravel's batteries-included approach compares to piecing together JavaScript services, and the trade-offs between serverless and traditional hosting environments. Josh also shares insights on Laravel's developer experience, front-end integration options, and his thoughts on what JavaScript frameworks could learn from Laravel's approach to abstraction and infrastructure.

Show Notes
0:00 - Intro
1:02 - Sponsor: Wix Studio
1:46 - Introduction to Laravel
2:25 - Josh's Journey from Frontend to Backend
5:40 - Building the Same Project Across Frameworks
6:32 - Josh's Breakthrough with Laravel
8:20 - Laravel's Frontend Options
10:25 - React Server Components Comparison
12:00 - Livewire and Volt
13:41 - Josh's Course on Laracasts
14:08 - Laravel's DX and Ecosystem
16:46 - MVC Structure Explained for JavaScript Developers
18:25 - Type Safety Between PHP and JavaScript
21:12 - Laravel Pain Points and Criticisms
22:40 - Laravel Team's Response to Feedback
24:50 - Laravel's Limitations and Use Cases
26:10 - Laravel's Developer Products
27:20 - Option Paralysis in Laravel
30:46 - Laravel's Driver System
33:14 - Web Dev Challenge Experience
33:38 - TanStack Start Exploration
34:50 - Server Functions in TanStack
37:38 - Infrastructure Agnostic Development
41:02 - Serverless vs. Serverful Cost Comparison
44:50 - JavaScript Framework Evolution
46:46 - Framework Ecosystems Comparison
48:25 - Picks and Plugs

Links Mentioned in the Episode
- Laravel - PHP framework
- TanStack Start - React meta-framework Josh created a YouTube video about
- Livewire - Laravel's HTML-over-the-wire front-end framework
- Inertia.js - Framework for creating single-page apps
- Volt - Single file component system for Livewire
- Laravel Cloud - Managed hosting solution for Laravel applications
- Herd - Laravel's tool for setting up PHP development environments
- Forge - Laravel's server management tool
- Envoyer - Laravel's zero-downtime deployment tool
- Laracasts - Where Josh has a course on Livewire
- Josh Cirre's YouTube channel
- HTMX - Frontend library Josh compared to Livewire
- Web Dev Challenge with Jason Lengstorf (featuring Josh and Amy)
- Josh Cirre's BlueSky account (@joshcirre)
- Amy's BlueSky account
- Brad's BlueSky account

Additional Resources
- Laravel Documentation
- Svelte's new starter kit (mentioned as a good example)
- Nightwatch - Latest product from Laravel
- Laravel Vapor - Serverless deployment platform for Laravel
- Theo's Laravel exploration (discussed in the criticism section)
- Laravel Breeze
- Laravel Jetstream
- Laravel Fortify (authentication package mentioned)
- Adonis.js (JavaScript framework compared to Laravel)
- Anker USB powered hub (Josh's pick)
- Grether's Sugar Free Black Currant Pastilles (Josh's pick)
- JBL Portable Speaker (Amy's pick)

Scaling DevTools
Chris Evans & Pete Hamilton: Incident.io cofounders

Apr 3, 2025 · 49:06 (transcript available)


Pete Hamilton and Chris Evans are cofounders of Incident.io, an incident management tool.

We discuss:
- How they think about brand and how it comes from their deep understanding of incident culture
- Lawrence's article asking for new MacBooks that went viral
- Gallows humor in incidents
- Why incident.io started on Heroku despite being an incident response platform, and why "shipping fast" mattered more than "scaling perfectly"
- The benefit of building for users who are just like you
- How Incident is using GenAI

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs.

Links:
- Pete Hamilton on Twitter
- Chris Evans on Twitter
- Incident MacBook article
- The flight plan that brought UK airspace to its knees
- How Netflix drives reliability across their organization

Note: this was recorded on 13th December 2024.

Angular Master Podcast
AMP 69: Dawid Ostrowski - GDE Deep Dive, Building a Program That Developers Love

Mar 30, 2025 · 21:07


Welcome to a brand new episode of the Angular Master Podcast, where we explore the intersection of technology, community building, and developer relations. In this episode, I'm joined by Dawid Ostrowski, Head of Product Engagement on the Google Developer Ecosystem team and former lead of the Google Developer Experts (GDE) program. We go deep into the world of global developer programs and uncover the key principles behind building a community that developers not only join, but genuinely love being part of.

Don't miss out; explore the full platform here: https://goo.gle/google-for-developers

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday! If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!

We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.

A particularly compelling concept we discussed is the idea of "hybrid teams": the next evolution in workplace organization, where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.

The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.

The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control. This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.

Other highlights from our conversation:
* The Evolution of AI-Powered Agents - Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.
* Hybrid Digital Teams and the Future of Work - How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.
* Memory in AI Agents - The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.
* Business Models for AI Agents - Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.
* The Role of Standards Like MCP - Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.
* The Future of AI Code Generation and Software Engineering - How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.
* Domain Investing and Efficient Markets - Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.
* The Philosophy of Saying No - Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.

Timestamps:
* 00:00 Introduction and Guest Welcome
* 02:29 Dharmesh Shah's Journey into AI
* 05:22 Defining AI Agents
* 06:45 The Evolution and Future of AI Agents
* 13:53 Graph Theory and Knowledge Representation
* 20:02 Engineering Practices and Overengineering
* 25:57 The Role of Junior Engineers in the AI Era
* 28:20 Multi-Agent Systems and MCP Standards
* 35:55 LinkedIn's Legal Battles and Data Scraping
* 37:32 The Future of AI and Hybrid Teams
* 39:19 Building Agent AI: A Professional Network for Agents
* 40:43 Challenges and Innovations in Agent AI
* 45:02 The Evolution of UI in AI Systems
* 01:00:25 Business Models: Work as a Service vs. Results as a Service
* 01:09:17 The Future Value of Engineers
* 01:09:51 Exploring the Role of Agents
* 01:10:28 The Importance of Memory in AI
* 01:11:02 Challenges and Opportunities in AI Memory
* 01:12:41 Selective Memory and Privacy Concerns
* 01:13:27 The Evolution of AI Tools and Platforms
* 01:18:23 Domain Names and AI Projects
* 01:32:08 Balancing Work and Personal Life
* 01:35:52 Final Thoughts and Reflections

Transcript

Alessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.

swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent.ai.

Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.

swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Sean Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things. So how did you get agent religion?

Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating that I had named for was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do.
But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what I thought clever at the time. Oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone. 
So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the ChatGP 3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism. 
But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGBT, I think it was, not Autogen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical. 
It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, LGBT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread. And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RAS with MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing. 
So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting, around tool use, right? That says, okay, the LLMs now can be provided a set of tools that they call to accomplish whatever they need to accomplish in the kind of furtherance of whatever goal they're trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like MCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents.
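The "tools are just atomic agents" idea can be sketched concretely: if a tool and an agent share one interface, composition is uniform all the way down. A toy Python sketch (all names illustrative; the registry stands in for MCP-style discovery):

```python
from typing import Callable

# One primitive: an Agent takes a goal string and returns a result string.
# "Tools" are then just atomic agents, the single-celled form.

class Agent:
    def __init__(self, name: str, run: Callable[[str], str]):
        self.name = name
        self._run = run

    def run(self, goal: str) -> str:
        return self._run(goal)

# Atomic agents: what we'd usually call tools (plain functions).
adder = Agent("adder", lambda goal: str(sum(int(x) for x in goal.split("+"))))
shouter = Agent("shouter", lambda goal: goal.upper())

# A registry stands in for MCP-style discovery: who can do what.
registry = {a.name: a for a in (adder, shouter)}

def composite(goal: str) -> str:
    # Decompose the goal and delegate each step to a discovered agent.
    result = registry["adder"].run(goal)
    return registry["shouter"].run(f"the answer is {result}")

# The composite agent has the same interface: turtles all the way down.
planner = Agent("planner", composite)
print(planner.run("2+3"))  # THE ANSWER IS 5
```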
And why do we need to draw this distinction between tools, which are functions most of the time, and an actual agent? And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to qualify; otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's a good argument. I still like to think about it, because we'll talk about multi-agent systems. So we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.swyx [00:10:54]: OpenAI's already on that. Yeah. My quick philosophical engagement with you on this.
I often think about kind of the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I know people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you wearing a Bee. Yeah. Which is, you know, it's nice. I have a Limitless pendant in my pocket.Dharmesh [00:11:37]: I got one of these boys. Yeah.swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of going outside of my own body now. I don't know if you've ever had reflections on that. Like, how our self is actually being distributed outside of us. Yeah.Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas out there.
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.Alessio [00:13:04]: Yep. Do you like graph as an agent abstraction? That's been one of the hot topics with LangGraph and Pydantic and all that.Dharmesh [00:13:11]: I do. The thing I'm more interested in, in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And I grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an indexed database, what we'd call like a key-value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding databases. And graphs are interesting for a couple of reasons. One is, it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns, in a relational database, and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things.
So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that, in the context of, you know, data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process, and you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, then maybe that feeding into a RAG pipeline will actually yield better results for some use cases. I don't know, but yeah.Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graphs kind of go hand in hand, you know, but I think maybe the software before was more like primary/foreign key based relationships, versus now the models can traverse through the graph more easily.Dharmesh [00:15:22]: Yes. So I like that representation. There's something just conceptually elegant about graphs, and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing, because I love graphs, I've been long obsessed with PageRank from back in the early days. It's one of the simplest algorithms out there, and, you know, anyone with a phone has been exposed to PageRank.
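Since PageRank is the mental model here, a minimal version is worth seeing: it's just power iteration over a link graph. A hand-rolled sketch in Python (the chunk graph is invented for illustration):

```python
# Toy PageRank by power iteration. Edges point from a knowledge-graph
# chunk to a chunk it cites; chunks cited by many others accrue "authority".

def pagerank(edges, damping=0.85, iters=50):
    nodes = {n for e in edges for n in e}
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            # Dangling nodes spread their rank evenly over all nodes.
            targets = out[src] or list(nodes)
            share = damping * rank[src] / len(targets)
            for dst in targets:
                new[dst] += share
        rank = new
    return rank

edges = [("chunk_a", "chunk_c"), ("chunk_b", "chunk_c"), ("chunk_c", "chunk_a")]
ranks = pagerank(edges)
best = max(ranks, key=ranks.get)
print(best)  # chunk_c is cited by two others, so it ranks highest
```

The same loop runs on any directed graph, which is the seed of the "rank nodes in an arbitrary graph by some definition of authority" idea that comes next.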
And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank: to be able to take the idea of PageRank and apply it to an arbitrary graph. That says, okay, I'm going to define what authority looks like. And that's interesting to me, because then if you say, I'm going to take my knowledge store, maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one, or maybe this one was more popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some useful way. Yeah.swyx [00:16:34]: So I think that's generally useful for anything. I think the problem, like, so even though at my conferences, GraphRAG is super popular and people are getting knowledge graph religion, I will say it's getting traction in two areas: conversation memory, and then also just RAG in general, like the document data as a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they go really hard into it and then they get a graph that is too complex to navigate. Yes. And so the simple way to put it is, you, running HubSpot, know the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.Dharmesh [00:17:26]: So when is it over-engineering, basically? It's a great question. I don't know. So the question now, like in AI land, right, is: do we necessarily need to understand? So right now, LLMs, for the most part, are somewhat black boxes, right?
We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. It's just another form of lossy compression. It's just lossy in a way that we just don't completely understand, because it's going to grow organically and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the equivalent of the embedding algorithm, whatever they call it in graph land, sort it out. So the one with the best results wins? I think so. Yeah.swyx [00:18:26]: Or is this the practical side of me, is like, yeah, if it's useful, we don't necessarilyDharmesh [00:18:30]: need to understand it.swyx [00:18:30]: I mean, I'm happy to push back as long as you want. It's not practical to evaluate like the 10 different options out there, because it takes time. It takes people, it takes, you know, resources, right? That's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a very simple set of tables with a parent-child thing or whatever. And that sort of gives me the ability, so why would I need anything more than that?
And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that the graph representation gives you. Which is the ability to traverse the graph efficiently, in ways that, kind of going through the traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in useful ways. So you wouldn't, like, represent a social graph using that kind of relational table model. It just wouldn't scale. It wouldn't work.swyx [00:19:36]: Yeah. I think we want to move on to MCP. Yeah. But I just want to get just engineering advice. Yeah. Obviously you've had to do a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, because we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.Dharmesh [00:19:59]: It's a great question. This is a question as old as time almost, which is: what are the right and wrong levels of abstraction? That's effectively what we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Lots of times, doing something the right way is like a marginal increased cost. In those cases, just do it the right way. And this is what makes a great engineer, or a good engineer, better than a not so great one. It's like, okay, all things being equal, if it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well. Then the question is, okay, well, am I building a framework or a reusable library?
To what degree? What am I anticipating in terms of what's going to need to change in this thing, you know, along what dimension? And then I think like a business person in some ways: what's the return on calories, right? So you look at the expected value. It's like, okay, here are the five possible things that could happen, try to assign probabilities. Okay, well, if there's a 50% chance that we're going to go down this particular path some day, like, one of these five things is going to happen, and it costs you 10% more to engineer for that, it's basically something that yields a kind of interest, compounding value, as you get closer to the time of needing that. Versus having to take on debt, which is when you under-engineer it: you're taking on debt that you're going to have to pay off when you do get to that eventuality where something happens. One thing, as a pragmatist: I would rather under-engineer something than over-engineer it, if I were going to err on the side of something. And here's the reason: when you under-engineer it, yes, you take on tech debt, but the interest rate is relatively known and the payoff is very, very possible, right? Which is, oh, I took a shortcut here, as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen never actually transpires, that use case just doesn't happen, well, you just saved yourself time, right? And that has value, because you were able to do other things instead of slightly over-engineering it. But there are no perfect answers; it's an art form. And yeah, we'll bring this kind of layers-of-abstraction thing back in the code generation conversation, which I think we have later on, butAlessio [00:22:05]: I was going to ask, we can just jump ahead quickly.
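The "return on calories" heuristic above can be written down as a quick expected-value comparison (all numbers invented for illustration):

```python
# Compare engineering-ahead vs. taking on tech debt as expected value.
# Invented numbers: a 10% upfront premium to generalize now, vs. a 50%
# chance of paying a 4x rework cost later ("a week becomes four weeks").

base_cost = 1.0                         # cost of the simple version, in weeks
over_engineer_cost = base_cost * 1.10   # pay 10% now to cover the future case
p_need = 0.5                            # probability the future case ever happens
rework_multiplier = 4.0                 # rework cost if we under-engineered

expected_under = base_cost + p_need * (rework_multiplier - 1) * base_cost
expected_over = over_engineer_cost

print(f"under-engineer: {expected_under:.2f} weeks expected")
print(f"over-engineer:  {expected_over:.2f} weeks expected")
# With these numbers the known 10% premium beats the expected 1.5 extra
# weeks of rework; drop p_need below ~3% and under-engineering wins instead.
```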
Yeah. Like, as you think about vibe coding and all that, how does the percentage of potential usefulness change? I feel like with over-engineering, a lot of times it's the investment in syntax; it's less about the investment in, like, architecting. Yep. Yeah. How does that change your calculus?Dharmesh [00:22:22]: A couple of things, right? One is, you know, going back to that kind of ROI, or return on calories, kind of calculus or heuristic you think through: it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, in anticipating kind of future needs? If the cost of fixing, or redoing, the under-engineering right now will trend towards zero, that says, okay, well, I don't have to get it right right now, because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Because that's going to trend towards zero: the ability to refactor code. And because, not that long from now, we're going to have, you know, large code bases be able to exist as context for a code generation or a code refactoring model. So I think it's going to make the case for under-engineering even stronger. Which is why I'd take on that cost. You just pay the interest when you get there. Just go on with your life, vibe code it, and come back when you need to. Yeah.Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things. Like, today I built an autosave for our internal notes platform, and I literally just asked Cursor: can you add autosave? Yeah. I don't know if it's over- or under-engineered. Yep. I just vibe coded it. Yep.
And I feel like at some point we're going to get to the point where the models kindDharmesh [00:23:36]: of decide where the right line is. But this is where, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that, stuff that, you know, we talk about. But then, like in your example, you know, one of the risks that we have is that, because adding a feature, like a save or whatever the feature might be, to a product, as that price tends towards zero, are we going to be less discriminant about what features we add, as a result making products more complicated, which has a negative impact on the user and a negative impact on the business? And so that's the thing I worry about. If it starts to become too easy, are we going to be too promiscuous in adding product extensions and things like that? It's like, ah, why not add X, Y, Z or whatever? Back then it was like, oh, we only have so many engineering hours or story points or however you measure things. That at least kept us in check a little bit. Yeah.Alessio [00:24:22]: And then over-engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, hey, this is the amount of complexity and over-engineering you can do before you've got to ask me if we should actually do it versus do something else.Dharmesh [00:24:45]: So, you know, we've already seen, in the code generation world, this kind of compressed cycle time, right?
It's like, okay, we went from auto-complete in GitHub Copilot, to, like, oh, finish this particular thing and hit tab, to, oh, I sort of know your file or whatever, I can write out a full function for you, to now, I can hold a bunch of the context in my head, so we can do app generation, which we have now with Lovable and Bolt and Replit Agent and other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Makes sense. We might be able to generate platforms: as though, I want a platform for ERP that does this, whatever. And that includes the APIs, includes the product and the UI, and all the things that make for a platform. There's nothing that says we would stop there. Like, okay, can you generate an entire software company someday? Right. With the platform and the monetization and the go-to-market and the whatever. And you know, that's interesting to me in terms of, you know, when you take it to almost ludicrous levels of abstraction.swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead?Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully, if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works, I think that has value. So I have a 14-year-old right now who's taking a Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is: because it's not about the syntax, it's not about the coding.
What he's learning is like the fundamental thing of how things work. And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. And whether it's functions manifested as math, which he's going to get exposed to regardless, or some core primitives to the universe, I think the more you understand them, those are what I would kind of think of as really large dots in your life that will have a higher gravitational pull and value to them that you'll then be able to draw on. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful.swyx [00:26:59]: You know, part of the pitch that I evaluated for AI engineer as a term is that maybe the traditional interview path or career path of software engineer goes away, because what's the point of LeetCode? Yeah. And, you know, it actually matters more that you know how to work with AI and to implement the things that you want. Yep.Dharmesh [00:27:16]: That's one of the interesting things that's happened with generative AI. You know, you go from machine learning and the models and just that underlying form, which is like true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI, we call it AI, and it's obviously got its roots in machine learning, but it just feels fundamentally different to me. Like you have the vibe. It's like, okay, well, this is just a whole different approach to software development, to so many different things.
And so I'm wondering now, it's like, an AI engineer, if you were to draw the Venn diagram, it's interesting, because it's the cross between, like, AI things, generative AI and what the tools are capable of, what the models do, and this whole new kind of body of knowledge that we're still building out, it's still very young, intersected with kind of classic engineering, software engineering. Yeah.swyx [00:28:04]: I just described the overlap as: it separates out eventually until it's its own thing, but it's starting out as software engineering. Yeah.Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Claude Desktop and Cursor are like the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then you can just put the context in Cursor. How do you think about that abstraction layer? Does it feel almost too magical in a way? Do you think you get enough? Because you don't really see how the server itself is then kind of like repackaging the information for you.Dharmesh [00:28:41]: I think MCP as a standard is one of the better things that's happened in the world of AI, because a standard needed to exist, and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. You know, a reasonable engineer can stand up an MCP server relatively easily. The thing that has me excited about it is, so I'm a big believer in multi-agent systems. And so that's going back to this idea of an atomic agent.
So imagine the MCP server. Obviously it calls tools, but the way I think about it, so I'm working on my current passion project, which is agent.ai. And we'll talk more about that in a little bit. I think we should, because I think it's interesting, not to promote the project at all, but there's some interesting ideas in there. One of which is: if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery, and we're going to need some standard way. It's like, okay, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on. I'm sure others will, and have been doing directories of these, and there's going to be a standard around that too. How do you build out a directory of MCP servers? I think that's going to unlock so many things, just because, and we're already starting to see it. So I think MCP, or something like it, is going to be the next major unlock, because it allows systems that don't know about each other, and don't need to, it's that kind of decoupling, of like Sentry and whatever tools someone else was building. And it's not just about, you know, Claude Desktop or things like that. Even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients, versus just the chatbot kind of things, like, you know, Claude Desktop and Cursor and things like that. But yeah, I'm very excited about MCP in that general direction.swyx [00:30:39]: I think the typical cynical developer take is like, we have OpenAPI. Yeah. What's the new thing? I don't know if you have a quick MCP-versus-everything-else take? Yeah.Dharmesh [00:30:49]: So I like OpenAPI, right? It's a descriptive thing. It's OpenAPI. OpenAPI. Yes, that's what I meant. So it's basically a self-documenting thing. We can do machine-generated, lots of things from that output.
It's a structured definition of an API. I get that, love it. But MCPs sort of are kind of use case specific. They're perfect for exactly what we're trying to use them for around LLMs, in terms of discovery. It's like, okay, I don't necessarily need to know all this detail. And so right now we have, we'll talk more about MCP server implementations, but... We will? I think. I don't know. Maybe we won't. At least it's in my head. It's like a background processor. But I do think MCP adds value above OpenAPI. It's, yeah, just because it solves this particular thing. And if we had come to the world, which we have, and it's like, hey, we already have OpenAPI. It's like, if that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off: because it marginally adds something that was missing before and doesn't go too far. And so that's why the kind of rate of adoption, you folks have written about this and talked about it. Yeah, why MCP won. Yeah. And it won because the universe decided that this was useful, and maybe it gets supplanted by something else. Yeah. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that.swyx [00:32:09]: The meta lesson, this is, I mean, he's an investor in DevTools companies. I work in developer experience and DevRel in DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, isn't HubSpot known for a standard? You know, obviously inbound marketing. But is there a standard or protocol that you ever tried to push? No.Dharmesh [00:32:30]: And there's a reason for this. Yeah. Is that? And I don't mean to speak for the people of HubSpot, but I personally. You kind of do. I'm not smart enough. That's not the, like, I think I have a... You're smart. Not enough for that. I'm much better off understanding the standards that are out there.
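To make the "adoptable by mere mortals" point about MCP concrete, the core shape of the protocol, discovery via tools/list and invocation via tools/call, can be mocked up in a few lines. This is an illustration of the pattern only, not the real MCP SDK or wire format (which is JSON-RPC over stdio or HTTP with a proper schema); the tool here is invented:

```python
import json

# A toy, MCP-shaped tool registry: clients discover tools via "tools/list"
# and invoke them via "tools/call". This mimics the pattern only.

TOOLS = {
    "get_weather": {
        "description": "Return a canned weather report for a city.",
        "handler": lambda args: f"Sunny in {args['city']}",
    },
}

def handle(request: str) -> str:
    req = json.loads(request)
    if req["method"] == "tools/list":
        # Discovery: a client that knows nothing about this server can ask
        # what it is capable of, then decide what to delegate.
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        result = None
    return json.dumps({"result": result})

print(handle('{"method": "tools/list"}'))
print(handle('{"method": "tools/call", '
             '"params": {"name": "get_weather", '
             '"arguments": {"city": "Boston"}}}'))
```

The decoupling in the conversation falls out of exactly this split: the client only needs the two methods, never the server's internals.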
And I'm more on the composability side. Let's, like, take the pieces of technology that exist out there, combine them in creative, unique ways. And I like to consume standards. I don't like to, and that's not that I don't like to create them. I just don't think I have both the raw wattage and the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created?swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards, like OpenTelemetry is a big standard, and LightStep never capitalized on that.Dharmesh [00:33:15]: So, okay, so if I were to do a standard, there's two things that have been in my head in the past. One was a very, very basic one around open marketing. I don't even have the domain, and I have a domain for everything. There we go. Because the issue we had, HubSpot grew up in the marketing space, and there was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here, it's called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. But what I think should exist in the world is: right now, our information, all of us nodes, is in the social graph at Meta or the professional graph at LinkedIn. Both of which are relatively closed, in actually very annoying ways. Like very, very closed, right? Especially LinkedIn. Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open or publish it in whatever forms that I choose, as long as I have control over it, as opt-in. So the idea around OpenGraph is: here's a standard, here's a way to publish it. I should be able to go to OpenGraph.org slash Dharmesh dot JSON and get it back. And it's like, here's your stuff, right?
And I can choose along the way and people can write to it and I can approve. And there can be an entire system. And if I were to do that, I would do it as a... Like a public benefit, non-profit-y kind of thing, as this is a contribution to society. I wouldn't try to commercialize that. Have you looked at ATProto? What's that? ATProto.swyx [00:34:43]: It's the protocol behind Bluesky. Okay. My good friend, Dan Abramov, who was the face of React for many, many years, now works there. And he actually did a talk that I can send you, which basically kind of tries to articulate what you just said. But he does, he loves doing these like really great analogies, which I think you'll like. Like, you know, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was like our handle and then the domain? Yep. So, and that's really like your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep.Dharmesh [00:35:19]: You should at least be able to automate it or do like, yes, I should be able to plug it into an agentic thing. Yeah. Yes. I think we're... Because so much of our data is... Locked up. I think the trick here isn't that standard. It is getting the normies to care.swyx [00:35:37]: Yeah. Because normies don't care.Dharmesh [00:35:38]: That's true. But building on that, normies don't care. So, you know, privacy is a really hot topic and an easy word to use, but it's not a binary thing. Like there are use cases where, and we make these choices all the time, that I will trade, not all privacy, but I will trade some privacy for some productivity gain or some benefit to me that says, oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company.Alessio [00:36:02]: If I'm getting, you know, this in return, but that sort of should be my option. 
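The opengraph.org/dharmesh.json idea above is hypothetical, so this sketch is too: the field names are invented, and it only illustrates what a user-owned, opt-in profile document might look like, with the owner deciding which fields get served.

```python
import json

# A purely hypothetical sketch of the "open graph" profile document
# described above: a user-owned record fetched from something like
# opengraph.org/dharmesh.json. Every field name here is invented.

profile = {
    "id": "dharmesh",
    "visibility": "public",          # the owner controls what is exposed
    "claims": {
        "name": "Dharmesh Shah",
        "roles": ["founder", "CTO"],
    },
    "connections": {
        "opt_in": True,              # shared only if the owner opts in
        "endpoints": ["https://opengraph.example/dharmesh/connections"],
    },
}

def serve_profile(requested_fields: list[str]) -> str:
    """Return only the fields the owner has chosen to expose."""
    exposed = {
        k: v for k, v in profile.items() if k in requested_fields or k == "id"
    }
    return json.dumps(exposed)

print(serve_profile(["claims"]))
```

The design point is the opt-in filter: the document is readable by anyone, but the owner, not the platform, decides which sections are in it.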
I think now with computer use, you can actually automate some of the exports. Yes. Like something we've been doing internally is like everybody exports their LinkedIn connections. Yep. And then internally, we kind of merge them together to see how we can connect our companies to customers or things like that.Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it, but they feel strongly enough on the, you know, do not take LinkedIn data that they will block even browser use kind of things or whatever. They go to great, great lengths, even to see patterns of usage. And it says, oh, there's no way you could have, you know, gotten that particular thing or whatever without, and it's, so it's, there's...swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah.Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's on, there was no, and so that's why they won. But now, you know, the question is around, can LinkedIn... I think they can. Like, when you use, as a user, you use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account that violates our terms of service, they can shut your account down, right? They can. And they, yeah, so, you know, we don't need to make this a discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from of this kind of members-first approach to, you know, a privacy-first. I sort of get that. 
But sometimes you sort of have to wonder, it's like, okay, well, that was 15, 20 years ago. There's likely some controlled ways to expose some data on some member's behalf and not just completely be a binary. It's like, no, thou shalt not have the data.swyx [00:37:54]: Well, just pay for sales navigator.Alessio [00:37:57]: Before we move to the next layer of instruction, anything else on MCP you mentioned? Let's move back and then I'll tie it back to MCPs.Dharmesh [00:38:05]: So I think the... Open this with agent. Okay, so I'll start with... Here's my kind of running thesis, is that as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more. I don't like to anthropomorphize. We'll talk about why this is not that. Less as just like raw tools and more like teammates. They'll still be software. They should self-disclose as being software. I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things just like a team member might do, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? You can... Whatever it happens to be. So I think it is... I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams... So back in the day, hybrid teams were, oh, well, you have some full-time employees and some contractors. Then it was like hybrid teams are some people that are in the office and some that are remote. That's the kind of form of hybrid. The next form of hybrid is like the carbon-based life forms and agents and AI and some form of software. So let's say we temporarily stipulate that I'm right about that over some time horizon that eventually we're going to have these kind of digitally hybrid teams. 
So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. 
But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there in order for the... Because those digital employees, the digital workers don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier for, like... And so I have, as one does, it's like, oh, I'll build a low-code platform for building agents. How hard could that be, right? Like, very hard, as it turns out. But it's been fun. So now, agent.ai has 1.3 million users. 3,000 people have actually, you know, built some variation of an agent, sometimes just for their own personal productivity. About 1,000 of which have been published. And the reason this comes back to MCP for me, so imagine that and other networks, since I know agent.ai. So right now, we have an MCP server for agent.ai that exposes all the internally built agents that we have that do, like, super useful things. Like, you know, I have access to a Twitter API where I can subsidize the cost. And I can say, you know, if you're looking to build something for social media, these kinds of things, with a single API key, and it's all completely free right now, I'm funding it. That's a useful way for it to work. And then a developer can say, oh, I have this idea. I don't have to worry about OpenAI. I don't have to worry about, now, you know, this particular model is better. It has access to all the models with one key. And we proxy it kind of behind the scenes. And then expose it. So then we get this kind of community effect, right? That says, oh, well, someone else may have built an agent to do X. Like, I have an agent right now that I built for myself to do domain valuation for website domains because I'm obsessed with domains, right? And, like, there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here are what houses in your neighborhood sold for. It's like, well, why doesn't that exist? 
We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. It's like, okay, well, let me go look for past transactions. You say, okay, I'm going to type in agent.ai, agent.com, whatever domain. What's it actually worth? I'm looking at buying it. It can go and say, oh, which is what it does. It's like, I'm going to go look at are there any published domain transactions recently that are similar, either use the same word, same top-level domain, whatever it is. And it comes back with an approximate value, and it comes back with its kind of rationale for why it picked the value and comparable transactions. Oh, by the way, this domain sold for published. Okay. So that agent now, let's say, existed on the web, on agent.ai. Then imagine someone else says, oh, you know, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. Like a common problem, every startup is like, ah, I don't know what to call it. And so they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is like, oh, well, I need to find the domain for it. What are possible choices? Now it's like, okay, well, it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent. It's like, okay, well, I want to find where the arbitrage is, where the agent valuation tool says this thing is worth $25,000. It's listed on GoDaddy for $5,000. It's close enough. Let's go do that. Right? And that's a kind of composition use case that in my future state. Thousands of agents on the network, all discoverable through something like MCP. And then you as a developer of agents have access to all these kind of Lego building blocks based on what you're trying to solve. 
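The valuation agent itself is proprietary, but the comparable-transactions heuristic described above can be sketched in a few lines. The sales data, similarity weights, and numbers here are all made up for illustration.

```python
# A toy version of the comparable-transactions heuristic: score past
# (published) domain sales by how similar they are to the target, based
# on shared words and same TLD, and average the closest matches. The
# data and weights are invented; the real agent is far more involved.

PAST_SALES = {
    "agenthub.com": 40_000,
    "agent.io": 25_000,
    "cookingtips.com": 3_000,
}

def similarity(a: str, b: str) -> float:
    name_a, tld_a = a.rsplit(".", 1)
    name_b, tld_b = b.rsplit(".", 1)
    score = 0.0
    if tld_a == tld_b:
        score += 0.5                      # same top-level domain
    if name_a in name_b or name_b in name_a:
        score += 1.0                      # shared word / substring
    return score

def estimate_value(domain: str, top_n: int = 2) -> int:
    # rank past sales by similarity, then average the closest comparables
    comps = sorted(PAST_SALES, key=lambda d: similarity(domain, d), reverse=True)
    best = comps[:top_n]
    return round(sum(PAST_SALES[d] for d in best) / len(best))

print(estimate_value("agent.com"))  # averages the two closest comparables
```

The arbitrage composition then reduces to comparing estimate_value(domain) against the aftermarket listing price returned by some other agent.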
Then you blend in orchestration, which is getting better and better with the reasoning models now. Just describe the problem that you have. Now, the next layer that we're all contending with is that how many tools can you actually give an LLM before the LLM breaks? That number used to be like 15 or 20 before quality kind of started to degrade dramatically. And so that's the thing I'm thinking about now. It's like, okay, if I want to... If I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be able to be helpful for this particular thing? Yeah.Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, that seems like, you know, then there's going to be a Latent Space Scheduler. And then once I schedule a research, you know, and you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good.swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing.Alessio [00:45:01]: Yeah, just to like quickly run through it. You can basically create all these different steps. And these steps are like, you know, static versus like variable-driven things. How did you decide between this kind of like low-code-ish versus doing, you know, low-code with code backend versus like not exposing that at all? Any fun design decisions? Yeah. And this is, I think...Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, working through the choice between deterministic and non-deterministic. Like if you're like in a business or building, you know, some sort of agentic thing, do you decide to do a deterministic thing? Or do you go non-deterministic and just let the LLM handle it, right, with the reasoning models? 
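The "RAG for tools" layer named above can be illustrated with plain word overlap standing in for embeddings. The tool names and descriptions are invented; a real system would use vector similarity over the tool registry, but the shape is the same: score every tool against the prompt, forward only the top few.

```python
# A minimal sketch of "RAG for tools": with 1,000 agents you can't hand
# the LLM every tool definition, so an intermediate layer scores each
# tool's description against the user's prompt and forwards only the
# best matches. Plain word overlap keeps this runnable without a model.

TOOL_DESCRIPTIONS = {
    "domain_valuation": "estimate the market value of a website domain name",
    "brand_namer": "generate startup name ideas from descriptive keywords",
    "tweet_writer": "draft social media posts for twitter",
}

def shortlist_tools(prompt: str, k: int = 2) -> list[str]:
    words = set(prompt.lower().split())

    def overlap(tool: str) -> int:
        # count prompt words that also appear in the tool's description
        return len(words & set(TOOL_DESCRIPTIONS[tool].split()))

    ranked = sorted(TOOL_DESCRIPTIONS, key=overlap, reverse=True)
    return ranked[:k]

print(shortlist_tools("what is this domain name worth"))
```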
The original idea and the reason I took the low-code stepwise, a very deterministic approach. A, the reasoning models did not exist at that time. That's thing number one. Thing number two is if you can get... If you know in your head... If you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me, like, what steps do you need executed? So right now what I'm playing with... So one thing we haven't talked about yet, and people don't talk about UI and agents. Right now, the primary interaction model... Or they don't talk enough about it. I know some people have. But it's like, okay, so we're used to the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend of... Some of those things are going to be synchronous as they are now. But some are going to be... Some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I have this... I only have one other speed. It's even faster. So imagine it's like if you're working... So back to my, oh, we're going to have these hybrid digital teams. Like, you would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. Like, that's not how the world works. So it's nice to be able to just, like, hand something off to someone. It's like, okay, well, maybe I expect a response in an hour or a day or something like that.Dharmesh [00:46:52]: In terms of when things need to happen. So the UI around agents. So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? That says, oh, we have inputs of, like, four different types. Like, we've got a dropdown, we've got multi-select, all the things. It's like back in HTML, the original HTML 1.0 days, right? 
Like, it's the smallest possible set of primitives for a UI. And it just says, okay, because we need to collect some information from the user, and then we go do steps and do things. And generate some output; HTML or markdown are the two primary examples. So the thing I've been asking myself, if I keep going down that path. So people ask me, I get requests all the time. It's like, oh, can you make the UI sort of boring? I need to be able to do this, right? And if I keep pulling on that, it's like, okay, well, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to go back to coding once I get done here, is around injecting code-generation-driven UI generation into the agent.ai flow, right? As a builder, you're like, okay, I'm going to describe the thing that I want, much like you would do in a vibe coding world. But instead of generating the entire app, it's going to generate the UI that exists at some point in either that deterministic flow or something like that. It says, oh, here's the thing I'm trying to do. Go generate the UI for me. And I can go through some iterations. And what I think of it as a, so it's like, I'm going to generate the code, tweak it, go through this kind of prompt style, like we do with vibe coding now. And at some point, I'm going to be happy with it. And I'm going to hit save. And that's going to become the action in that particular step. It's like a caching of the generated code, so I don't, like, incur any inference-time costs. It's just the actual code at that point.Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandbox. And they powered the LMArena web arena. So it's basically the, just like you do LLMs, like text to text, they do the same for like UI generation. So if you're asking a model, how do you do it? But yeah, I think that's kind of where.Dharmesh [00:48:45]: That's the thing I'm really fascinated by. 
So the early LLMs, you know, were understandably, but laughably, bad at simple arithmetic, right? That's the thing normies, like my wife, would ask us, like, you call this AI? Like it can't, my son would be like, it's just stupid. It can't even do like simple arithmetic. And then like we've discovered over time that, and there's a reason for this, right? It's like, it's a large, there's, you know, the word language is in there for a reason in terms of what it's been trained on. It's not meant to do math, but now it's like, okay, well, the fact that it has access to a Python interpreter that I can actually call at runtime, that solves an entire body of problems that it wasn't trained to do. And it's basically a form of delegation. And so the thought that's kind of rattling around in my head is that that's great. So it, like, took the arithmetic problem first. Now, like anything that's solvable through a relatively concrete Python program, it's able to do a bunch of things that it couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in an agentic AI world, but maybe let the LLM handle it, but not in the classic sense. Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that. So it's a little bit more predictable. Uh, I don't know, but yeah.Alessio [00:49:48]: And especially when is the human supposed to intervene? So, especially if you're composing them, most of them should not have a UI because then they're just web hooking to somewhere else. I just want to touch back. I don't know if you have more comments on this.swyx [00:50:01]: I was just going to ask when you, you said you got, you're going to go back to code. What
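The delegation pattern described above, routing arithmetic to an actual interpreter instead of asking the language model, is a code-interpreter tool in miniature. A restricted evaluator makes the idea concrete; this sketch supports only numeric literals and basic operators.

```python
import ast
import operator

# The delegation idea in miniature: anything that looks like math goes
# to a real interpreter rather than the language model. This restricted
# evaluator accepts numbers and + - * / ** only, so arbitrary code in
# the expression string cannot execute.

OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

print(calc("12 * (34 + 56)"))  # 1080
```

An LLM tool loop would expose calc as a tool and route any "what is 12 * (34 + 56)?" style request through it instead of sampling digits from the model.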

Scaling DevTools
David Cramer, founder of Sentry - why you should consider M&A

Scaling DevTools

Play Episode Listen Later Mar 27, 2025 54:19 Transcription Available


David Cramer, co-founder of Sentry, talks M&A and why it should be considered more often when you don't achieve huge success. Plus we talk about the importance of good branding.
We discuss:
- The biggest mistake small startup founders make by not exploring potential acquisitions.
- The role of ego in startups
- Product-market fit
- Hiring entrepreneurial talent and why acqui-hiring is so big.
- The significance of branding beyond just marketing – how it builds trust, recognition, and demand.
- Sentry's approach to branding, emphasizing authenticity, community, and accessibility.
- What DevTools can learn from Liquid Death and Porsche
- Why brand matters
This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign-On and audit logs. https://workos.com/
Links:
- David Cramer's blog
- David Cramer on X
- Sentry

Rocket Ship
#063 - From Idea to App using Replit with Matt Palmer

Rocket Ship

Play Episode Listen Later Mar 25, 2025 57:06


In this conversation, Simon Grimm and Matt Palmer discuss the capabilities and evolution of Replit, a platform that allows developers to quickly turn ideas into applications using AI tools. They explore the features of Replit, including its ability to create full stack applications, the integration of AI, and the unique advantages it offers compared to other development tools. The discussion also touches on the possibilities and limitations of using Replit for various types of projects. In this conversation, Simon and Matt discuss the challenges of managing Python environments and the advantages of using Replit for development. They explore how developers can integrate various tools into their workflows, the benefits of building with AI for rapid prototyping, and the importance of effective prompt engineering. The discussion also touches on the future collaboration between Replit and Expo, highlighting the evolving landscape of software development.
Learn React Native - https://galaxies.dev
Matt Palmer
Matt leads developer relations and product marketing at Replit, creating everything from tutorials to technical content. He got his start in data, working as a product analyst at AllTrails before moving to data engineering and eventually DevRel. He's worked on content with companies like LinkedIn, O'Reilly Media, xAI and Y Combinator. Outside of work, you can find him lifting weights or exploring the outdoors. Matt currently lives in San Francisco, but hails from Asheville, North Carolina.
https://x.com/mattppal
https://youtube.com/@mattpalmer
https://www.linkedin.com/in/matt-palmer/
https://mattpalmer.io/
Links
- Replit: https://replit.com/
- Replit X: https://x.com/replit
- Replit YouTube: https://www.youtube.com/@replit
- Replit Expo / React Native template: https://replit.com/@replit/Expo
- Replit Sign-up: https://replit.com
- Expo tutorial: https://www.youtube.com/playlist?list=PLto9KpJAqHMRuHwQ9OUjkVgZ69efpvslM
- Expo Blog: https://expo.dev/blog/from-idea-to-app-with-replit-and-expo
Takeaways
- Replit allows developers to create applications quickly and efficiently.
- AI integration in Replit enhances the development process.
- The platform supports multiple programming languages, primarily JavaScript and Python.
- Replit's workspace is designed for ease of use, requiring no installations.
- Users can deploy applications with a single click.
- Replit is evolving rapidly with advancements in AI technology.
- The platform is suitable for both beginners and experienced developers.
- Replit's unique features set it apart from other development tools.
- The community around Replit is growing, with increasing interest and usage.
- Building complex applications still requires significant effort and planning.
- Python environments can be cumbersome for developers.
- Replit excels in managing single directory projects.
- AI can significantly speed up the prototyping process.
- Disposable software allows for quick iterations and testing.
- Effective prompt engineering can enhance AI outputs.
- Developers should focus on minimum viable prompts for efficiency.
- Replit's integration with Expo is a promising development.
- AI tools can help in learning and understanding code better.
- Collaboration between tools can streamline the development process.
- Keeping up with new tools and technologies is essential for developers.

Community Pulse
The Importance of Humility and Sincerity in DevRel (Ep 95)

Community Pulse

Play Episode Listen Later Mar 21, 2025 36:12


Communicating the message is only part of the job of a DevRel practitioner - there's also the method. Ensuring you share the same alignment and you are seen as a member of the community is even more important than being able to educate a community. In this episode we'll look at the importance of keeping yourself humble and keeping the message sincere in order to find success within the tech world. “Sit down. Be Humble” - Kendrick Lamar Checkouts Chris DeMars * Off the Hook (https://www.youtube.com/@offthehookdetroit) Wesley Faulkner * Wear Extra Fingers - Life Hack (https://x.com/weirddalle/status/1746674550891291055) PJ Hagerty * Shaolin - the WuTang board game (https://www.chillbgames.com/shaolin?gad_source=1&gclid=Cj0KCQiAz6q-BhCfARIsAOezPxnUbw2cvBC79gHZyq5NAgYiZN3ItzQji069Bc-iCRG9CBrDIdZKLmMaAi3XEALw_wcB) * WuTang Final Tour with Run the Jewels (https://www.thewutangclan.com/tour/) Enjoy the podcast? Please take a few moments to leave us a review on iTunes (https://itunes.apple.com/us/podcast/community-pulse/id1218368182?mt=2) and follow us on Spotify (https://open.spotify.com/show/3I7g5W9fMSgpWu38zZMjet?si=eb528c7de12b4d7a&nd=1&dlsi=b0c85248dabc48ce), or leave a review on one of the other many podcasting sites that we're on! Your support means a lot to us and helps us continue to produce episodes every month. Like all things Community, this too takes a village. Special Guest: Chris DeMars.

Scaling DevTools
raylib founder Ramon Santamaria - #2 most popular open-source game-engine in the world

Scaling DevTools

Play Episode Listen Later Mar 20, 2025 32:57


Ramon, creator of Raylib, joins us to discuss his journey from building an educational tool to establishing one of the most popular open-source game engines. As of February 2025, Raylib is the second most popular open-source game engine behind Godot, boasting 25,000 GitHub stars, 13,000 Discord community members, and over 8,000 subreddit members. Ramon has transitioned from lecturing and consulting to focusing on his paid tools built around Raylib.
We discuss:
- How Raylib started as a teaching project to help art students learn programming through simple and intuitive function naming.
- The active community behind Raylib and how Ramon personally engages with new members, contributing to the project's growth.
- Why simplicity and not making assumptions about prior knowledge can create a strong foundation for both beginners and experienced developers.
- The benefits of using a low-level library like Raylib versus higher-level game engines like Unity, particularly for small indie games.
- Ramon's approach to managing his workload as a solo developer, emphasizing organization, automation, and using his own tools to build tools.
- His method of testing new tools by quickly launching them, observing market response, and iterating on the most successful ones.
- The importance of enjoying the process of building an open-source project rather than focusing solely on commercial success.
This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. https://workos.com/
Links:
- Raylib (https://www.raylib.com/)
- Cat and Onion game (https://store.steampowered.com/app/2781210/CAT__ONION/)
- Raylib GitHub (https://github.com/raysan5/raylib)
- Raylib Discord (https://discord.gg/raylib)
- Raylib Subreddit (https://www.reddit.com/r/raylib/)
- Ramon's Tools (https://raylibtech.com/tools/)

php[podcast] episodes from php[architect]
Community Corner: DevRel With Tessa Kriesel

php[podcast] episodes from php[architect]

Play Episode Listen Later Mar 14, 2025 15:47


In this episode, Scott talks with Tessa Kriesel about #DevRel and her talk at #phptek 2025. PHP Tek is looking for sponsors! See https://phptek.io/blog/elevate-your-brand-sponsorship-phptek-2025 for more information. Links: HoneyBadger.io – https://HoneyBadger.io Our Discord – https://discord.gg/aMTxunVx Tessa's Social Media: Website – https://www.tessakriesel.com Twitter – https://x.com/tessak22 LinkedIn – https://www.linkedin.com/in/tessak22/ Scott's Social Media: Bluesky – https://bsky.app/profile/scottkeckwarren.bsky.social Mastodon […] The post Community Corner: DevRel With Tessa Kriesel appeared first on PHP Architect.

Scaling DevTools
Temporal founders: Samar Abbas and Maxim Fateev

Scaling DevTools

Play Episode Listen Later Mar 13, 2025 49:19 Transcription Available


Maxim Fateev and Samar Abbas from Temporal join us to discuss how their durable execution platform ensures processes complete reliably at scale.
We discuss:
- How Temporal gained enterprise adoption with companies like Airbnb, HashiCorp, and Snapchat.
- Why Temporal compensates salespeople based on customer consumption.
- Temporal's role in Snapchat's story processing and Taco Bell's Taco Tuesday scalability.
- How Temporal earns enterprise trust through security, reliability, and scalability.
- The structure of Temporal's sales team and their focus on long-term customer success.
- Exciting trends in AI and low-code/no-code development.
This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs.
Links:
- Temporal
- Temporal GitHub

Scaling DevTools
Nikita Shamgunov - founder of Neon: storytelling, pricing and hiring execs

Scaling DevTools

Play Episode Listen Later Mar 6, 2025 47:07 Transcription Available


Nikita Shamgunov is the founder of Neon, an open-source serverless Postgres company. Before Neon, Nikita co-founded MemSQL, now SingleStore, which is valued at over a billion dollars. He has also worked as a VC at Khosla Ventures and held engineering roles at Meta and Microsoft. Nikita is known for his strategic thinking and transparency about his decision-making process.
We discuss:
- The importance of storytelling and providing a clear narrative for your company
- When to introduce a sales team and how to build a sales and marketing "machine"
- Pricing strategies, including pricing for storage and compute in the data and analytics space
- The evolution of revenue models in DevTools: from selling seats and storage/compute to selling tokens
- Lessons learned from hiring MongoDB's VP of Engineering, focusing on improving reliability and building strong team management processes
- The benefits of using a high-quality recruiting firm and avoiding the pitfalls of bad hires
- Balancing competitiveness with respect for competitors to maintain credibility, particularly in the developer tools market
- The idea of “developing your taste” in product development, inspired by Guillermo Rauch from Vercel
- How modern dev tools can monetize through seats, storage/compute, or tokens, with tokens currently being the most profitable
- Why Nikita advises DevTools founders to understand the business model framework and align it with their strategy
This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs.
Links:
- Neon
- SingleStore
- Khosla Ventures
- Fusion Talent

Scaling DevTools
How to name your startup: David Placek - named Vercel, Azure & Blackberry

Scaling DevTools

Play Episode Listen Later Feb 27, 2025 47:07 Transcription Available


David Placek from Lexicon - the man who named Vercel and Azure - explains the importance of selecting a name that goes beyond simply describing what a product does. He shares what you can do to come up with a great name.
We cover:
- Common Naming Pitfalls: Discusses why names that merely describe a product or service fail to capture imagination and differentiation.
- The Strategic Impact of a Name: Explains how a well-chosen name can deliver significant returns on investment by reinforcing brand behavior and market positioning.
- Sound Symbolism and Cognitive Science: Covers research into how letter sounds (for example, the “V” in Vercel) influence perception and contribute to a name's effectiveness.
- The Naming Process: Details the rigorous process behind naming—from trademark searches and legal reviews to global linguistic evaluations and whiteboard sessions with clients.
- Advice for Early-Stage Founders: Encourages startups to first define their market behavior and the change they intend to create. The right name will emerge from a clear strategic vision.
This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs.
Links:
- Lexicon Branding
- Vercel
- PG .com quote

Modern Web
Is DevRel Really Worth It for Most Organizations?

Feb 26, 2025 · 41:32


On this episode of the Modern Web Podcast, Rob Ocel and Danny Thompson sit down with Marc Backes, a freelance full-stack engineer with a wild journey through Vue.js, Nuxt, and DevRel. Marc shares what makes the Vue community stand out, why DevRel often misses the mark, and how Wikipedia uses Vue 3 to scale content across thousands of languages.

Then, things get real. Marc opens up about a $250,000 startup disaster that changed his view on business forever. Meanwhile, Danny breaks down what it takes to run a tech conference on a shoestring budget, and why developers hate traditional marketing.

Key Points from this episode:
- The Power of Vue & Nuxt – Marc shares why he chose Vue.js, how he built his website with Nuxt, and what makes the Vue community unique.
- DevRel: Hype vs. Reality – A discussion on whether DevRel is truly valuable for companies, how it's often misused, and what actually works in developer advocacy.
- A $250K Startup Mistake – Marc's story of losing $250,000 in a failed startup and the crucial lesson about contracts and trust in business.
- Scaling Tech & Community – Insights on Wikipedia's use of Vue 3 for translation, plus Danny's experience running a tech conference with limited resources.

Chapters
0:00 - Introduction to Entrepreneurship and Failure
0:43 - Podcast Introduction and Guest Welcome
4:25 - Marc's Experience in the Vue Community
9:22 - Working with Large-Scale Organizations
13:05 - Transitioning Between Developer and DevRel
19:00 - Is DevRel Worth It?
24:25 - The Challenges of Running a Tech Conference
26:02 - Lessons from Entrepreneurship
30:56 - The Emotional Toll of Failure
35:03 - Revisiting the $250,000 Grant Story
39:42 - Handling Failure and Moving Forward
41:14 - Where to Find Marc Online

Follow Marc Backes on Social Media
Twitter: https://x.com/themarcba
LinkedIn: https://www.linkedin.com/in/themarcba/

Sponsored by This Dot: thisdot.co

The Laravel Podcast
The Biggest Day in Laravel History

Feb 24, 2025 · 39:50


In this episode of the Laravel Podcast, host Matt Stauffer sits down with Chris Sev, Laravel's new Director of Developer Relations, to explore his background and the evolving role of DevRel. They dive into exciting updates in the Laravel ecosystem, including the launch of Laravel Cloud, new starter kits, the VS Code extension, and the redesigned Laravel website. The discussion also covers the importance of backward compatibility, upcoming community events, and the continuous evolution and support within the Laravel community.

Links:
- Matt Stauffer's Twitter
- Chris Sev's Twitter
- Laravel Twitter
- Laravel Website
- Tighten Website
- Laravel Cloud
- Laravel Reddit
- Joe Dixon AMA on Reddit
- Suggestion Box

Editing and transcription sponsored by Tighten.

The WP Minute+
Thinking Outside of the WordPress Box

Feb 24, 2025 · 37:43 · Transcription Available


Say thanks and learn more about our podcast sponsor Omnisend.

In this episode of the WP Minute+ Podcast, Matt sits down with Tessa Kriesel, a seasoned expert in developer relations and founder of Built for Devs. Once deeply involved in WordPress, Tessa now works with developer-focused companies to help them engage technical audiences authentically. She shares insights on how companies can build trust, engage communities effectively, and think strategically beyond traditional marketing.

The conversation examines challenges freelancers and agencies faced in 2024 and what to expect moving into 2025. Tessa discusses the shift in DevRel from casual relationship-building to strategic engagement, the evolving role of AI in development, and how the tech industry is tightening budgets while demanding clear ROI. She also shares advice for WordPress product makers on pricing, sustainability, and community-building. She encourages them to challenge outdated practices like underpricing plugins and relying solely on Black Friday sales.

Key Takeaways

The Role of Developer Relations (DevRel)
- DevRel isn't just about attending WordCamps and networking; it requires strategy and delivering value.
- Companies must build authentic relationships with developers while aligning with business objectives.
- WordPress has a strong community, but other ecosystems also foster deep connections.

Challenges in 2024 & Looking Ahead to 2025
- Economic pressures are causing tech companies to scrutinize spending and demand clear ROI.
- VC funding has shifted focus from user adoption to revenue generation.
- The lack of trust in marketing and business interactions makes it harder for companies to gain traction.

WordPress & Business Growth
- Many WordPress companies still underprice their products, following outdated open-source pricing models.
- Product makers should focus on value-based pricing rather than low-cost models with limited revenue potential.
- Relying on Black Friday discounts as a primary sales strategy is shortsighted. Products should be priced for sustainability year-round.

The Role of AI in Development & Business
- AI is a powerful tool for efficiency but still requires human oversight.
- Companies investing in AI-driven solutions must balance automation with trust-building.
- Developers who integrate AI into their workflows will gain a competitive edge but won't be replaced entirely.

Important Links
- The WP Minute+ Podcast: thewpminute.com/subscribe
- Connect with Tessa Kriesel: LinkedIn: https://www.linkedin.com/in/tessak22/ | Twitter/X: https://twitter.com/TessaK22
- Learn more about Built for Devs: BuiltFor.Dev

Support us for as little as $5 to join our members-only Slack group. ★ Support this podcast ★

Community Pulse
Looking Back on 2024 and Ahead to 2025 (Ep 94)

Feb 21, 2025 · 0:27


The team takes a look back at 2024 and ahead to 2025. Enjoy the podcast? Please take a few moments to leave us a review on iTunes (https://itunes.apple.com/us/podcast/community-pulse/id1218368182?mt=2) and follow us on Spotify (https://open.spotify.com/show/3I7g5W9fMSgpWu38zZMjet?si=eb528c7de12b4d7a&nd=1&dlsi=b0c85248dabc48ce), or leave a review on one of the many other podcasting sites that we're on! Your support means a lot to us and helps us continue to produce episodes every month. Like all things Community, this too takes a village. Photo by BoliviaInteligente on Unsplash.

PodRocket - A web development podcast from LogRocket
Prisma Postgres with Nikolas Burk

Feb 20, 2025 · 28:18


Nikolas Burk, DevRel at Prisma, talks about Prisma Postgres, its unikernel architecture, and its seamless integration with cloud infrastructure. Discover how Prisma Postgres is revolutionizing database management with features like cold start elimination, real-time event handling, and advanced caching strategies!

Links:
- https://www.prisma.io/blog/announcing-prisma-postgres-early-access
- https://x.com/nikolasburk
- https://www.linkedin.com/in/nikolas-burk-1bbb7b8a
- https://github.com/nikolasburk

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com, or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free today (https://logrocket.com/signup/?pdr).

Special Guest: Nikolas Burk.

Scaling DevTools
Mitchell Hashimoto: Ghostty, libghostty & chasing the human experience

Feb 20, 2025 · 57:06 · Transcription Available


Mitchell Hashimoto - famously the founder of HashiCorp (creators of Terraform, Vault, etc.) - joins the show to discuss his latest open-source project, Ghostty, a modern terminal emulator.

We discuss:
- Designing dev tools with a focus on human experience.
- Taking on large technical projects and breaking them down into achievable steps.
- Open source sustainability and the role of financial support.
- The impossible goal of building a perfect human experience with software.
- Passion and hiring—why obsession with a topic often leads to the best hires.
- Using AI as a developer and why Mitchell considers AI tooling essential.
- The motivation behind Ghostty and the idea of "technical philanthropy."
- The vision for libghostty as a reusable terminal core for other applications.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. https://workos.com/

Links:
- Ghostty (https://ghostty.org/)
- Mitchell Hashimoto on Twitter (https://twitter.com/mitchellh)
- Mitchell's blog (https://mitchellh.com/)

ShopTalk » Podcast Feed
651: Jason Lengstorf on CodeTV.dev, DevRel Panic, and Spicy Gear

Feb 10, 2025 · 56:58


Jason joins us to talk about his rebranding to CodeTV.dev, how Chris Coyier helped him become a star, the power of free, how he makes money with CodeTV, sponsorship and tech shows, crappy web cams, and the gear he uses to look and sound amazing.

Guest: Jason Lengstorf. Jason Lengstorf is the producer of CodeTV.dev, where he helps tech companies connect with developer communities through better devrel strategy and media.

Links: tv for developers — CodeTV The Best React-Based Framework | Gatsby Scale & Ship Faster with a Composable Web Architecture | Netlify The Great British Bake Off Web Development Challenge Leet Heat Pilot TV for Developers Dropout Comedy Nebula Universe Sunny Nihilist Declaration Philosophize This! Episodes BenQ RD280UA Monitor iPhone Webcam for Mac Webcam Comparison Sony FX3 Camera ATEM Mini

Sponsors: BenQ (not really, but you should call us Mr or Ms BenQ!)

Scaling DevTools
Jacob Eiting - CEO of RevenueCat: Extreme dogfooding

Feb 7, 2025 · 52:57 · Transcription Available


Jacob Eiting, CEO of RevenueCat, joins us to discuss mobile developers and how they're different, RevenueCat's recent acquisition of Dipsea, and how it helps them dogfood. We also go hard on content, something RevenueCat is great at, and talk about charisma in founders (but don't worry, neither of us said rizz). This was especially fun because I actually used RevenueCat way before I started this show.

We discuss:
- How RevenueCat simplifies in-app subscriptions and why mobile monetization is more complex than it appears.
- Making developers feel like heroes instead of struggling with tedious implementation.
- RevenueCat's acquisition of Dipsea—a customer with over 100,000 subscribers—and how it benefits both companies.
- The advantages of operating an app at scale to better test and iterate on new RevenueCat features.
- How in-app subscription businesses differ from traditional SaaS in terms of pricing, churn, and optimization.
- The importance of content marketing and transparency in building trust with developers.
- The role of personality and authenticity in developer-first marketing.
- The long-term vision for RevenueCat and how they plan to expand beyond their core subscription infrastructure.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. https://workos.com/

Links:
- Jacob Eiting (https://x.com/jeiting)
- RevenueCat (https://www.revenuecat.com/)
- Dipsea (https://www.dipseastories.com/)

Open at Intel
Managing Kubernetes with Komodor

Jan 30, 2025 · 21:30


In this episode, we speak with Udi Hofesh and Itiel Shwartz from Komodor about their roles and the mission of their company. Komodor aims to simplify Kubernetes at scale by providing tools for managing, troubleshooting, and optimizing Kubernetes clusters. They discuss the unique features of Komodor, including their approach to using AI to address Kubernetes issues and their involvement in open source projects like Helm Dashboard. The conversation also touches upon the new native integration for managing Kubernetes add-ons and the future direction of the company.

00:00 Introduction and Guest Introduction
00:27 What is Komodor?
00:59 Challenges in Kubernetes
01:32 Komodor's Unique Solutions
02:27 Target Audience and Developer Relations
06:56 Open Source Contributions
14:09 AI Integration in Komodor
18:47 New Features and Future Plans

Guests:
Itiel Shwartz, CTO and Co-founder, Komodor
Udi Hofesh, DevRel, Komodor

Scaling DevTools
Taylor Otwell - founder of Laravel

Jan 30, 2025 · 38:35 · Transcription Available


Taylor Otwell is the creator of the Laravel framework. Taylor has created numerous paid products that have generated millions, such as:
- Laravel Forge (server provisioning/management)
- Laravel Vapor (serverless Laravel hosting with AWS)
- Laravel Envoyer (zero downtime PHP deployments)
- Laravel Nova (Laravel admin panel)

In this interview, Taylor shares why he is now building Laravel Cloud - an infrastructure platform for Laravel apps - and why Laravel Cloud needed VC funding.

We also cover:
- The different challenges of bootstrapped and VC funded startups
- How the Laravel ecosystem became so entrepreneurial
- Building products for the average Joe developer
- The role of taste and craft in developer tools
- What Taylor and Adam Wathan learned from each other
- Fear and Taylor's comparison with Alex Honnold

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs.

Links: Laravel, Taylor Otwell, Laravel Cloud, Open jobs at Laravel, Adam Wathan

Chapters:
00:00 The Journey of Laravel's Creator
02:48 Transitioning from Bootstrap to VC Funding
06:10 Building Laravel Cloud: A New Challenge
09:04 The Shift in Company Structure and Culture
11:50 Maintaining Quality and Usability in Development
15:09 Community Impact and Collaboration
17:56 Craftsmanship and Design Philosophy
20:45 Navigating Growth and Market Needs
23:54 Advice for Aspiring DevTool Founders
26:48 Future Directions and Innovations in Laravel

Thank you to Michael Grinich for making this happen. Thank you to Ostap Brehin for introducing me to Laravel. Thank you to Hank Taylor for helping me prep.

Scaling DevTools
Four tips for early stage DevTools

Jan 23, 2025 · 19:34 · Transcription Available


In this episode, I pull out some of the key DevTools lessons I've learned in the last 120 interviews, including:
- The importance of deeply understanding the problem you're solving by talking to developers directly, as emphasized by Adam Frankl.
- Ant Wilson's advice on experimenting with different go-to-market strategies and channels rather than relying on conventional wisdom.
- Zeno Rocha's emphasis on the importance of the last mile—packaging and presentation. He shares how spending more time on documentation and onboarding materials helped his open-source project gain massive traction.
- Gonto's perspective that "it's better to be different than better," and how creativity, uniqueness, and understanding developer habits are key to successful marketing.
- My personal reflections on overcoming fear and discomfort in go-to-market efforts.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. https://workos.com.

Scaling DevTools
Søren Bramer Schmidt - founder & CEO of Prisma

Jan 16, 2025 · 45:50 · Transcription Available


Søren Bramer Schmidt, co-founder and CEO of Prisma, joins us to discuss the journey of building one of the largest developer communities in DevTools. Søren shares how Prisma's deliberate strategies have shaped its growth, feature prioritization, and the launch of new products like Prisma Postgres. We also explore the challenges of managing a vast user base and how Prisma is adapting to shifts in application development.

We discuss:
- How intentional partnerships with educators and influencers fueled Prisma's early growth.
- Strategies to engage the GraphQL community and gain visibility on platforms like Hacker News.
- Managing a large developer community while balancing innovation with stability.
- The evolution from Graphcool to Prisma ORM, including lessons from early pivots.
- Launching Prisma Postgres and how community feedback influenced its development.
- Implementing a simple, usage-based pricing model and reducing infrastructure costs through self-hosting.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. https://workos.com/

Links:
- Prisma (https://www.prisma.io/)
- Prisma Postgres (https://www.prisma.io/postgres)
- Feldera (https://feldera.com/)

The Data Stack Show
224: Bridging Gaps: DevRel, Marketing Synergies, and the Future of Data with Pedram Navid of Dagster Labs

Jan 15, 2025 · 53:24


Highlights from this week's conversation include:
- Pedram's Background and Journey in Data (0:47)
- Joining Dagster Labs (1:41)
- Synergies Between Teams (2:56)
- Developer Marketing Preferences (6:06)
- Bridging Technical Gaps (9:54)
- Understanding Data Orchestration (11:05)
- Dagster's Unique Features (16:07)
- The Future of Orchestration (18:09)
- Freeing Up Team Resources (20:30)
- Market Readiness of the Modern Data Stack (22:20)
- Career Journey into DevRel and Marketing (26:09)
- Understanding Technical Audiences (29:33)
- Building Trust Through Open Source (31:36)
- Understanding Vendor Lock-In (34:40)
- AI and Data Orchestration (36:11)
- Modern Data Stack Evolution (39:09)
- The Cost of AI Services (41:58)
- Differentiation Through Integration (44:13)
- Language and Frameworks in Orchestration (49:45)
- Future of Orchestration and Closing Thoughts (51:54)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.

RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack, visit rudderstack.com.

Scaling DevTools
The future of DevRel, with "Danger" Keith Casey

Jan 9, 2025 · 54:00 · Transcription Available


Keith Casey, aka "Danger" Casey, is a Senior Product Manager at Pangea, a Security Platform as a Service. Before Pangea, Keith was Director of Product Marketing at ngrok and worked at Okta and Twilio in a variety of roles, including DevRel. Keith also curates API Developer Weekly.

In this episode we discuss Keith's writings on the future of DevRel.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs.

Links:
- Original article
- Followup article
- How to kill your SDKs in one easy step
- Developer productivity and selling to developers
- API Developer Weekly
- Pangea
- DevRel = ZIRP phenomenon?

Community Pulse
The DevRel Foundation (Ep 93)

Jan 3, 2025 · 22:40


In this episode, Wesley, PJ, and Jason take the opportunity to talk about a new phenomenon - The DevRel Foundation from the Linux Foundation. Learn how folks have gotten involved, what the Foundation intends to do, and how you can share your voice.

Topics Discussed:
- Introduction to the DevRel Foundation: The episode explores the new DevRel Foundation, an initiative under the Linux Foundation, created to address challenges in Developer Relations (DevRel). Wesley Faulkner introduces the foundation, noting that its purpose is to be a nonpartisan hub for discussions about DevRel and to provide resources for defining the profession and its practices.
- Foundational Goals: The DevRel Foundation aims to address key challenges within DevRel, including defining the role, measuring its impact, and rolling out successful DevRel programs. It seeks to aggregate existing knowledge and create a space for new insights. Wesley discusses his role in the steering committee and mentions the ongoing process of recruiting champions for various topics within DevRel to drive these discussions forward.
- Open Participation and Community Engagement: The foundation is described as a participative effort, where everyone from managers to community members can contribute. This is highlighted as an important distinction from more passive feedback mechanisms (like town halls). Wesley outlines the process, emphasizing that the foundation is open to diverse perspectives, and all contributions will be available for collaboration through platforms like GitHub and Discord.
- Challenges of Defining DevRel: A major challenge discussed is the diversity of how DevRel is implemented across different organizations (e.g., startups, enterprises, nonprofits). Wesley talks about the need for an inclusive approach that doesn't exclude any perspectives while ensuring practical outcomes. Jason Hand asks about how the foundation plans to handle these varied implementations, suggesting that a "one-size-fits-all" approach may not work.
- The Role of the Linux Foundation: The Linux Foundation's role is explained as crucial in providing structure, governance, and logistical support for the foundation. Its history of supporting open-source projects and fostering community-driven initiatives is seen as a key advantage.
- Real-World Impact and Job Descriptions: Jason Hand discusses the problem of inconsistent DevRel job descriptions in the industry, which often blur the lines between roles like developer advocate, customer success, and sales engineering. The foundation's work could help standardize expectations for DevRel roles across organizations. The episode touches on how a clearer definition of DevRel could assist job seekers and hiring managers in aligning roles more effectively.
- Future of the DevRel Foundation: The foundation is still in its early stages, and Wesley emphasizes that while there's hope for the project, it will take time to make significant progress. They encourage participation in calls, Discord, and GitHub to stay updated and contribute.

Key Takeaways:
- The DevRel Foundation seeks to unify and provide structure to the diverse, evolving field of Developer Relations.
- Inclusive participation is at the core of the foundation's mission, aiming to gather input from all sectors of the community.
- The foundation is driven by volunteer work and community passion, with the support of the Linux Foundation's structure and resources.
- GitHub and Discord are key platforms for collaboration, ensuring that community voices are heard and that contributions are open for review and iteration.
- The foundation's work will eventually help provide clarity in DevRel role definitions, benefiting both organizations and professionals in the field.
Action Items:
- Join the DevRel Foundation: Individuals can join calls, participate in discussions, or contribute to the work via GitHub and Discord.
- Become a Champion: The foundation is actively seeking managers to lead specific topics within DevRel.
- Stay Informed: Engage with the monthly updates and open calls to follow the foundation's progress.

Key Words and Themes: DevRel Foundation, Developer Relations (DevRel), Linux Foundation, Open Participation, Inclusive Governance, Community-Driven Initiatives, Job Descriptions in DevRel, GitHub and Discord Collaboration, Nonprofit Organization, Volunteer-Driven

Transcript

[00:00:00] PJ Haggerty: Hey everybody. And welcome to another episode of Community Pulse. We're super excited to have you. [00:00:04] This week we decided we would take a look at a new phenomenon, the DevRel Foundation, the Developer Relations Foundation from our friends at the Linux Foundation. [00:00:12] Some of you are probably already aware of it. Some of you are probably in the Discord chat. Some people might not know about it at all. So we want to take this opportunity to share some information about it and see what we could find out and how we felt about it. So with that, I am joined by, of course, Jason Hand and Wesley Faulkner. Wesley, you've been doing a lot of work with the DevRel Foundation as far as like looking at working models and how people can actually get things done within the foundation. [00:00:37] So do you want to kick us off and give us a description of what's going on?

[00:00:41] Wesley Faulkner: Yes. Let me lay a little bit of the groundwork to understand my involvement and how. So I'm part of the steering committee. There's five of us in total. And I am the newest member of that five-person steering committee. [00:00:55] I've been part of the DevRel Foundation since June of this year.
[00:01:00] And the foundations, the start of it had, I think, started way before that, even before the beginning of the year. And the involvement with the Linux Foundation happens, I think, in around the February timeframe. And so the thought is that there are certain types of challenges that are unique to people in DevRel. Defining what we do is one of them that I think is something that people are familiar with, but others have been lingering around, like how do you measure DevRel adequately, how do you plan for the future, and how do you roll out a developer relations program?

[00:01:35] Wesley Faulkner: Those are like the broad strokes of it. So the thought of the DevRel Foundation is to be a nonpartisan home for these types of discussions. And we are currently set up as the steering committee, as people who are trying to facilitate those conversations, give structure and a timeline to these conversations, and [00:02:00] be a home for people to find this information once we have it all created, and to be a repository for a lot of existing knowledge, but also allow the connective tissue to create new knowledge that is not there right now.

[00:02:16] Wesley Faulkner: So that's like the whole arc of it. Depending on when you're listening to this podcast, we are currently enrolling people to take on and champion these specific areas of topics. Here are the lists that we've aggregated from the community of the challenges. [00:02:33] And we're looking for managers to say, I want to champion that and run it to ground to make sure that we actually have things defined to help us all as DevRel practitioners.
[00:02:43] PJ Haggerty: And I want to zero in, because I think that some people - I was in the initial meeting kickoff thing that happened back in June - and there was a concern, and it was, oh, this is a town hall, not really a feedback thing, but more of a town hall where we'll come and tell you what we think is [00:03:00] good and you can come and tell us if you don't think it's good. [00:03:03] But what it really is is a participative activity. Not everybody wants to, and that's okay. But the idea is really, let's put together a compendium of knowledge about what we do, and put that so that when people reference it, they can easily say, this is the way it works. [00:03:22] It's a constantly moving organic body. It's similar to software. There is nothing done on this. Do you think that's accurate?

[00:03:31] Wesley Faulkner: Yeah, I think that initially, I was on that initial feedback preview call as well. And that session, I think, raised a lot of awareness about how developed the thought was of where things were going to go and how open to input [00:03:47] the foundation was to the community, and letting the community shape the direction and the focus of the foundation. And I think, to its credit, the foundation has taken a lot of that to heart. [00:04:00] And I think that's when I joined, actually, because of that call or after that call. A lot of the work that I've done, at least on the initial side, was finding a way to make sure that the community's voice is heard.

[00:04:12] Wesley Faulkner: And then once we get all of this feedback, how do we actually act on it? Because it feels like, if you think about the possibilities of developer relations, there's just so much out there. How do we choose which ones we're going to help move forward? And I devised, or helped with, the rest of the people in the steering committee and other feedback.
[00:04:31] Wesley Faulkner: From people like you, PJ, about how we address the needs of the community in a way that doesn't feel exclusionary.

[00:04:39] PJ Haggerty: Think exclusionary is the word you're looking for. Yeah.

[00:04:40] Wesley Faulkner: And also, how do we actually be productive to actually move forward, instead of having constant discussions all the time, and where do we actually make sure that it was the right time to take action?

[00:04:52] Jason Hand: Wesley, I got a question. I feel like a lot of our episodes, we generally take a stance on [00:05:00] when it comes to implementing certain things that it just depends on the situation of the organization, the team, the objectives of the org that they're in. There's always just, like, so many dependencies and variables that go into an implementation of things to take a stance on how certain aspects or certain elements of developer relations has found success. [00:05:23] I'm wondering if there's plans or if there's been any discussion on including lots of different implementation scenarios rather than trying to be one single source of truth, because I feel like that's probably going to be some pushback and going to be some feedback that maybe we hear from this type of organization or foundation, of what goals do we have about putting into concrete terms what [00:05:48] developer relations is or isn't, when we know that there's just so many ways to do it. Startups are going to do it one way, enterprise is going to do it a different way, one part of the world's going to do it in one way [00:06:00] versus others. So anyway, just curious what your thoughts are on that.
[00:06:14] Wesley Faulkner: Then there is developer first, and then there's developer plus then you mentioned different languages, but there's also different geos and there's also different access to technologies, like parts of the developing world where steady connected electricity and internet is not something that's. [00:06:31] Wesley Faulkner: So there's many different facets. So the answer is, we are trying to be as inclusive as possible by making sure that people have the opportunity to put forth their specific concern. At the same time, we are requiring that as groups are formed around these topics, that there are at least three managers. [00:06:56] Wesley Faulkner: To each of these topics to make sure that there's not [00:07:00] one perspective that's running the show. And then each of these topics, the managers need to recruit at least eight participants. This is to increase the diversity and the different ways that people see things and to make sure that these edge cases or main cases are incorporated into the final result. [00:07:20] Wesley Faulkner: And last, but not least, this is supposed to be an iterative process. So whatever the group Creates, it will be posted to GitHub and you can, and everyone and anyone can put in pull requests so that their voices are heard and their perspectives are also taken into account. [00:07:39] PJ Haggerty: And you're saying all this and for those of you who are listening to the audio and saying, wow, Wesley really has this down. [00:07:44] PJ Haggerty: Wesley has very much structured this and put it into a GitHub document for people to interact with and understand. And I think this that allayed a lot of my concerns when this first came up, because I was like, is this an exercise in student government where the most popular kids [00:08:00] will be voted into their positions of power. [00:08:01] PJ Haggerty: And everyone else will just sit by the wayside with no voice. 
And Wesley was very careful to design a way in which that wasn't the case. I think one of the things that I liked the most about the structure of this, and we'll add the link to the GitHub in the show notes, was that anyone who is a manager is a manager for only a certain period of time. [00:08:24] PJ Haggerty: This isn't a situation where you are, to use the term they often use in open source projects, a benevolent dictator for life. That's your Linuses and your David Heinemeier Hanssons. It's great that you create this thing. [00:08:37] PJ Haggerty: Please let other people, as it evolves, take it over. And that's baked into the design. And I feel like we're laying a lot on Wesley here. And I think that there are varying differences between what even the people on this podcast are doing as far as level of participation. [00:08:51] PJ Haggerty: Like, I'm a passive participant. I've been watching what's going on, participating in the Discord, [00:09:00] talking to some people about some things, but I'm not a manager. Wesley's a part of the steering committee. Mary, who was at some of those initial meetings, is taking a step back due to some busy work-related things. [00:09:07] PJ Haggerty: And Jason, are you in the collective? Are you in the discussion, or are you just an external passive observer at this point in time? [00:09:16] Jason Hand: Definitely a passive observer. I think, just through knowing Wesley and the conversations we have here and there, I may be a little closer than others in terms of when I started hearing about it. [00:09:27] Jason Hand: But yeah, at this point I'm not involved, other than, like I said, the conversations I've had with Wesley. But definitely curious to learn more about what's going on with it. And quite honestly, I don't have a lot of depth of knowledge around the Linux Foundation, or foundations in general. 
[00:09:45] Jason Hand: And I don't know, Wesley, if that's something you can dig a little deeper into: what would somebody who has no knowledge of what the Linux Foundation is, and any of the offshoots of that, want to know? What are the core benefits? [00:09:57] Wesley Faulkner: Something that I have to [00:10:00] say about the Linux Foundation in general is that the foundation is an umbrella of other open source projects. So Linux itself is a Linux Foundation project. Git is a Linux Foundation project. And there are several others. Valkey is also big and new, and it was just launched at the Open Source Summit [00:10:21] in September. [00:10:23] PJ Haggerty: Don't forget about that dang Kubernetes that people keep talking about. The kids are all under the coop. Yep. [00:10:28] Wesley Faulkner: Yep. The CNCF is under the Linux Foundation. Those projects that you know and love have come under that same umbrella. [00:10:36] Wesley Faulkner: But I have to say the DevRel Foundation is different than any of those and all of the other projects, because this feels more like a governance body, or like a list of documents, and not necessarily focused on code and making a product from that standpoint, which I think is a little bit different. [00:10:58] Wesley Faulkner: And the question is, [00:11:00] why the Linux Foundation? We have a lot of these questions addressed in our FAQ. But for my take, we wanted a place and a home that was nonpartisan, meaning it's not owned by a company or someone with specific interests; one that has a history of supporting software and open source processes; and one that makes sure it's community-driven, where the way that we come to decisions is open to the community and the community can participate. [00:11:32] Wesley Faulkner: I can't think of any other that checks all of the boxes. So it's part of the Linux Foundation because it is one that already has a reputation. 
They are giving us resources and supporting us from a process standpoint. And it allows us to have access to other projects and maintainers and people who've been doing this way longer than we have. [00:11:55] Wesley Faulkner: And so being under that umbrella also gives us that connection to [00:12:00] the siblings who are also in the project. But also, just to make sure that it is noted, we are an unfunded project under the Linux Foundation. So we are not trying to make money. No one's giving us money. [00:12:14] Wesley Faulkner: Right now it's all community and volunteer work that's in the formation of this foundation. So it's our passions that are driving it. So if there are better suggestions, we are open to hearing them. But right now the Linux Foundation sounds like a really good choice, and they've been an excellent partner for us. [00:12:36] Wesley Faulkner: Without her support and her guidance, and her doing the intros and a lot of the heavy lifting, I think we wouldn't have gotten as far as we have right now. [00:12:47] PJ Haggerty: I think it's interesting you mention that, because I know that, organically, I had been talking for a couple of years with people. Wesley, you and I had a conversation, I think now two and a half years ago, about putting together some sort of [00:13:00] governance document, some sort of something to say: this is DevRel. [00:13:05] PJ Haggerty: This is the way it works. This is giving some sort of guideline to what this all means. I think that some people might raise their eyebrows at the Linux Foundation, like, what's going on here? At the same time, I think, without that logistical support, if not the organizational support, this may never have come off, because so many people were working in so many small working groups but not really getting anywhere, because they couldn't figure out that logistical component: how do we do this and not exclude people? 
[00:13:32] PJ Haggerty: How do we do this and ensure that we have the good mindshare, and the diverse mindshare, that we need to actually share this information? These are questions that, luckily, the Linux Foundation has answered before, and therefore they can answer them for this. [00:13:49] Wesley Faulkner: Yeah. I've got to say that there's been a lot of reaction to the Linux Foundation, [00:13:52] Wesley Faulkner: and even just the DevRel Foundation. Let's just take it from there. One reaction is asking, why do we need this? That's one of the pieces of feedback we've gotten. The [00:14:00] other is: this is amazing, I'm so excited. And then, I think, what Jason also said: I'm going to wait and see. Will this have legs? [00:14:11] Wesley Faulkner: Will this keep going? Will this actually produce anything? Will this make a change? And when we were working on our little project back then, Jason and PJ, some of the conversations were just like, why are we the two people, or why are we the ones to hold this torch? And I think the Linux Foundation answers some of those questions in terms of, are we a trusted organization, or who legitimizes us as a voice? [00:14:43] Jason Hand: So one more thing I wanted to touch on, because I do see a lot of benefits that can come, and clearly there are great examples from the Linux Foundation of success and how this kind of community effort [00:15:00] can come together and really help in a lot of ways. But a concrete way that I think really stands out to me, that could help a lot of those folks who are either new to developer relations or to community in general, or maybe out on the market looking for new roles, is that we do hear so much variety in terms of what DevRel can look like. [00:15:15] Jason Hand: And you see it on new job postings, where one company is looking for someone 
with a title like developer relations professional, or some variation of that, but then, looking through the description, it includes roles and responsibilities that have traditionally not aligned with developer relations. [00:15:32] Jason Hand: Oftentimes there's just so much variance in terms of what DevRel roles could look like, but this might actually help narrow that a little bit, and make it easier both for those who are looking to fill roles and those who are looking to find new roles, so we're all speaking the same language on what the expectations are here. [00:15:51] PJ Haggerty: Yeah. There's that centralization concept of, maybe if we can define and say, this is what DevRel looks like, then [00:16:00] maybe the hiring managers and the people at LinkedIn and Indeed and what have you (is Monster.com still a thing? I don't think Monster.com is still a thing), [00:16:07] PJ Haggerty: maybe the people who are in charge of all of this hiring can finally have a good definition to understand that maybe you're not looking for a developer advocate or a developer relations specialist; maybe you're actually looking for someone in marketing. [00:16:24] PJ Haggerty: Maybe you're actually looking for a sales engineer, who's technically minded but is there to speak to and onboard clients. Maybe you're even looking for customer success. Because, like you said, Jason, I've looked at a lot of these job descriptions, especially over the year that I was unemployed. [00:16:39] PJ Haggerty: And a lot of these people do not understand that the questions they're asking, or the positions they're describing, are not developer relations positions, but those are the buzzwords, so let's go with what we've got. 
[00:16:52] Wesley Faulkner: And also, to be frank, these questions have been answered, and probably answered multiple [00:17:00] times by different people, and everyone who's been in DevRel for a very long time can read these and say, that's actually valid. [00:17:09] Wesley Faulkner: Someone who's brand new may not have that ability to distinguish what actually makes sense. I think the DevRel Foundation will help those new people by doing some of that work for them: [00:17:21] Wesley Faulkner: not necessarily creating all this new documentation and resources, but aggregating some of the really good, high-quality work that is already out there, bringing it into the fold, and allowing people to use us as a central point to jump off and find these other resources. [00:17:38] PJ Haggerty: Yeah, that's awesome. And I'm looking forward to seeing what comes out of it. People should not have an expectation, and let's set some boundaries here, that come January first the DevRel Foundation is about to drop the hottest mixtape you've ever heard about DevRel. [00:17:54] PJ Haggerty: These things are going to take time. Yes, we have hope, but hope takes work. [00:17:59] Wesley Faulkner: [00:18:00] And one of the things that we're asking, or requiring, of all these groups that form is that they give at least a monthly update on one of the open calls and open meetings that we do every week. [00:18:10] Wesley Faulkner: If you want to stay abreast of the progress, take a look at our GitHub and see the process we're working on and fostering. And if you have input, jump into one of these calls and talk to the people who are championing these topics directly. [00:18:26] PJ Haggerty: Or, at the very least, jump in the Discord and see what the conversation is. [00:18:29] PJ Haggerty: Yep. 
I think there's a lot of good conversation going on over there as well. And with that, thank you for giving us space to talk about this. Enjoy the podcast? Please take a few moments to leave us a review on iTunes (https://itunes.apple.com/us/podcast/community-pulse/id1218368182?mt=2) and follow us on Spotify (https://open.spotify.com/show/3I7g5W9fMSgpWu38zZMjet?si=eb528c7de12b4d7a&nd=1&dlsi=b0c85248dabc48ce), or leave a review on one of the other many podcasting sites that we're on! Your support means a lot to us and helps us continue to produce episodes every month. Like all things Community, this too takes a village. Artwork photo by Ramin Khatibi on Unsplash.

Scaling DevTools
Louis Knight-Webb from Bloop.ai - the YC startup turning COBOL into Java

Scaling DevTools

Play Episode Listen Later Jan 2, 2025 46:17 Transcription Available


Louis Knight-Webb is the CEO and co-founder of Bloop. Bloop helps with modernizing legacy software, particularly focusing on COBOL and mainframes. This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. Takeaways:- Mainframes and COBOL are still foundational in many industries.- Bloop started with a focus on code search but evolved to address legacy code modernization.- The transition from COBOL to Java is a significant challenge for many enterprises.- Innovative approaches are needed to effectively translate legacy code.- Ensuring code quality during migration is crucial to avoid operational disruptions.- AI can enhance the code translation process but has limitations with legacy languages.Links:- Louis Knight-Webb - Bloop  Chapters:00:00 The Legacy of Mainframes and COBOL03:05 The Evolution of Bloop and Code Search05:58 Challenges in Modernizing Legacy Code08:48 Navigating the Enterprise Code Landscape12:11 The Transition from COBOL to Java15:05 Innovative Approaches to Code Translation18:02 Ensuring Code Quality and Functionality20:56 The Future of Development and AI Integration23:52 Building Relationships in the Enterprise Space26:45 The Long-Term Vision for Legacy Code Modernization

Convergence
Best of 2024: Top Insights on Developer Tools, APIs, SDKs, and Creating Exceptional DevX

Convergence

Play Episode Listen Later Dec 31, 2024 45:38


We compiled our favorite clips on developer tools and developer experience (DevX). We discuss why DevX has become essential for developer-focused companies and how it drives adoption to grow your product. Learn what makes developers a unique and discerning customer base, and hear practical strategies for designing exceptional tools and platforms. Our guests also share lessons learned from their own experiences—whether in creating frictionless integrations, maintaining a strong feedback culture, or enabling internal platform adoption. Through compelling stories and actionable advice, this episode is packed with lessons on how to build products that developers love. Playlist of Full Episodes from This Compilation: https://www.youtube.com/playlist?list=PL31JETR9AR0FV-46VR4G_n6xi4WdXEx-2 Inside the episode... The importance of developer experience and why it's a priority for developer-facing companies. Key differences between building developer tools and end-user applications. How DevX differs from DevRel and the synergy between the two. Metrics for measuring the success of developer tools: adoption, satisfaction, and revenue. Insights into abstraction ladders and balancing complexity and power. Customer research strategies for validating assumptions and prioritizing features. Stripe's culture of craftsmanship and creating “surprisingly great” experiences. The importance of dogfooding and feedback loops in building trusted platforms. Balancing enablement and avoiding gatekeeping in internal platform adoption. Maintaining consistency and quality across APIs, CLIs, and other resources. Mentioned in this episode Stripe Doppler Heroku Abstraction ladders Developer feedback loops Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. 
Subscribe to the Convergence podcast wherever you get podcasts including video episodes to get updated on the other crucial conversations that we'll post on YouTube at youtube.com/@convergencefmpodcast Learn something? Give us a 5 star review and like the podcast on YouTube. It's how we grow.   Follow the Pod Linkedin: https://www.linkedin.com/company/convergence-podcast/ X: https://twitter.com/podconvergence Instagram: @podconvergence

Scaling DevTools
Guy Podjarny, Snyk and Tessl founder - The future of programming

Scaling DevTools

Play Episode Listen Later Dec 23, 2024 44:43 Transcription Available


Guy Podjarny is the founder of Tessl - a startup that is rethinking how we build software. Guy previously founded Snyk - a dependency scanning tool worth billions of dollars. Before Snyk, Guy founded Blaze, which he sold to Akamai. This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs. In this conversation, we talk about the future of programming and the future of DevTools. Takeaways:- The future of programming will focus on writing specifications.- Trust in AI tools.- Snyk is an example of how tools can integrate into existing workflows.- Code can become disposable, allowing for flexibility in development.- Specifications will serve as repositories of truth in software development.- Developers will need to adapt their skills to leverage AI tools effectively.- Community collaboration is essential for the evolution of AI development tools.- AI simplifies and democratizes the process of software creation. Thanks to Anna Debenham for making this happen. 

Community Pulse
DevRel Hiring is Broken (Ep 92)

Community Pulse

Play Episode Listen Later Dec 12, 2024 39:04


It comes as no surprise that something in the hiring of Developer Relations practitioners has become a mystery box of confusion. No standard path to follow, interviews ranging all over the map, homework assignments that go nowhere, and most conversations leading to few actual opportunities. Wesley, Jason, and PJ share their thoughts on what's happening and whether or not there is hope for the future. Artwork photo by CHUTTERSNAP on Unsplash.

COMPRESSEDfm
188 | How Video Tap Leverages OpenAI for Content Creation

COMPRESSEDfm

Play Episode Listen Later Dec 4, 2024 46:36


In this episode, Chris Sev discusses building SaaS projects with Laravel and AI, detailing his journey from launching Scotch.io to creating VideoTap. Dive into the innovative workflows for automating video marketing content, learn why Laravel remains his go-to framework, and explore insights on the future of AI in development. Show Notes: 00:00:00 - Intro 00:00:33 - Guest Introduction: Chris Sev, DevRel at Sourcegraph 00:01:08 - Chris's Background and Journey 00:02:28 - Tech Stack Behind VideoTap 00:02:55 - Story of Getting the VideoTap.com Domain 00:05:20 - VideoTap's AI Implementation and Process 00:14:20 - How VideoTap Uses AI for Content Generation 00:17:06 - Prompt Engineering Tips and Techniques 00:21:17 - AI Content Generation Pipeline and Error Handling 00:22:27 - Handling Large Videos and Context Windows 00:23:44 - Experimenting with Different AI Models 00:24:23 - AI Writing Style and Evaluation Techniques 00:27:44 - Current State of VideoTap: Team and Revenue 00:30:39 - Future Goals: Integrations and Features 00:35:27 - Chris's Work at Sourcegraph and Mission 00:38:20 - Picks and Plugs. Amy: Pick: Polar Habits; Plug: Broken Comb Newsletter. Chris: Pick: Phind.com and Perplexity.ai; Plug: TwinPicks.ai, Richest You Substack. James: Pick: Kroser TSA Travel Laptop Backpack; Plug: Newsletter. Links: VideoTap, Anthropic Prompt Engineering Guide, Creator Hooks Newsletter, Thumbnail Test, Cody AI Coding Assistant 

All JavaScript Podcasts by Devchat.tv
TypeScript Success: Integration, Type Checking, and Generics - JsJ 660

All JavaScript Podcasts by Devchat.tv

Play Episode Listen Later Dec 3, 2024 80:36


In this episode, Charles sits down with TypeScript expert Matt Pocock to dive deep into the world of TypeScript migration, learning curves, and developer challenges. They explore why having a TypeScript "wizard" is crucial for teams transitioning from JavaScript and how TypeScript's integration with development environments like Visual Studio Code has been a game changer.Charles and Matt discuss the importance of real-time typechecking, the community's role in TypeScript's success, and practical strategies for migrating large codebases to TypeScript. You'll hear about Matt's journey from drama school to becoming a DevRel expert, his contributions to the XState library, and his philosophy of type-driven development. Together, they highlight TypeScript's advantages, such as enhanced code reliability and the nuanced benefits of explicit vs. inferred types.Whether you're a seasoned developer or just starting with TypeScript, this episode offers valuable insights and actionable advice to help you harness the full power of static typing in your projects. Tune in for a fascinating discussion that underscores the value of "boring" code, the need for continual learning, and the ongoing evolution of software development practices. Stay with us as we unravel the intricacies of TypeScript and share practical tips to elevate your coding journey.SocialsLinkedIn: Matt PocockBecome a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We have a full slate of upcoming events: AI Engineer London, AWS Re:Invent in Las Vegas, and now Latent Space LIVE! at NeurIPS in Vancouver and online. Sign up to join and speak! We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show! We try to stay close to the inference providers as part of our coverage, as our podcasts with Together AI and Replicate will attest. However, one of the most notable pull quotes from our very well received Braintrust episode was the guest's opinion that open source model adoption has NOT gone very well and is actually declining in relative market share terms (it is of course increasing in absolute terms). Today's guest, Lin Qiao, would wholly disagree. Her team of PyTorch/GPU experts is wholly dedicated to helping you serve and finetune the full stack of open source models from Meta and others, across all modalities (text, audio, image, embedding, vision-understanding), helping customers like Cursor and HubSpot scale up open source model inference both rapidly and affordably. Fireworks has emerged, after its successive funding rounds with top-tier VCs, as one of the leaders of the Compound AI movement, a term first coined by the Databricks/Mosaic gang at Berkeley AI and adapted as “Composite AI” by Gartner. Replicating o1: We are the first podcast to discuss Fireworks' f1, their proprietary replication of OpenAI's o1. 
This has become a surprisingly hot area of competition in the past week, as both Nous Forge and DeepSeek r1 have launched competitive models. Full Video Podcast: like and subscribe! Timestamps: * 00:00:00 Introductions * 00:02:08 Pre-history of Fireworks and PyTorch at Meta * 00:09:49 Product Strategy: From Framework to Model Library * 00:13:01 Compound AI Concept and Industry Dynamics * 00:20:07 Fireworks' Distributed Inference Engine * 00:22:58 OSS Model Support and Competitive Strategy * 00:29:46 Declarative System Approach in AI * 00:31:00 Can OSS replicate o1? * 00:36:51 Fireworks f1 * 00:41:03 Collaboration with Cursor and Speculative Decoding * 00:46:44 Fireworks quantization (and drama around it) * 00:49:38 Pricing Strategy * 00:51:51 Underrated Features of Fireworks Platform * 00:55:17 Hiring. Transcript: Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. Swyx [00:00:11]: Hey, and today we're in a very special studio inside the Fireworks office with Lin Qiao, CEO of Fireworks. Welcome. Yeah. Lin [00:00:20]: Oh, you should welcome us. Swyx [00:00:21]: Yeah, welcome. Yeah, thanks for having us. It's unusual to be in the home of a startup, but it's also, I think, our relationship is a bit unusual compared to all our normal guests. Definitely. Lin [00:00:34]: Yeah. I'm super excited to talk about very interesting topics in that space with both of you. Swyx [00:00:41]: You just celebrated your two-year anniversary yesterday. Lin [00:00:43]: Yeah, it's quite a crazy journey. We circle around and share all the crazy stories across these two years, and it has been super fun. All the way from when we experienced the Silicon Valley Bank run to when we deleted some data that shouldn't be deleted operationally. 
We went through a massive scale-up where we actually were busy getting capacity. Yeah, we learned to kind of work with it as a team, with a lot of brilliant people across different places joining the company. It has really been a fun journey. Alessio [00:01:24]: When you started, did you think the technical stuff would be harder, or the bank run and then the people side? I think there are a lot of amazing researchers that want to do companies, and it's like the hardest thing is going to be building the product, and then you have all these different other things. So, what has surprised you the most in your experience? Lin [00:01:42]: Yeah, to be honest with you, my focus has always been on the product side, and then after that, the product going to market. And I didn't realize the rest would be so complicated, operating a company and so on. But because I don't think about it, I just kind of manage it. So it's done. I think I just somehow don't think about it too much, and solve whatever problem comes our way, and it worked. Swyx [00:02:08]: So let's, I guess, start at the pre-history, the initial history of Fireworks. You ran the PyTorch team at Meta for a number of years, and we previously had Soumith Chintala on, and I think we were just all very interested in the history of GenAI. Maybe not that many people know how deeply involved FAIR and Meta were prior to the current GenAI revolution. Lin [00:02:35]: My background is deep in distributed systems and database management systems. And I joined Meta from the data side, and I saw this tremendous amount of data growth, which cost a lot of money, and we were analyzing what was going on. And it was clear that AI was driving all this data generation. So it was a very interesting time, because when I joined Meta, Meta was going through ramping down mobile-first, finishing the mobile-first transition and then starting AI-first. 
And there's a fundamental reason for that sequence, because mobile-first gave a full range of user engagement that had never existed before. And all this user engagement generated a lot of data, and this data powers AI. So then the whole entire industry was also going through this same transition. When I saw, okay, this AI is powering all this data generation, and looked at where our AI stack was: there was no software, there was no hardware, there were no people, there was no team. I wanted to dive in there and help this movement. So when I started, it was a very interesting industry landscape. There were a lot of AI frameworks; a kind of proliferation of AI frameworks was happening in the industry. But all the AI frameworks focused on production, and they used a very certain way of defining the graph of the neural network, and then used that to drive model iteration and productionization. And PyTorch is completely different. Its creator could also assume that he was the user of his own product. He basically said: researchers face so much pain using existing AI frameworks, this is really hard to use, and I'm going to do something different for myself. And that's the origin story of PyTorch. PyTorch actually started as the framework for researchers. They didn't care about production at all. And as they grew in terms of adoption, so the interesting part of AI is that research is upstream of production. There are so many researchers across academia, across industry; they innovate and they put their results out there in open source, and that powers the downstream productionization. So it was brilliant for Meta to establish PyTorch as a strategy to drive massive adoption in open source, because Meta internally is a PyTorch shop. So it creates a flywheel effect. So that's kind of the strategy behind PyTorch. But when I took on PyTorch, it was kind of at a cusp: Meta established PyTorch as the framework for both research and production. No one had done that before. 
And we had to kind of rethink how to architect PyTorch so we could really sustain production workloads: the stability, reliability, low latency. All these production concerns were never a concern before; now they were. And we actually had to adjust its design and make it work for both sides. And that took us five years, because Meta has so many AI use cases: all the way from ranking and recommendation powering the business top line, to ranking the newsfeed, video ranking, to site integrity detecting bad content automatically using AI, to all kinds of effects, translation, image classification, object detection, all this. And also AI running on the server side, on mobile phones, on AR/VR devices, the wide spectrum. So by that time we basically managed to support AI ubiquitously, everywhere across Meta. But interestingly, through open source engagement, we worked with a lot of companies. It was clear to us that this industry was starting to take on the AI-first transition. And of course, Meta's hyperscale always goes ahead of the industry. And it felt like, when we started this AI journey at Meta, there was no software, no hardware, no team. For many companies we engaged with through PyTorch, we felt the pain. That's the genesis of why we felt like, hey, if we create Fireworks and support the industry going through this transition, it will have a huge amount of impact. Of course, the problems that the industry is facing will not be the same as Meta's. Meta is so big, right? So it's kind of skewed towards extreme scale and extreme optimization, and the industry will be different. But we felt like we had the technical chops and we'd seen a lot. We looked to kind of drive that. So yeah, that's how we started. Swyx [00:06:58]: When you and I chatted about the origins of Fireworks, it was originally envisioned more as a PyTorch platform, and then later became much more focused on generative AI. Is that fair to say? What was the customer discovery here? Lin [00:07:13]: Right. 
So I would say our initial blueprint was that we should build a PyTorch cloud, because PyTorch is a library and there was no SaaS platform to enable AI workloads. Swyx [00:07:26]: Even in 2022, it's interesting. Lin [00:07:28]: I would not say absolutely none, but cloud providers had some of those; it's just not a first-class citizen, right? In 2022, TensorFlow was still massively in production. And this is all pre-GenAI, and PyTorch was kind of getting more and more adoption. But there was no PyTorch-first SaaS platform in existence. At the same time, we are also a very pragmatic set of people. We really wanted to make sure, from the get-go, we got really, really close to customers. We understand their use case, we understand their pain points, we understand the value we deliver to them. So we wanted to take a different approach: instead of building a horizontal PyTorch cloud, we wanted to build a verticalized platform first. And then we talked with many customers. And interestingly, we started the company in September 2022, and in October, November, OpenAI announced ChatGPT. And then boom, when we talked with many customers, they were like, can you help us work on the GenAI aspect? So of course, there were some open source models. They were not as good at that time, but people were already putting a lot of attention there. Then we decided that if we were going to pick a vertical, we were going to pick GenAI. The other reason is that all GenAI models are PyTorch models. So that's another reason. We believed that, because of the nature of GenAI, it's going to generate a lot of human-consumable content. It will drive a lot of consumer, customer, and developer-facing application and product innovation. Guaranteed. We're just at the beginning of this. Our prediction is that for those kinds of applications, inference is much more important than training, because inference scale is proportional to, at the upper limit, the world population, and training scale is proportional to the number of researchers. 
Of course, each training round can be very expensive. Although PyTorch supports both inference and training, we decided to laser-focus on inference. So yeah, so that's how we got started. And we launched our public platform in August last year. When we launched, it was a single product: a distributed inference engine with a simple, OpenAI-compatible API and many models. We started with LLMs and then we added a lot of models. Fast forward to now, we are a full platform with multiple product lines. So we'd love to kind of dive deep into what we offer. But it's been a very fun journey in the past two years.Alessio [00:09:49]: What was the transition like? You started focused on PyTorch, on people who want to understand the framework and get it live. And now maybe most people that use you don't really know much about PyTorch at all. You know, they're just trying to consume a model. From a product perspective, what were some of the decisions early on? Right in October, November, were you just like, hey, most people just care about the model, not about the framework, we're going to make it super easy, or was it more a gradual transition to the model librarySwyx [00:10:16]: you have today?Lin [00:10:17]: Yeah. So our product decisions are all based on who our ICP is. And one thing I want to acknowledge here is that Gen AI technology is disruptive. It's very different from AI before Gen AI. It's a clear leap forward. Because before Gen AI, the companies that wanted to invest in AI had to train from scratch. There was no other way. There were no foundation models. They didn't exist. So that meant, to start, first hire a team who is capable of crunching data. There's a lot of data to crunch, right? Because training from scratch, you have to prepare a lot of data. And then they need to have GPUs to train, and then you start to manage GPUs. So then it becomes a very complex project. It takes a long time, and not many companies can afford it, actually.
And Gen AI is a very different game right now, because there are foundation models. So you don't have to train anymore. That makes AI much more accessible as a technology. An app developer or a product manager, even, not a developer, can interact with Gen AI models directly. So our goal is to make AI accessible to all app developers and product engineers. That's our goal. So then getting them into building models doesn't make any sense anymore with this new technology. Instead, building easy, accessible APIs is the most important thing. Early on, when we got started, we decided we were going to be OpenAI-compatible. It's just very easy for developers to adopt this new technology, and we manage the underlying complexity of serving all these models.Swyx [00:11:56]: Yeah, OpenAI has become the standard. Even as we're recording today, Gemini announced that they have OpenAI-compatible APIs. Interesting. So everything just needs to be drop-in compatible, and then everyone falls in line.Lin [00:12:09]: That's interesting, because we are working very closely with Meta as one of the partners. Meta, of course, has been very generous to donate many very, very strong open source models, with more expected to come. But they have also announced Llama Stack, which is basically a standardized upper-level stack built on top of Llama models. So they don't just want to give out models and have you figure out what the upper stack is. They instead want to build a community around the stack and build a new standard. I think there are interesting dynamics at play in the industry right now, whether it's more standardized around OpenAI, because they are creating the top of the funnel, or standardized around Llama, because it's the most-used open source model. So I think it's a lot of fun working at this time.Swyx [00:13:01]: I've been a little bit more doubtful on Llama Stack, I think you've been more positive.
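The OpenAI-compatible convention Lin describes above means switching providers is mostly a base-URL and model-name change. A minimal sketch of the request shape, with no network call; the Fireworks model ID shown is illustrative, not an exact catalog name:

```python
# Build an OpenAI-style chat completion request body. Only the base URL and
# model name change when targeting a different OpenAI-compatible provider.
def chat_request(base_url: str, model: str, user_message: str) -> dict:
    return {
        "url": f"{base_url}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }

# Same helper, Fireworks-flavored arguments (model ID is a placeholder):
req = chat_request(
    "https://api.fireworks.ai/inference/v1",
    "accounts/fireworks/models/llama-example",
    "Hello!",
)
print(req["url"])
```

In practice you would POST `req["body"]` to `req["url"]` with an API key header; official SDKs accept a `base_url` override for exactly this reason.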
Basically it's just like the Meta version of whatever Hugging Face offers, you know, or TensorRT, or vLLM, or whatever the open source opportunity is. But to me, it's not clear that just because Meta open sources Llama, the rest of Llama Stack will be adopted. And it's not clear why I should adopt it. So I don't know if you agree.Lin [00:13:27]: It's very early right now. That's why I work very closely with them and give them feedback. The feedback to the Meta team is very important. So then they can use that to continue to improve the model and also improve the higher-level stack. I think the success of Llama Stack heavily depends on community adoption. And there's no way around it. And I know the Meta team would like to work with a broader set of the community. But it's very early.Swyx [00:13:52]: One thing that, after your Series B, so you raised from Benchmark, and then Sequoia. I remember being close to you for at least your Series B announcement, you started betting heavily on this term of Compound AI. It's not a term that we've covered very much in the podcast, but I think it's definitely getting a lot of adoption from Databricks and Berkeley people and all that. What's your take on Compound AI? Why is it resonating with people?Lin [00:14:16]: Right. So let me give a little bit of context on why we even considered that space.Swyx [00:14:22]: Because like pre-Series B, there was no message, and now it's like on your landing page.Lin [00:14:27]: So it's a very organic evolution. When we first launched our public platform, we were a single product. We were a distributed inference engine, where we do a lot of innovation: customized CUDA kernels, running on different kinds of hardware, distributed, disaggregated inference execution, and all kinds of caching. So that's one product line: the fastest, most cost-efficient inference platform.
Because we wrote PyTorch code, we basically have a special PyTorch build for that, together with custom kernels we wrote. And then as we worked with many more customers, we realized, oh, in the distributed inference engine, our design is one-size-fits-all. We wanted to have this inference endpoint where everyone comes in, and no matter what form and shape of workload they have, it will just work for them. So that's great. But the reality is, we realized all customers have different kinds of use cases. The use cases come in all different forms and shapes. And the end result is that the data distribution in their inference workload doesn't align with the data distribution in the training data for the model. It's a given, actually. If you think about it, researchers have to guesstimate what is important and what's not important in preparing data for training. So because of that misalignment, we leave a lot of quality, latency, and cost improvement on the table. So then we said, OK, we want to heavily invest in a customization engine. And we announced it, called Fire Optimizer. Fire Optimizer basically helps users navigate a three-dimensional optimization space across quality, latency, and cost. So it's a three-dimensional curve. And even within one company, for different use cases, they want to land in different spots. So we automate that process for our customers. It's very simple. You have your inference workload. You inject it into the optimizer along with the objective function. And then we spit out the inference deployment config and the model setup. So it's your customized setup. So that is a completely different product from the one-size-fits-all product thinking. And now on top of that, we provide a huge variety of state-of-the-art models, hundreds of them, starting from text, with large state-of-the-art language models. That's where we started. And as we talked with many customers, we realized, oh, audio and text are very, very close.
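The "inject your workload plus an objective function, get back a config" flow Lin describes for Fire Optimizer can be sketched as a toy search over candidate deployments. Everything here (config names, scores, weights) is made up to illustrate the three-way quality/latency/cost tradeoff, not the actual Fire Optimizer interface:

```python
# Candidate deployment configs, each measured on the three axes Lin names.
candidates = [
    {"name": "fp16-large", "quality": 0.92, "latency_ms": 900, "cost": 1.00},
    {"name": "fp8-large",  "quality": 0.90, "latency_ms": 500, "cost": 0.60},
    {"name": "fp8-small",  "quality": 0.84, "latency_ms": 200, "cost": 0.25},
]

def pick_config(candidates, objective):
    """Return the config scoring highest under a user-supplied objective."""
    return max(candidates, key=objective)

# A latency-sensitive product weights speed and cost heavily:
best = pick_config(
    candidates,
    objective=lambda c: c["quality"] - 0.0005 * c["latency_ms"] - 0.1 * c["cost"],
)
print(best["name"])  # the cheap, fast config wins under this objective
```

Different products land in different spots simply by changing the objective function; a quality-only objective would pick the large fp16 config instead.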
Many of our customers have started to build assistants, all kinds of assistants, using text. And they immediately want to add audio, audio in, audio out. So we support transcription, translation, speech synthesis, text-audio alignment, all different kinds of audio features. It's a big announcement. You should have heard it by the time this is out. And the areas of vision and text are also very close to each other. Because a lot of information doesn't live in plain text. A lot of information lives in multimedia formats: images, PDFs, screenshots, and many other formats. So oftentimes to solve a problem, we need to put the vision model first to extract information, and then use a language model to process it and send out the results. So vision is important. We also support vision models, various different kinds of vision models specialized in processing different kinds of sources and extraction. And we're also going to have another announcement of a new API endpoint we'll support for people to upload various kinds of multimedia content and then get very accurate information extracted and fed into the LLM. And of course, we support embeddings, because embeddings are very important for semantic search, for RAG, and all this. In addition to that, we also support image generation models, text-to-image, image-to-image, and we're adding text-to-video as well to our portfolio. So it's a very comprehensive model catalog that's built on top of Fire Optimizer and the distributed inference engine. But then as we talked with more customers solving business use cases, we realized one model is not sufficient to solve their problem. And it's very clear. One reason is the model hallucinates. Many customers, when they onboard onto this Gen AI journey, they thought it was magical. Gen AI is going to solve all my problems magically. But then they realize, oh, this model hallucinates. It hallucinates because it's not deterministic, it's probabilistic.
So it's designed to always give you an answer, but based on probabilities, so it hallucinates. And that's actually sometimes a feature, for creative writing, for example. Sometimes it's a bug, because, hey, you don't want to give misinformation. And different models also have different specialties. To solve a problem, you want to ask different specialized models, to kind of decompose your task into multiple small, narrow tasks, and then have an expert model solve each task really well. And of course, the model doesn't have all the information. It has limited knowledge because the training data is finite, not infinite. So the model oftentimes doesn't have real-time information. It doesn't know any proprietary information within the enterprise. It's clear that in order to really build a compelling application on top of Gen AI, we need a compound AI system. A compound AI system basically is going to have multiple models across modalities, along with APIs, whether public APIs or internal proprietary APIs, storage systems, database systems, and knowledge, all working together to deliver the best answer.Swyx [00:20:07]: Are you going to offer a vector database?Lin [00:20:09]: We actually heavily partner with several big vector database providers. Which is your favorite? They are all great in different ways. But it's public information, like MongoDB is our investor. And we have been working closely with them for a while.Alessio [00:20:26]: When you say distributed inference engine, what do you mean exactly? Because when I hear your explanation, it's almost like you're centralizing a lot of the decisions through the Fireworks platform on the quality and whatnot. What do you mean by distributed? Is it like you have GPUs in a lot of different clusters, so you're sharding the inference across the same model?Lin [00:20:45]: So first of all, we run across multiple GPUs. But the way we distribute across multiple GPUs is unique.
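The compound AI system described above, specialized models plus APIs and knowledge sources working together, can be sketched as a tiny pipeline. The model and retrieval steps are stubbed as plain deterministic functions purely to show the wiring; a real system would swap in vision/LLM/API calls:

```python
def extract_text(document: dict) -> str:
    """Stand-in for a vision model extracting text from an image or PDF."""
    return document["ocr_text"]

def retrieve_facts(query: str, knowledge: dict) -> str:
    """Stand-in for retrieval over proprietary or real-time information."""
    return knowledge.get(query, "no match")

def answer(extracted: str, facts: str) -> str:
    """Stand-in for the language model composing the final response."""
    return f"Based on '{extracted}' and '{facts}': done"

# Wire the specialists together: vision -> retrieval -> language model.
doc = {"ocr_text": "invoice #123"}
kb = {"invoice #123": "paid on 2024-01-05"}
result = answer(extract_text(doc), retrieve_facts("invoice #123", kb))
print(result)
```

The point is the shape: each narrow task goes to the component best at it, and the LLM only composes at the end, which is how hallucination and missing-knowledge gaps get mitigated.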
We don't distribute the whole model monolithically across multiple GPUs. We chop it into pieces and scale them completely differently, based on where the bottleneck is. We are also distributed across regions. We have been running in North America, EMEA, and Asia. We have regional affinity for applications, because latency is extremely important. We are also doing global load balancing, because a lot of applications quickly scale to a global population. And at that scale, different continents wake up at different times, and you want to load-balance across them. We also manage various different hardware SKUs from different hardware vendors. And different hardware designs are best for different types of workload, whether it's long context, short context, or long generation. All these different types of workload are best fitted to different hardware SKUs. And then we can even distribute a workload across different hardware. So the distribution is actually all around, in the full stack.Swyx [00:22:02]: At some point, we'll show on the YouTube, the image that Ray, I think, has been working on with all the different modalities that you offer. To me, it's basically you offer the open source version of everything that OpenAI typically offers. I don't think there's anything missing. Actually, if you do text-to-video, you will be a superset of what OpenAI offers, because they don't have Sora. Is that Mochi, by the way? Mochi. Mochi, right?Lin [00:22:27]: Mochi. And there are a few others. I will say, the interesting thing is, I think we're betting on the open source community proliferating. This is literally what we're seeing. And there are amazing video generation companies. There are amazing audio companies. Across the board, the innovation is off the charts, and we are building on top of that.
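The regional-affinity routing Lin describes above reduces, at its simplest, to "lowest-latency region with capacity." A toy sketch with made-up region names and numbers:

```python
# Hypothetical per-region state; in a real system these numbers would come
# from live health checks and capacity telemetry.
regions = {
    "us-east":  {"latency_ms": 30,  "has_capacity": True},
    "eu-west":  {"latency_ms": 90,  "has_capacity": True},
    "ap-south": {"latency_ms": 180, "has_capacity": False},
}

def route(regions: dict) -> str:
    """Pick the lowest-latency region that still has capacity."""
    available = {k: v for k, v in regions.items() if v["has_capacity"]}
    return min(available, key=lambda k: available[k]["latency_ms"])

print(route(regions))
```

Global load balancing then amounts to re-running this decision as capacity and demand shift with the day/night cycle across continents.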
I think that's the advantage we have compared with a closed source company.Swyx [00:22:58]: I think I want to restate the value proposition of Fireworks for people who are comparing you versus a raw GPU provider like a RunPod or Lambda or anything like those, which is that you create the developer experience layer and you also make it easily scalable or serverless or as an endpoint. And then, I think for some models, you have custom kernels, but not all models.Lin [00:23:25]: Almost for all models. For all large language models, Llama models, and the VLMs. Almost all models we serve.Swyx [00:23:35]: And so that is called Fire Attention. I don't remember the speed numbers, but apparently much better than vLLM, especially on a concurrency basis.Lin [00:23:44]: So Fire Attention is specific mostly to language models, but for other modalities, we also have customized kernels.Swyx [00:23:51]: And I think the typical challenge for people is understanding that that has value, and then there are other people who are also offering open-source models. Your moat is your ability to offer a good experience for all these customers. But if your existence is entirely reliant on people releasing nice open-source models, other people can also do the same thing.Lin [00:24:14]: So I would say we build on top of the open-source model foundation. That's the foundation we build on top of. But we look at the value prop from the lens of application developers and product engineers. They want to create new UX. So what's happening in the industry right now is people are thinking about a completely new way of designing products. And I'm talking to so many founders, it's just mind-blowing. They've helped me understand that the existing way of doing PowerPoint, the existing way of coding, the existing way of managing customer service, is actually putting a box around our heads. For example, PowerPoint.
With PowerPoint, we always need to think about how to fit my storytelling into this format of one slide after another. And I'm going to juggle the design together with what story to tell. But the most important thing is what our storyline is, right? So why don't we create a space that is not limited to any format? Those kinds of new product UX designs, combined with automated content generation through Gen AI, are the new thing that many founders are doing. What are the challenges they're facing? Let's go from there. One is, again, because a lot of products built on top of Gen AI are consumer- or developer-facing, they require an interactive experience. It's just the kind of product experience we've all gotten used to. And our desire is to actually get faster and faster interaction. Otherwise, nobody wants to spend the time, right? And that requires low latency. The other thing is, the nature of being consumer- or developer-facing is that your audience is very big. You want to scale up to product-market fit quickly. But if you lose money at a small scale, you're going to go bankrupt quickly. So it's actually a big contrast: I actually have product-market fit, but when I scale, I scale out of my business. That's kind of a funny way to think about it. So having low latency and low cost is essential for those new applications and products to survive and really become a generational company. That's the design point for our distributed inference engine and Fire Optimizer. Fire Optimizer, you can think about as a feedback loop. The more you feed your inference workload to our inference engine, the more we help you improve quality, lower latency further, and lower your cost. It basically becomes better. And we automate that, because we don't want you as an app developer or product engineer to think about how to figure out all these low-level details. It's impossible, because you're not trained to do that at all.
You should keep your focus on product innovation. And then the compound AI. We actually feel a lot of pain as app developers and engineers: there are so many models. Every week, there's at least a new model coming out.Swyx [00:27:09]: Tencent had a giant model this week. Yeah, yeah.Lin [00:27:13]: I saw that. I saw that.Swyx [00:27:15]: It's like 500 billion parameters.Lin [00:27:18]: So they're like, should I keep chasing this or should I forget about it? And which model should I pick to solve which sub-problem? How do I even decompose my problem into those smaller problems and fit the models to them? I have no idea. And then there are two ways to think about this design. I think I talked about that in the past. One is imperative, as in you figure out how to do it. You give developers tools to dictate how to do it. Or you build a declarative system, where a developer tells the system what they want to do, not how. These are two completely different designs. So the analogy I want to draw is, in the data world, the database management system is a declarative system, because people use databases through SQL. SQL is a way of saying, what do you want to extract out of a database? What kind of result do you want? But you don't figure out how many nodes it's going to run on, how it lays out your data on disk, which index it uses, which projection. You don't need to worry about any of those. The database management system will figure out the best plan and execute on that. So databases are declarative. And that makes them super easy. You just learn SQL, which is learning the semantic meaning of SQL, and you can use it. On the imperative side, there are a lot of ETL pipelines. People design DAG systems with triggers, with actions, and you dictate exactly what to do. And if it fails, then how to recover. So that's an imperative system. We have seen a range of systems in the ecosystem go different ways. I think there's value in both. There's value in both.
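The declarative/imperative contrast Lin draws can be shown in miniature. Imperative: you spell out each step. Declarative: you state what you want and a small "engine" (standing in for a query planner) decides how; the spec format here is invented for illustration:

```python
rows = [{"city": "SF", "n": 3}, {"city": "NY", "n": 7}, {"city": "SF", "n": 2}]

# Imperative: dictate the exact procedure, step by step.
total = 0
for r in rows:
    if r["city"] == "SF":
        total += r["n"]

# Declarative: describe the result; the engine chooses the execution plan.
def run_query(data, spec):
    key, value = spec["where"]
    return sum(r[spec["sum"]] for r in data if r[key] == value)

query = {"where": ("city", "SF"), "sum": "n"}  # ~ SELECT SUM(n) WHERE city='SF'
assert run_query(rows, query) == total
print(total)
```

The declarative caller never mentions iteration order, indexes, or partitioning, which is exactly the property that makes SQL (and, in Lin's argument, a declarative AI system) easy for application developers.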
I don't think one is going to subsume the other. But we are leaning more into the philosophy of the declarative system. Because from the lens of app developers and product engineers, that would be easiest for them to integrate.Swyx [00:29:07]: I understand that's also why PyTorch won as well, right? This is one of the reasons. Ease of use.Lin [00:29:14]: Focus on ease of use, and then let the system take on the hard challenges and complexities. So we follow, we extend that thinking into our current system design. So another announcement is that our next declarative system is going to appear as a model that has extremely high quality. And this model is inspired by OpenAI's o1 announcement. You should see it by the time we announce this, or soon.Alessio [00:29:46]: Trained by you.Lin [00:29:47]: Yes.Alessio [00:29:48]: Is this the first model that you trained? It's not the first.Lin [00:29:52]: We actually have trained a model called FireFunction. It's a function calling model. It's our first step into the compound AI system. Because a function calling model can dispatch a request to multiple APIs. We have a pre-baked set of APIs the model has learned. You can also add additional APIs through the configuration to let the model dispatch accordingly. So we have a very high quality function calling model that's already released. We actually have three versions. The latest version is very high quality. But now we take a further step: you don't even need to use a function calling model. You use the new model we're going to release. It will solve a lot of problems, approaching very high OpenAI quality. So I'm very excited about that.Swyx [00:30:41]: Do you have any benchmarks yet?Lin [00:30:43]: We have a benchmark. We're going to release it hopefully next week. We just put our model on LMSYS and people are guessing. Is this the next Gemini model or a MADIS model? People are guessing. That's very interesting.
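The function-calling dispatch Lin describes, a model emitting a structured call that the application routes to a registered API, looks roughly like this. The registry and call format are illustrative, not FireFunction's actual interface:

```python
# Two stub "APIs" the application registers for the model to call.
def get_weather(city: str) -> str:
    return f"sunny in {city}"

def get_stock(ticker: str) -> str:
    return f"{ticker} is up"

REGISTRY = {"get_weather": get_weather, "get_stock": get_stock}

def dispatch(model_output: dict) -> str:
    """Route a structured tool call emitted by the model to the matching API."""
    fn = REGISTRY[model_output["name"]]
    return fn(**model_output["arguments"])

# Pretend the function-calling model decided on get_weather with these args:
call = {"name": "get_weather", "arguments": {"city": "Paris"}}
print(dispatch(call))
```

Adding an API through configuration, as Lin mentions, corresponds here to adding an entry to the registry (plus describing it to the model so it can choose it).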
We're watching the Reddit discussion right now.Swyx [00:31:00]: I have to ask more questions about this. When OpenAI released o1, a lot of people asked about whether or not it's a single model or whether it's a chain of models. Noam and basically everyone on the Strawberry team were very insistent that what they did for reinforcement learning, chain of thought, cannot be replicated by a whole bunch of open source model calls. Do you think that that is wrong? Have you done the same amount of work on RL as they have, or was it a different direction?Lin [00:31:29]: I think they take a very specific approach, where the caliber of the team is very high. So I do think they are the domain experts in doing the things they are doing. But I don't think there's only one way to achieve the same goal. We're in the same direction in the sense that the quality scaling law is shifting from training to inference. On that, I fully agree with them. But we're taking a completely different approach to the problem. All of that is because, of course, we didn't train the model from scratch. All of that is because we build on the shoulders of giants. The current models we have access to are getting better and better. The future trend is that the gap between the open source models and the closed source models is just going to shrink to the point where there's not much difference. And then we're on a level playing field. That's why I think our early investment in inference and all the work we do around balancing across quality, latency, and cost pays off, because we have accumulated a lot of experience, and that empowers us to release this new model that is approaching o1-level quality.Alessio [00:32:39]: I guess the question is, what do you think the gap to catch up will be? Because I think everybody agrees that open source models eventually will catch up. And I think with GPT-4, then with Llama 3.2, 3.1, 405B, we closed the gap. And then o1 just reopened the gap so much, and it's unclear.
Obviously, you're saying your model will have...Swyx [00:32:57]: We're closing that gap.Alessio [00:32:58]: But you think in the future, it's going to be months?Lin [00:33:02]: So here's the thing that's happening. There are public benchmarks. They are what they are. But in reality, open source models in certain dimensions are already on par with or beat closed source models. So for example, in the coding space, open source models are really, really good. And in function calling, FireFunction is also really, really good. So it's all a matter of whether you build one model to solve all the problems, and you want to be the best at solving all the problems, or, in the open source domain, it's going to specialize. All these different model builders specialize in certain narrow areas. And it's logical that they can be really, really good in those very narrow areas. And that's our prediction: with specialization, there will be a lot of expert models that are really, really good, and even better than one-size-fits-all closed source models.Swyx [00:33:55]: I think this is the core debate that I am still not 100% decided on either way, in terms of compound AI versus normal AI. Because you're basically fighting the bitter lesson.Lin [00:34:09]: Look at human society, right? We specialize. And you feel really good about someone specializing in doing something really well, right? And that's how our society evolved from ancient times. We were all generalists. We did everything. Now we heavily specialize in different domains. So my prediction is that in the AI model space, it will happen also. Except for the bitter lesson.Swyx [00:34:30]: You get short-term gains by having specialists, domain specialists, and then someone just needs to train a 10x bigger model on 10x more data, with 10x more compute perhaps, whatever the current scaling law is. And then it supersedes all the individual models because of some generalized intelligence slash world knowledge.
I think that is the core insight of the GPTs, the GPT-1, 2, 3 series. Right.Lin [00:34:56]: But the training scaling law is because you have an increasing amount of data to train on. And you can do a lot of compute. I think on the data side, we're approaching the limit. And the only data to increase beyond that is synthetically generated data. And then there's the question of what the secret sauce is there, right? Because if you have a very good large model, you can generate very good synthetic data and then continue to improve quality. So that's why I think at OpenAI, they are shifting from the training scaling law intoSwyx [00:35:25]: the inference scaling law.Lin [00:35:25]: And it's the test-time compute and all this. So I definitely believe that's the future direction. And that's what we are really good at: doing inference.Swyx [00:35:34]: A couple of questions on that. Are you planning to share your reasoning traces?Lin [00:35:39]: That's a very good question. We are still debating.Swyx [00:35:43]: Yeah.Lin [00:35:45]: We're still debating.Swyx [00:35:46]: I would say, for example, it's interesting that, for example, with SWE-Bench, if you want to be considered for the ranking, you have to submit your reasoning traces. And that has actually disqualified some of our past guests. Cosine was doing well on SWE-Bench, but they didn't want to leak those results. So that's why you don't see o1-preview on SWE-Bench, because they don't submit their reasoning traces. And obviously, it's IP. But also, if you're going to be more open, then that's one way to be more open. So your model is not going to be open source, right? It's going to be an endpoint that you provide. Okay, cool. And then pricing, also the same as OpenAI, just kind of based on...Lin [00:36:25]: Yeah, this is... I don't have that information, actually. Everything is going so fast, we haven't even thought about that yet. Yeah, I should be more prepared.Swyx [00:36:33]: I mean, this is live.
You know, it's nice to just talk about it as it goes live. Any other things that you want feedback on or you're thinking through? It's kind of nice to just talk about something when it's not decided yet. About this new model. It's going to be exciting. It's going to generate a lot of buzz. Right.Lin [00:36:51]: I'm very excited to see how people are going to use this model. So there's already a Reddit discussion about it. And people are asking very deep, mathematical questions. And the model got them right, surprisingly. And internally, we're also asking the model to generate what AGI is. And it generates a very complicated DAG of its thinking process. So we're having a lot of fun testing this internally. But I'm more curious, how will people use it? What kind of applications are they going to try and test it on? And that's where we'd really like to hear feedback from the community. And also feedback to us: What works out well? What doesn't work out well? What works out well but surprises them? And what kinds of things do they think we should improve on? That kind of feedback will be tremendously helpful.Swyx [00:37:44]: Yeah. So I've been a production user of o1-preview and o1-mini since launch. I would say they're very, very obvious jumps in quality. So much so that they made the previous state-of-the-art look bad. It's really that stark, that difference. The number one thing, just feedback or feature requests, is people want control over the budget. Because right now, o1 kind of decides its own thinking budget. But sometimes you know how hard the problem is, and you want to actually tell the model, spend two minutes on this. Or spend some dollar amount. Maybe it's time, maybe it's dollars. I don't know what the unit of the budget is. That makes a lot of sense.Lin [00:38:27]: So we actually thought about that requirement. And at some point, we will need to support that. Not initially. But that makes a lot of sense.Swyx [00:38:38]: Okay.
So that was a fascinating overview of just the things that you're working on. First of all, I realized that... I don't know if I've ever given you this feedback. But I think you guys are one of the reasons I agreed to advise you. Because I think when you first met me, I was kind of dubious. I was like... Who are you? There's Replicate. There's Together. There's Lepton. There's a whole bunch of other players. You're in very, very competitive fields. Like, why will you win? And the reason I actually changed my mind was I saw you guys shipping. I think your surface area is very big. The team is not that big. No. We're only 40 people. Yeah. And now here you are trying to compete with OpenAI and everyone else. What is the secret?Lin [00:39:21]: I think the team. The team is the secret.Swyx [00:39:23]: Oh boy. So there's nothing I can just copy. You just... No.Lin [00:39:30]: I think we all come from a very aligned culture. Because most of our team came from Meta.Swyx [00:39:38]: Yeah.Lin [00:39:38]: And many startups. So we really believe in results. One is results. And second is customers. We're very customer obsessed. And we don't want to drive adoption for the sake of adoption. We really want to make sure we understand we are delivering a lot of business value to the customer. And we really value their feedback. So we would wake up at midnight and deploy some model for them. Shuffle some capacity for them. And yeah, over the weekend, no-brainer.Swyx [00:40:15]: So yeah.Lin [00:40:15]: So that's just how we work as a team. And the caliber of the team is really, really high as well. So as a plug, we're hiring. We're expanding very, very fast. So if you are passionate about working on the most cutting-edge technology in the gen AI space, come talk with us. Yeah.Swyx [00:40:38]: Let's talk a little bit about that customer journey. I think one of your more famous customers is Cursor. We were the first podcast to have Cursor on.
And then obviously since then, they have blown up. Cause and effect are not related. But you guys especially worked on a fast apply model, where you were one of the first people to work on speculative decoding in a production setting. Maybe just talk about what was behind the scenes of working with Cursor?Lin [00:41:03]: I will say Cursor is a very, very unique team. I think the unique part is the team has very high technical caliber. There's no question about it. But while many companies building coding copilots will say, I'm going to build the whole entire stack because I can, they are unique in the sense that they seek partnership. Not because they cannot. They're fully capable, but they know where to focus. That to me is amazing. And of course, they wanted to find a good partner. So we spent some time working together. They push us very aggressively, because for them to deliver a high caliber product experience, they need the latency. They need the interactivity, but also high quality at the same time. So actually, we expanded our product features quite a lot as we supported Cursor. And they are growing so fast. And we massively scaled quickly across multiple regions. And we developed a pretty intense inference stack, almost similar to what we do for Meta. I think that's a very, very interesting engagement. And through that, there's a lot of trust being built. They realize, hey, this is a team they can really partner with, and they can go big with. That comes back to, hey, we're really customer obsessed. And with all the engineers working with them, there's just an enormous amount of time syncing together with them and discussing. And we're not big on meetings, but we are like a Slack channel always on. Yeah, so you almost feel like you're working as one team. So I think that's a real highlight.Swyx [00:42:38]: Yeah. For those who don't know, so basically Cursor is a VS Code fork. But most of the time, people will be using closed models.
Like I actually use a lot of Sonnet. So you're not involved there, right? It's not like you host Sonnet or you have any partnership with it. You're involved where Cursor Small, or like their house-brand models, are concerned, right?Lin [00:42:58]: I don't know what I can say, but the things they haven't said.Swyx [00:43:04]: Very obviously, the dropdown is 4o, but in Cursor, right? So I assume that the Cursor side is the Fireworks side. And then the other side, they're calling out the other. Just kind of curious. And then, do you see any more opportunity on the... You know, I think you made a big splash with 1,000 tokens per second. That was because of speculative decoding. Is there more to push there?Lin [00:43:25]: We push a lot. Actually, when I mentioned FireOptimizer, right? So as in, we have a unique automation stack that is one-size-fits-one. We actually deployed it to Cursor early on, basically optimized for their specific workload. And there's a lot of juice to extract out of there. And we saw success in that product. It actually can be widely adopted. So that's why we started a separate product line called FireOptimizer. So speculative decoding is just one approach. And speculative decoding here is not static. We actually wrote a blog post about it. There are so many different ways to do speculative decoding. You can pair a small model with a large model in the same model family. Or you can have EAGLE heads and so on. There are different trade-offs in which approach you take. It really depends on your workload. And then with your workload, we can align the EAGLE heads or Medusa heads or the small-big model pair much better to extract the best latency reduction. So all of that is part of the FireOptimizer offering.Alessio [00:44:23]: I know you mentioned some of the other inference providers. I think the other question that people always have is around benchmarks. So you get different performance on different platforms. How should people think about... 
People are like, hey, Llama 3.2 is X on MMLU. But maybe using speculative decoding, you go down a different path. Maybe some providers run a quantized model. How should people think about how much they should care about how you're actually running the model? What's the delta between all the magic that you do and what a raw model...Lin [00:44:57]: Okay, so there are two big development cycles. One is experimentation, where they need fast iteration. They don't want to think about quality, and they just want to experiment with product experience and so on. So that's one. And then once it looks good, they go post-product-market-fit and want to scale. And quality is really important. And latency and all the other things are becoming important. During the experimentation phase, just pick a good model. Don't worry about anything else. Make sure you can even generate the right solution for your product. And that's the focus. And then post-product-market-fit, that's when the three-dimensional optimization curve starts to kick in across quality, latency, and cost, and where you should land. And to me, it's purely a product decision. For many products, if you choose lower quality but better speed and lower cost, and it doesn't make a difference to the product experience, then you should do it. So that's why I think inference is part of the validation. The validation doesn't stop at offline eval. The validation will go through A/B testing, through inference. And that's where we offer various different configurations for you to test which is the best setting. So this is the traditional product evaluation. So product evaluation should also include your new model versions and different model setups in the consideration.Swyx [00:46:22]: I want to specifically talk about what happened a few months ago with some of your major competitors. I mean, all of this is public. What is your take on what happened? 
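The three-dimensional quality/latency/cost decision Lin describes can be sketched as a simple constrained pick over candidate serving configs. Every number below is invented for illustration; real evaluation would use your own offline evals and A/B tests.

```python
# Toy version of the quality / latency / cost pick: choose the highest-quality
# serving config that fits the product's latency and cost budgets.
# All figures are invented for illustration.

configs = [
    {"name": "fp16",         "quality": 0.82, "p50_ms": 900, "usd_per_mtok": 0.90},
    {"name": "int8",         "quality": 0.80, "p50_ms": 520, "usd_per_mtok": 0.50},
    {"name": "int8+specdec", "quality": 0.81, "p50_ms": 310, "usd_per_mtok": 0.55},
    {"name": "int4",         "quality": 0.74, "p50_ms": 260, "usd_per_mtok": 0.30},
]

def pick(candidates, max_ms, max_cost):
    # keep only configs inside both budgets, then maximize quality
    ok = [c for c in candidates
          if c["p50_ms"] <= max_ms and c["usd_per_mtok"] <= max_cost]
    return max(ok, key=lambda c: c["quality"]) if ok else None

best = pick(configs, max_ms=600, max_cost=0.60)
```

A chat product with a tight latency budget lands on a different point than an offline batch job, which is the "purely a product decision" part.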
And maybe you want to set the record straight on how Fireworks does quantization, because I think a lot of people may have outdated perceptions or they didn't read the clarification post on your approach to quantization.Lin [00:46:44]: First of all, it's always a surprise to us that without any notice, we got called out.Swyx [00:46:51]: Specifically by name, which is normally not what...Lin [00:46:54]: Yeah, in a public post. And with a certain interpretation of our quality. So I was really surprised. And it's not a good way to compete, right? We want to compete fairly. And oftentimes when one vendor gives out results, the interpretation of another vendor is always extremely biased. So we actually refrain from doing any of those. And we happily partner with third parties to do the most fair evaluation. So we were very surprised. And we don't think that's a good way to figure out the competition landscape. So then we reacted. I think when it comes to quantization, the interpretation, we actually wrote a very thorough blog post. Because again, no one size fits all. We have various different quantization schemes. We can quantize very different parts of the model, from weights to activations to cross-GPU communication. They can use different quantization schemes or be consistent across the board. And again, it's a trade-off. It's a trade-off across this three-dimensional quality, latency, and cost. And for our customers, we actually let them find the best optimized point. And we have a very thorough evaluation process to pick that point. But for self-serve, there's only one point to pick. There's no customization available. So of course, it depends on what we learn from talking with many customers. We had to pick one point. And in the end, AA (Artificial Analysis) later published a quality measure. And we actually looked really good. 
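The quantization trade-off under discussion can be made concrete with a minimal sketch. This is generic symmetric per-tensor int8 weight quantization, not Fireworks' actual scheme (the transcript notes they quantize different parts of the model with different schemes, including activations and cross-device communication):

```python
import numpy as np

# Symmetric per-tensor int8 weight quantization: one scale maps the
# largest-magnitude weight to +/-127. Real serving stacks typically use
# finer-grained schemes (per-channel scales, activation quantization).

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
max_err = np.abs(w - dequantize(q, scale)).max()
# int8 storage is 4x smaller than float32; per-weight rounding error
# is bounded by scale / 2
```

The "three-dimensional trade-off" shows up directly: the int8 tensor is 4x smaller and faster to move, at the price of a bounded rounding error whose impact on quality has to be measured, not assumed.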
So that's why what I mean is, I will leave the evaluation of quality or performance to third parties and work with them to find the most fair benchmark. And I think that's a good approach, a good methodology. But I'm not a fan of an approach of calling out specific names and critiquing other competitors in a very biased way.Swyx [00:48:55]: That happens with databases as well. I think you're the more politically correct one. And then Dima is the more... Something like this. It's you on Twitter.Lin [00:49:11]: It's like the Russian... We partner. We play different roles.Swyx [00:49:20]: Another one that I wanted to... This is just the last one on the competition side. There's a perception of price wars in hosting open source models. And we talked about the competitiveness in the market. Do you aim to make margin on open source models?Lin [00:49:38]: Oh, absolutely, yes. But I think it really... When we think about pricing, it really needs to correlate with the value we're delivering. If the value is limited, or there are a lot of people delivering the same value, there's no differentiation. There's only one way to go. It's going down through competition. If I take a big step back, we're more compared with closed-model providers' APIs, right? The closed-model providers' cost structure is even more interesting, because we don't bear any training costs. We focus on inference optimization, and that's where we continue to add a lot of product value. So that's how we think about product. But the closed-source API providers, the model providers, they bear a lot of training costs. And they need to amortize the training costs into the inference. 
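The amortization point can be put in back-of-envelope numbers. Everything below is invented for illustration; none of these figures come from OpenAI, Fireworks, or the transcript:

```python
# Back-of-envelope: a closed-model provider must amortize one-off training
# spend into its per-token price; a host of open-weight models does not.
# All numbers are invented for illustration.

training_cost_usd = 100e6       # one-off training spend
lifetime_tokens = 1e15          # inference tokens to amortize it over
serving_cost_per_mtok = 0.40    # pure inference cost per million tokens

amortized_per_mtok = training_cost_usd / (lifetime_tokens / 1e6)
closed_floor = serving_cost_per_mtok + amortized_per_mtok  # must recover training
open_floor = serving_cost_per_mtok                         # training cost is zero
```

With these made-up numbers the closed provider's price floor is 25% higher than the open-weight host's, which is the structural asymmetry Lin is pointing at.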
So that creates very interesting dynamics of, yeah, if we match pricing there, then how they are going to make money is very, very interesting.Swyx [00:50:37]: So for listeners, OpenAI's 2024: $4 billion in revenue, $3 billion in compute training, $2 billion in compute inference, $1 billion in research compute amortization, and $700 million in salaries. So that is like...Swyx [00:50:59]: I mean, a lot of R&D.Lin [00:51:01]: Yeah, so I think Meta is basically like, make it zero. So that's a very, very interesting dynamic we're operating within. But coming back to inference, so we are, again, as I mentioned, our product is, we are a platform. We're not just a single-model-as-a-service provider like many other inference providers, where they're providing a single model. We have our optimizer to highly customize towards your inference workload. We have a compound AI system that significantly simplifies your interaction to get high quality and low latency, low cost. So those are all very different from other providers.Alessio [00:51:38]: What do people not know about the work that you do? I guess people are like, okay, Fireworks, you run models very quickly. You have the function model. Is there any kind of underrated part of Fireworks that more people should try?Lin [00:51:51]: Yeah, actually, one user posted on x.com. He mentioned, oh, actually, Fireworks allows me to upload a LoRA adapter to the served model and use it at the same cost as the base model. Nobody else has provided that. That's because we have something very special: we rolled out multi-LoRA last year, actually. And we have had this function for a long time. And many people have been using it, but it's not well known that, oh, if you fine-tune your model, you don't need to use on-demand. If you fine-tuned your model with LoRA, you can upload your LoRA adapter and we deploy it as if it's a new model. 
And then you get your endpoint and you can use that directly, but at the same cost as the base model. So I'm happy that user is marketing it for us. He discovered that feature, but we have had that since last year. So I think the feedback to me is, we have a lot of very, very good features, as Sean just mentioned.Swyx [00:52:57]: I'm the advisor to the company, and I didn't know that you had speculative decoding released.Lin [00:53:02]: We have had prompt caching since way back last year also. We have many, yeah. So I think that is one of the underrated features. And if you're a developer using our self-serve platform, please try it out.Swyx [00:53:16]: The LoRA thing is interesting, because the reason people add additional costs to it is not that they feel like charging people. Normally in LoRA serving setups, there is a cost to loading those weights and dedicating a machine to that inference. How come you can avoid it?Lin [00:53:36]: Yeah, so this is our technique called multi-LoRA. We basically have many LoRA adapters share the same base model. And basically we significantly reduce the memory footprint of serving. And one base model can sustain a hundred to a thousand LoRA adapters. And then all these different LoRA adapters, we can direct their traffic to the same base model, where the base model dominates the cost. So that's how we can advertise it that way, and that's how we keep the tokens-per-dollar, the per-million-token pricing, the same as the base model.Swyx [00:54:13]: Awesome. 
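The multi-LoRA idea Lin describes can be sketched with plain matrices. Shapes and counts below are illustrative, not Fireworks' internals: N adapters share one base weight matrix, and each adapter contributes only two small low-rank factors, so the incremental memory per adapter is tiny.

```python
import numpy as np

# Why multi-LoRA serving is cheap: all adapters share one base weight matrix,
# and each adapter adds only two small low-rank factors.
# Shapes are illustrative, not any real model's.

d, r, n_adapters = 4096, 8, 100
base = np.zeros((d, d), dtype=np.float32)      # shared by every adapter
adapters = {
    f"user-{i}": (np.zeros((d, r), dtype=np.float32),   # A: d x r
                  np.zeros((r, d), dtype=np.float32))   # B: r x d
    for i in range(n_adapters)
}

def forward(x, adapter_id):
    a, b = adapters[adapter_id]
    # y = x W + x A B: the shared base matmul dominates the cost
    return x @ base + (x @ a) @ b

base_params = base.size
adapter_params = sum(a.size + b.size for a, b in adapters.values())
```

Here all 100 adapters together hold far fewer parameters than one extra copy of the base matrix, which is why per-adapter traffic can be priced the same as the base model.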
Is there anything that you want to request from the community, or that you're looking for model-wise or tooling-wise, that you think someone should be working on?Lin [00:54:23]: Yeah, so we really want to get a lot of feedback from application developers who are starting to build on GenAI, who have already adopted it or are starting to think about new use cases and so on, to try out Fireworks first. And let us know what works out really well for you and what is your wishlist, and what sucks, right? What is not working out for you, so we can continue to improve. And for our new product launches, typically we want to launch to a small group of people. Usually we launch on our Discord first, to have a set of people use them first. So please join our Discord channel. We have a lot of communication going on there. Again, you can also give us feedback there. We're starting office hours for you to directly talk with our DevRel and engineers and exchange notes.Alessio [00:55:17]: And you're hiring across the board?Lin [00:55:18]: We're hiring across the board. We're hiring front-end engineers, cloud infrastructure engineers, back-end system optimization engineers, applied researchers, like researchers who have done post-training, who have done a lot of fine-tuning, and so on.Swyx [00:55:34]: That's it. Thank you. Thanks for having us. Get full access to Latent Space at www.latent.space/subscribe

Tech News Weekly (MP3)
TNW 360: GitHub Copilot Goes Multi-Model - Amazon Echo Graveyard, Mac Week, Genmoji

Tech News Weekly (MP3)

Play Episode Listen Later Oct 31, 2024 72:40


Like Google, Amazon has a list of products the tech company discontinued. Is one of your favorites on the list? What Apple announced this past "Mac Week." What GitHub announced at its GitHub Universe 2024 event. And how Apple's Genmoji generator will operate when it's released with the future iOS 18.2 release. Mikah Sargent talks about a great article from The Verge highlighting a handful of Amazon Echo products that the tech company has discontinued over the years. Dan Moren of SixColors joins the show again to discuss the new M4 Mac products that the company announced this past week. Martin Woodward, VP of DevRel for GitHub, stops by to talk about some of the new things that GitHub announced at its GitHub Universe 2024 event, including the new GitHub Spark. And Mikah talks about Apple's Genmoji service that is slated to come in iOS 18.2 and some of the things you can and cannot do with it. Host: Mikah Sargent Guests: Dan Moren and Martin Woodward Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: uscloud.com flashpoint.io veeam.com bigid.com/tnw

Tech News Weekly (Video HI)
TNW 360: GitHub Copilot Goes Multi-Model - Amazon Echo Graveyard, Mac Week, Genmoji

Tech News Weekly (Video HI)

Play Episode Listen Later Oct 31, 2024 72:40


Like Google, Amazon has a list of products the tech company discontinued. Is one of your favorites on the list? What Apple announced this past "Mac Week." What GitHub announced at its GitHub Universe 2024 event. And how Apple's Genmoji generator will operate when it's released with the future iOS 18.2 release. Mikah Sargent talks about a great article from The Verge highlighting a handful of Amazon Echo products that the tech company has discontinued over the years. Dan Moren of SixColors joins the show again to discuss the new M4 Mac products that the company announced this past week. Martin Woodward, VP of DevRel for GitHub, stops by to talk about some of the new things that GitHub announced at its GitHub Universe 2024 event, including the new GitHub Spark. And Mikah talks about Apple's Genmoji service that is slated to come in iOS 18.2 and some of the things you can and cannot do with it. Host: Mikah Sargent Guests: Dan Moren and Martin Woodward Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: uscloud.com flashpoint.io veeam.com bigid.com/tnw

All TWiT.tv Shows (MP3)
Tech News Weekly 360: GitHub Copilot Goes Multi-Model

All TWiT.tv Shows (MP3)

Play Episode Listen Later Oct 31, 2024 72:40


Like Google, Amazon has a list of products the tech company discontinued. Is one of your favorites on the list? What Apple announced this past "Mac Week." What GitHub announced at its GitHub Universe 2024 event. And how Apple's Genmoji generator will operate when it's released with the future iOS 18.2 release. Mikah Sargent talks about a great article from The Verge highlighting a handful of Amazon Echo products that the tech company has discontinued over the years. Dan Moren of SixColors joins the show again to discuss the new M4 Mac products that the company announced this past week. Martin Woodward, VP of DevRel for GitHub, stops by to talk about some of the new things that GitHub announced at its GitHub Universe 2024 event, including the new GitHub Spark. And Mikah talks about Apple's Genmoji service that is slated to come in iOS 18.2 and some of the things you can and cannot do with it. Host: Mikah Sargent Guests: Dan Moren and Martin Woodward Download or subscribe to this show at https://twit.tv/shows/tech-news-weekly. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsors: uscloud.com flashpoint.io veeam.com bigid.com/tnw

Screaming in the Cloud
Replay - Chaos Engineering for Gremlins with Jason Yee

Screaming in the Cloud

Play Episode Listen Later Oct 31, 2024 31:22


On this Replay, we're revisiting our conversation with Jason Yee, Staff Technical Advocate at Datadog. At the time of this recording, he was the Director of Advocacy at Gremlin, an enterprise-grade chaos engineering platform. Join Corey and Jason as they talk about what Gremlin is and what a director of advocacy does, making chaos engineering more accessible for the masses, how it's hard to calculate ROI for developer advocates, how developer advocacy and DevRel change from one company to the next, why developer advocates need to focus on meaningful connections, why you should start chaos engineering as a mental game, qualities to look for in good developer advocates, the Break Things On Purpose podcast, and more.
Show Highlights
(0:00) Intro
(0:31) Backblaze sponsor read
(0:58) The role of a Director of Advocacy
(3:34) DevRel and twisting job definitions
(5:50) How DevRel confusion manifests into marketing
(11:37) Being able to measure and define a team's success
(13:42) Building respect and a community in tech
(15:22) Effectively courting a community
(18:02) The challenges of Jason's job
(21:06) Planning for failure modes
(22:30) Determining your value in tech
(25:41) The growth of Gremlin
(30:16) Where you can find more from Jason
About Jason Yee
Jason Yee is Staff Technical Advocate at Datadog, where he works to inspire developers and ops engineers with the power of metrics and monitoring. Previously, he was the community manager for DevOps & Performance at O'Reilly Media and a software engineer at MongoDB.
Links
Break Things On Purpose podcast: https://www.gremlin.com/podcast/
Twitter: https://twitter.com/gitbisect
Original episode: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/chaos-engineering-for-gremlins-with-jason-yee/
Sponsor
Backblaze: https://www.backblaze.com/

Overtired
421: Give Yourself A Five

Overtired

Play Episode Listen Later Oct 28, 2024 66:17


Brett, Jeff, and the fabulous Jay Miller dive into hilarious and chaotic tales of surviving corporate reorgs, handling ADHD, and wrestling with DevRel magic. From API designing demands to repeated layoffs, they share hearty laughs and…

Programming Throwdown
176: MLOps at SwampUp

Programming Throwdown

Play Episode Listen Later Sep 24, 2024 118:37


James Morse, Software Engineer at Cisco: System Administrator to DevOps; the difference between DevOps and MLOps; getting started with DevOps.
Luke Marsden, CEO of Helix ML: how to start a business at 15 years old; BTRFS vs ZFS; MLOps as the intersection of software, DevOps, and AI; fine-tuning AI in the cloud; advice for folks interested in MLOps.
Yuval Fernbach, CTO of MLOps at JFrog: starting Qwak; going from a Jupyter notebook to production; the ML supply chain; getting started in machine learning.
Stephen Chin, VP of DevRel at Neo4j: developer relations as a job; what is a large language model?; knowledge graphs and the linkage model; how to use graph databases in the enterprise; how to get into MLOps.
★ Support this podcast on Patreon ★