Podcast appearances and mentions of Nat Friedman

  • 71 podcasts
  • 94 episodes
  • 46m average episode duration
  • 1 new episode per month
  • Latest episode: Apr 2, 2025
Popularity of Nat Friedman podcast mentions, 2017–2024 (chart not reproduced)


Latest podcast episodes about Nat Friedman

Tetragrammaton with Rick Rubin
Aravind Srinivas

Apr 2, 2025 · 137:27


Aravind Srinivas is the co-founder and CEO of Perplexity AI, the world's first generally available conversational answer engine. Founded in August 2022 with Johnny Ho, Andy Konwinski, and Denis Yarats, Perplexity delivers accurate, sourced answers to any question. Born and raised in Chennai, India, Srinivas moved to the U.S. in 2017 and earned a PhD in Computer Science from the University of California, Berkeley, where he also taught a course in Deep Unsupervised Learning. He previously held prominent research roles at OpenAI, DeepMind, and Google, and he has positioned Perplexity as a leader in AI-powered information access with backing from top investors including Jeff Bezos, Elad Gil, Nat Friedman, and many others.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Today's episode is with Paul Klein, founder of Browserbase. We talked about building browser infrastructure for AI agents, the future of agent authentication, and their open source framework Stagehand.

* [00:00:00] Introductions
* [00:04:46] AI-specific challenges in browser infrastructure
* [00:07:05] Multimodality in AI-Powered Browsing
* [00:12:26] Running headless browsers at scale
* [00:18:46] Geolocation when proxying
* [00:21:25] CAPTCHAs and Agent Auth
* [00:28:21] Building "User take over" functionality
* [00:33:43] Stagehand: AI web browsing framework
* [00:38:58] OpenAI's Operator and computer use agents
* [00:44:44] Surprising use cases of Browserbase
* [00:47:18] Future of browser automation and market competition
* [00:53:11] Being a solo founder

Transcript

Alessio [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

swyx [00:00:12]: Hey, and today we are very blessed to have our friend, Paul Klein the Fourth, CEO of Browserbase. Welcome.

Paul [00:00:21]: Thanks guys. Yeah, I'm happy to be here. I've been lucky to know both of you for like a couple of years now, I think. So it's just like we're hanging out, you know, with three ginormous microphones in front of our face. It's totally normal hangout.

swyx [00:00:34]: Yeah. We've actually mentioned you on the podcast, I think, more often than any other Solaris tenant. Just because like you're one of the, you know, best performing, I think, LLM tool companies that have started up in the last couple of years.

Paul [00:00:50]: Yeah, I mean, it's been a whirlwind of a year, like Browserbase is actually pretty close to our first birthday. So we are one year old. And going from, you know, starting a company as a solo founder to... To, you know, having a team of 20 people, you know, a series A, but also being able to support hundreds of AI companies that are building AI applications that go out and automate the web. It's just been like, really cool. It's been happening a little too fast. I think like collectively as an AI industry, let's just take a week off together. I took my first vacation actually two weeks ago, and Operator came out on the first day, and then a week later, DeepSeek came out. And I'm like on vacation trying to chill. I'm like, we got to build with this stuff, right? So it's been a breakneck year. But I'm super happy to be here and like talk more about all the stuff we're seeing. And I'd love to hear kind of what you guys are excited about too, and share that, you know?

swyx [00:01:39]: Where to start? So people, you've done a bunch of podcasts. I think I strongly recommend Jack Bridger's Scaling DevTools, as well as Turner Novak's The Peel. And, you know, I'm sure there's others. So you covered your Twilio story in the past, talked about StreamClub, you got acquired by Mux, and then you left to start Browserbase. So maybe we just start with what is Browserbase? Yeah.

Paul [00:02:02]: Browserbase is the web browser for your AI. We're building headless browser infrastructure, which are browsers that run in a server environment that's accessible to developers via APIs and SDKs. It's really hard to run a web browser in the cloud. You guys are probably running Chrome on your computers, and that's using a lot of resources, right? So if you want to run a web browser or thousands of web browsers, you can't just spin up a bunch of lambdas. You actually need to use a secure containerized environment.
You have to scale it up and down. It's a stateful system. And that infrastructure is, like, super painful. And I know that firsthand, because at my last company, StreamClub, I was CTO, and I was building our own internal headless browser infrastructure. That's actually why we sold the company, is because Mux really wanted to buy our headless browser infrastructure that we'd built. And it's just a super hard problem. And I actually told my co-founders, I would never start another company unless it was a browser infrastructure company. And it turns out that's really necessary in the age of AI, when AI can actually go out and interact with websites, click on buttons, fill in forms. You need AI to do all of that work in an actual browser running somewhere on a server. And Browserbase powers that.

swyx [00:03:08]: While you're talking about it, it occurred to me, not that you're going to be acquired or anything, but it occurred to me that it would be really funny if you became the Nikita Beer of headless browser companies. You just have one trick, and you make browser companies that get acquired.

Paul [00:03:23]: I truly do only have one trick. I'm screwed if it's not for headless browsers. I'm not a Go programmer. You know, I'm in AI Grant. You know, Browserbase is an AI Grant company. But we were the only company in that AI Grant batch that used zero dollars on AI spend. You know, we're purely an infrastructure company. So as much as people want to ask me about reinforcement learning, I might not be the best guy to talk about that. But if you want to ask about headless browser infrastructure at scale, I can talk your ear off. So that's really my area of expertise. And it's a pretty niche thing. Like, nobody has done what we're doing at scale before. So we're happy to be the experts.

swyx [00:03:59]: You do have an AI thing, Stagehand. We can talk about the sort of core of Browserbase first, and then maybe Stagehand. Yeah, Stagehand is kind of the web browsing framework. Yeah.

What is Browserbase? Headless Browser Infrastructure Explained

Alessio [00:04:10]: Yeah. Yeah. And maybe how you got to Browserbase and what problems you saw. So one of the first things I worked on as a software engineer was integration testing. Sauce Labs was kind of like the main thing at the time. And then we had Selenium, we had Playwright, we had all these different browser things. But it's always been super hard to do. So obviously you've worked on this before. When you started Browserbase, what were the challenges? What were the AI-specific challenges that you saw versus, there's kind of like all the usual running browser at scale in the cloud, which has been a problem for years. What are like the AI unique things that you saw that like traditional browser infrastructure just didn't cover? Yeah.

AI-specific challenges in browser infrastructure

Paul [00:04:46]: First and foremost, I think back to like the first thing I did as a developer, like as a kid when I was writing code, I wanted to write code that did stuff for me. You know, I wanted to write code to automate my life. And I'd do that probably by using curl or Beautiful Soup to fetch data from a website. And I think I still do that now that I'm in the cloud. And the other thing that I think is a huge challenge for me is that you can't just curl a website and parse that data. And we all know that now like, you know, taking HTML and plugging that into an LLM, you can extract insights, you can summarize.
So it was very clear that now, like, dynamic web scraping became very possible with the rise of large language models, or a lot easier. And that was like a clear reason why there's been more usage of headless browsers, which are necessary because a lot of modern websites don't expose all of their page content via a simple HTTP request. You know, they actually do require you to run JavaScript on the page to hydrate the content. Airbnb is a great example. You go to airbnb.com. A lot of that content on the page isn't there until after they run the initial hydration. So you can't just scrape it with a curl. You need to have some JavaScript run. And a browser is that JavaScript engine that's going to actually run all those requests on the page. So web data retrieval was definitely one driver of starting Browserbase, and the rise of being able to summarize that with an LLM. Also, I was familiar with if I wanted to automate a website, I could write one script and that would work for one website. It was very static and deterministic. But the web is non-deterministic. The web is always changing. And until we had LLMs, there was no way to write scripts that you could write once that would run on any website, that would change with the structure of the website. Click the login button. It could mean something different on many different websites. And LLMs allow us to generate code on the fly to actually control that. So I think that rise of writing the generic automation scripts that can work on many different websites, to me, made it clear that browsers are going to be a lot more useful, because now you can automate a lot more things without writing custom scripts for each one. If you wanted to write a script to book a demo call on 100 websites, previously, you had to write 100 scripts. Now you write one script that uses LLMs to generate that script. That's why we built our web browsing framework, Stagehand, which does a lot of that work for you. But those two things, web data collection and then enhanced automation of many different websites, it just felt like big drivers for more browser infrastructure that would be required to power these kinds of features.

Alessio [00:07:05]: And was multimodality also a big thing?

Paul [00:07:08]: Now you can use the LLMs to look, even though the text in the DOM might not be as friendly. Maybe my hot take is I was always kind of like, I didn't think vision would be as big of a driver. For UI automation, I felt like, you know, HTML is structured text and large language models are good with structured text. But it's clear that these computer use models are often vision driven, and they've been really pushing things forward. So definitely being multimodal, like rendering the page is required to take a screenshot to give that to a computer use model to take actions on a website. And it's just another win for browsers. But I'll be honest, that wasn't what I was thinking early on. I didn't even think that we'd get here so fast with multimodality. I think we're going to have to get back to multimodal and vision models.
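A minimal sketch of the hydration point Paul describes above: a plain HTTP fetch returns only the initial HTML, while a headless browser runs the page's JavaScript first. Playwright is one of the frameworks named in the episode; the URL is just an example.

```ts
import { chromium } from "playwright";

// Plain HTTP fetch: returns the initial, pre-hydration HTML only. For a
// heavily hydrated site, most of the visible content is NOT in this payload.
const raw = await fetch("https://www.airbnb.com/").then((r) => r.text());

// Headless browser: executes the page's JavaScript, so the content exists.
const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("https://www.airbnb.com/", { waitUntil: "networkidle" });
const hydrated = await page.content(); // post-hydration HTML
await browser.close();

// The hydrated HTML (or extracted page text) is what you would hand to an
// LLM for extraction or summarization, per the discussion above.
console.log(raw.length, hydrated.length);
```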
swyx [00:07:50]: This is one of those things where I forgot to mention in my intro that I'm an investor in Browserbase. And I remember that when you pitched to me, like a lot of the stuff that we have today, like, wasn't on the original conversation. But I did have my original thesis, which was something that we've talked about on the podcast before, which is take the GPT store, the custom GPT store, all the every single checkbox and plugin is effectively a startup. And this was the browser one. I think the main hesitation, I think I actually took a while to get back to you. The main hesitation was that there were others. Like you're not the first headless browser startup. It's not even your first headless browser startup. There's always a question of like, will you be the category winner in a place where there's a bunch of incumbents, to be honest, that are bigger than you? They're just not targeted at the AI space. They don't have the backing of Nat Friedman. And there's a bunch of like, you're here in Silicon Valley. They're not. I don't know.

Paul [00:08:47]: I don't know if that's, that was it, but like, there was a, yeah, I mean, like, I think I tried all the other ones and I was like, really disappointed. Like my background is from working at great developer tools companies, and nothing had like the Vercel-like experience. Um, like our biggest competitor actually is partly owned by private equity and they just jacked up their prices quite a bit. And the dashboard hasn't changed in five years. And I actually used them at my last company and tried them and I was like, oh man, like there really just needs to be something that's like the experience of these great infrastructure companies, like Stripe, like Clerk, like Vercel that I use and love, but oriented towards this kind of like more specific category, which is browser infrastructure, which is really technically complex. Like a lot of stuff can go wrong on the internet when you're running a browser. The internet is very vast. There's a lot of different configurations. Like there's still websites that only work with Internet Explorer out there. How do you handle that when you're running your own browser infrastructure? These are the problems that we have to think about and solve at Browserbase. And it's, it's certainly a labor of love, but I built this for me, first and foremost. I know it's super cheesy and everyone says that for like their startups, but it really, truly was for me. If you look at like the talks I've done even before Browserbase, I'm just like really excited to try and build a category defining infrastructure company. And it's, it's rare to have a new category of infrastructure exist. We're here in the Chroma offices and like, you know, vector databases is a new category of infrastructure. Is it, is it, I mean, we can, we're in their office, so, you know, we can, we can debate that one later. That is one.

Multimodality in AI-Powered Browsing

swyx [00:10:16]: That's one of the industry debates.

Paul [00:10:17]: I guess we go back to the LLMOS talk that Karpathy gave way long ago. And like the browser box was very clearly there and it seemed like the people who were building in this space also agreed that browsers are a core primitive of infrastructure for the LLMOS that's going to exist in the future. And nobody was building something there that I wanted to use. So I had to go build it myself.

swyx [00:10:38]: Yeah. I mean, exactly that talk that, that honestly, that diagram, every box is a startup and there's the code box and then there's the. The browser box. I think at some point they will start clashing there. There's always the question of the, are you a point solution or are you the sort of all in one?
And I think the point solutions tend to win quickly, but then the all-in-ones have a very tight cohesive experience. Yeah. Let's talk about just the hard problems of Browserbase you have on your website, which is beautiful. Thank you. Was there an agency that you used for that? Yeah. Herve.paris.

Paul [00:11:11]: They're amazing. Herve.paris. Yeah. It's H-E-R-V-E. I highly recommend for developers, developer tools founders, to work with consumer agencies because they end up building beautiful things and the Parisians know how to build beautiful interfaces. So I got to give props.

swyx [00:11:24]: And chat apps, apparently are, they are very fast. Oh yeah. The Mistral chat. Yeah. Mistral. Yeah.

Paul [00:11:31]: Le Chat.

swyx [00:11:31]: Le Chat. And then your videos as well, it was professionally shot, right? The series A video. Yeah.

Alessio [00:11:36]: Nico did the videos. He's amazing. Not the initial video that you shot, the new one. First one was Austin.

Paul [00:11:41]: Another, another video pretty surprised. But yeah, I mean, like, I think when you think about how you talk about your company, you have to think about the way you present yourself. It's, you know, as a developer, you think you evaluate a company based on like the API reliability and the P95, but a lot of developers say, is the website good? Is the message clear? Do I, like, trust this founder I'm building my whole feature on? So I've tried to nail that as well as like the reliability of the infrastructure. You're right. It's very hard. And there's a lot of kind of footguns that you run into when running headless browsers at scale. Right.

Competing with Existing Headless Browser Solutions

swyx [00:12:10]: So let's pick one. You have eight features here. Seamless integration. Scalability. Fast, or speed. Secure. Observable. Stealth. That's interesting. Extensible and developer first. What comes to your mind as like the top two, three hardest ones? Yeah.

Running headless browsers at scale

Paul [00:12:26]: I think just running headless browsers at scale is like the hardest one. And maybe can I nerd out for a second? Is that okay? I heard this is a technical audience, so I'll talk to the other nerds. Whoa. They were listening. Yeah. They're upset. They're ready. The AGI is angry. Okay. So. So how do you run a browser in the cloud? Let's start with that, right? So let's say you're using a popular browser automation framework like Puppeteer, Playwright, and Selenium. Maybe you've written some code locally on your computer that opens up Google. It finds the search bar and then types in, you know, search for Latent Space and hits the search button. That script works great locally. You can see the little browser open up. You want to take that to production. You want to run the script in a cloud environment. So when your laptop is closed, your browser is doing something. Well, we use Amazon. You know, the first thing I'd reach for is probably like some sort of serverless infrastructure. I would probably try and deploy on a Lambda. But Chrome itself is too big to run on a Lambda. It's over 250 megabytes. So you can't easily start it on a Lambda. So you maybe have to use something like Lambda layers to squeeze it in there. Maybe use a different Chromium build that's lighter. And you get it on the Lambda. Great. It works. But it runs super slowly. It's because Lambdas are very like resource limited. They only run like with one vCPU.
You can run one process at a time. Remember, Chromium is super beefy. It's barely running on my MacBook Air. I'm still downloading it from a pre-run. Yeah, from the test earlier, right? I'm joking. But it's big, you know? So like Lambda, it just won't work really well. Maybe it'll work, but you need something faster. Your users want something faster. Okay. Well, let's put it on a beefier instance. Let's get an EC2 server running. Let's throw Chromium on there. Great. Okay. I can, that works well with one user. But what if I want to run like 10 Chromium instances, one for each of my users? Okay. Well, I might need two EC2 instances. Maybe 10. All of a sudden, you have multiple EC2 instances. This sounds like a problem for Kubernetes and Docker, right? Now, all of a sudden, you're using ECS or EKS, the Kubernetes or container solutions by Amazon. You're spinning up and down containers, and you're spending a whole engineer's time on kind of maintaining this stateful distributed system. Those are some of the worst systems to run because when it's a stateful distributed system, it means that you are bound by the connections to that thing. You have to keep the browser open while someone is working with it, right? That's just a painful architecture to run. And there's all this other little gotchas with Chromium, like Chromium, which is the open source version of Chrome, by the way. You have to install all these fonts. You want emojis working in your browsers because your vision model is looking for the emoji. You need to make sure you have the emoji fonts. You need to make sure you have all the right extensions configured, like, oh, do you want ad blocking? How do you configure that? How do you actually record all these browser sessions? Like it's a headless browser. You can't look at it. So you need to have some sort of observability. Maybe you're recording videos and storing those somewhere. It all kind of adds up to be this just giant monster piece of your project when all you wanted to do was run a lot of browsers in production for this little script to go to google.com and search. And when I see a complex distributed system, I see an opportunity to build a great infrastructure company. And we really abstract that away with Browserbase where our customers can use these existing frameworks, Playwright, Puppeteer, Selenium, or our own Stagehand and connect to our browsers in a serverless-like way. And control them, and then just disconnect when they're done. And they don't have to think about the complex distributed system behind all of that. They just get a browser running anywhere, anytime. Really easy to connect to.
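What the connect-control-disconnect model looks like from the developer side, as a sketch. Playwright's connectOverCDP is a real API; the WebSocket endpoint and API key here are placeholders, not Browserbase's actual interface.

```ts
import { chromium } from "playwright";

// Hypothetical endpoint for a remotely hosted browser; a real provider
// would hand you a signed, per-session WebSocket URL.
const wsEndpoint = "wss://browsers.example.com/session?apiKey=YOUR_KEY";

// Connect to an already-running remote browser over the Chrome DevTools
// Protocol instead of launching Chromium locally.
const browser = await chromium.connectOverCDP(wsEndpoint);
const context = browser.contexts()[0] ?? (await browser.newContext());
const page = context.pages()[0] ?? (await context.newPage());

await page.goto("https://www.google.com");
// ...drive the page with your existing Playwright code...

// Disconnect; the provider tears down or recycles the browser.
await browser.close();
```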
swyx [00:15:55]: I'm sure you have questions. My standard question with anything, so essentially you're a serverless browser company, and there's been other serverless things that I'm familiar with in the past, serverless GPUs, serverless website hosting. That's where I come from with Netlify. One question is just like, you promised to spin up thousands of servers. You promised to spin up thousands of browsers in milliseconds. I feel like there's no real solution that does that yet. And I'm just kind of curious how. The only solution I know, which is to kind of keep a kind of warm pool of servers around, which is expensive, but maybe not so expensive because it's just CPUs. So I'm just like, you know. Yeah.

Browsers as a Core Primitive in AI Infrastructure

Paul [00:16:36]: You nailed it, right? I mean, how do you offer a serverless-like experience with something that is clearly not serverless, right? And the answer is, you need to be able to run... We run many browsers on single nodes. We use Kubernetes at Browserbase. So we have many pods that are being scheduled. We have to predictably schedule them up or down. Yes, thousands of browsers in milliseconds is the best case scenario. If you hit us with 10,000 requests, you may hit a slower cold start, right? So we've done a lot of work on predictive scaling and being able to kind of route stuff to different regions, where we have multiple regions of Browserbase with different pools available. You can also pick the region you want to go to based on like lower round-trip-time latency. It's very important with these types of things. There's a lot of requests going over the wire. So for us, like having a VM like Firecracker powering everything under the hood allows us to be super nimble and spin things up or down really quickly with strong multi-tenancy. But in the end, this is like the complex infrastructural challenges that we have to kind of deal with at Browserbase. And we have a lot more stuff on our roadmap to allow customers to have more levers to pull to exchange, do you want really fast browser startup times or do you want really low costs? And if you're willing to be more flexible on that, we may be able to kind of like work better for your use cases.

swyx [00:17:44]: Since you used Firecracker, shouldn't Fargate do that for you or did you have to go lower level than that? We had to go lower level than that.

Paul [00:17:51]: I find this a lot with Fargate customers, which is alarming for Fargate. We used to be a giant Fargate customer. Actually, the first version of Browserbase was ECS and Fargate. I think we were actually the largest Fargate customer in our region for a little while. No, what? Yeah, seriously. And unfortunately, it's a great product, but I think if you're an infrastructure company, you actually have to have a deeper level of control over these primitives. I think it's the same thing is true with databases. We've used other database providers and I think-

swyx [00:18:21]: Yeah, serverless Postgres.

Paul [00:18:23]: Shocker. When you're an infrastructure company, you're on the hook if any provider has an outage. And I can't tell my customers like, hey, we went down because so-and-so went down. That's not acceptable. So for us, we've really moved to bringing things internally. It's kind of opposite of what we preach. We tell our customers, don't build this in-house, but then we're like, we build a lot of stuff in-house. But I think it just really depends on what is in the critical path. We try and have deep ownership of that.

Alessio [00:18:46]: On the distributed location side, how does that work for the web where you might get sort of different content in different locations, but the customer is expecting, you know, if you're in the US, I'm expecting the US version. But if you're spinning up my browser in France, I might get the French version. Yeah.

Paul [00:19:02]: Yeah. That's a good question. Well, generally, like on the localization, there is a thing called locale in the browser. You can set like what your locale is, if you're like in the en-US locale or not. But some things do IP-based routing. And in that case, you may want to have a proxy.
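A sketch of the locale-versus-IP distinction Paul draws, using Playwright's real proxy and context options; the proxy server and credentials are placeholders.

```ts
import { chromium } from "playwright";

// Route traffic through a US egress proxy, for sites that geo-detect by IP.
const browser = await chromium.launch({
  proxy: {
    server: "http://us.proxy.example.com:8080", // placeholder proxy endpoint
    username: "user",
    password: "pass",
  },
});

// Locale is a separate, browser-level signal, independent of your IP.
const context = await browser.newContext({
  locale: "en-US",
  timezoneId: "America/New_York",
});
const page = await context.newPage();
await page.goto("https://example.com");
```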
Like let's say you're running something in Europe, but you want to make sure you're showing up from the US. You may want to use one of our proxy features so you can turn on proxies to say like, make sure these connections always come from the United States, which is necessary too, because when you're browsing the web, you're coming from like a, you know, data center IP, and that can make things a lot harder to browse the web. So we do have kind of like this proxy super network. Yeah. We have a proxy for you based on where you're going, so you can reliably automate the web. But if you get scheduled in Europe, that doesn't happen as much. We try and schedule you as close to, you know, your origin that you're trying to go to. But generally you have control over the regions you can put your browsers in. So you can specify West one or East one or Europe. We only have one region of Europe right now, actually. Yeah.

Alessio [00:19:55]: What's harder, the browser or the proxy? I feel like to me, it feels like actually proxying reliably at scale is much harder than spinning up browsers at scale. I'm curious. It's all hard.

Paul [00:20:06]: It's layers of hard, right? Yeah. I think it's different levels of hard. I think the thing with the proxy infrastructure is that we work with many different web proxy providers and some are better than others. Some have good days, some have bad days. And our customers who've built browser infrastructure on their own, they have to go and deal with sketchy actors. Like first they figure out their own browser infrastructure and then they got to go buy a proxy. And it's like you can pay in Bitcoin and it just kind of feels a little sus, right? It's like you're buying drugs when you're trying to get a proxy online. We have like deep relationships with these counterparties. We're able to audit them and say, is this proxy being sourced ethically? Like it's not running on someone's TV somewhere. Is it free range? Yeah. Free range organic proxies, right? Right. We do a level of diligence. We're SOC 2. So we have to understand what is going on here. But then we're able to make sure that like we route around proxy providers not working. There's proxy providers who will just, the proxy will stop working all of a sudden. And then if you don't have redundant proxying on your own browsers, that's hard down for you or you may get some serious impacts there. With us, like we intelligently know, hey, this proxy is not working. Let's go to this one. And you can kind of build a network of multiple providers to really guarantee the best uptime for our customers. Yeah. So you don't own any proxies? We don't own any proxies. You're right. The team has been saying who wants to like take home a little proxy server, but not yet. We're not there yet. You know?

swyx [00:21:25]: It's a very mature market. I don't think you should build that yourself. Like you should just be a super customer of them. Yeah. Scraping, I think, is the main use case for that. I guess. Well, that leads us into CAPTCHAs and also auth, but let's talk about CAPTCHAs. You had a little spiel that you wanted to talk about CAPTCHA stuff.

Challenges of Scaling Browser Infrastructure

Paul [00:21:43]: Oh, yeah. I was just, I think a lot of people ask, if you're thinking about proxies, you're thinking about CAPTCHAs too. I think it's the same thing. You can go buy CAPTCHA solvers online, but it's the same buying experience. It's some sketchy website, you have to integrate it.
It's not fun to buy these things, the docs are bad, and you can't really trust that they work. What Browserbase does is we integrate a bunch of different CAPTCHA solvers. We do some stuff in-house, but generally we just integrate with a bunch of known vendors and continually monitor and maintain these things and say, is this working or not? Can we route around it or not? These are CAPTCHA solvers. CAPTCHA solvers, yeah. Not CAPTCHA providers, CAPTCHA solvers. Yeah, sorry. CAPTCHA solvers. We really try and make sure all of that works for you. I think as a dev, if I'm buying infrastructure, I want it all to work all the time and it's important for us to provide that experience by making sure everything does work and monitoring it on our own. Yeah. Right now, the world of CAPTCHAs is tricky. I think AI agents in particular are very much ahead of the internet infrastructure. CAPTCHAs are designed to block all types of bots, but there are now good bots and bad bots. I think in the future, CAPTCHAs will be able to identify who a good bot is, hopefully via some sort of KYC. For us, we've been very lucky. We have very little to no known abuse of Browserbase because we really look into who we work with. And for certain types of CAPTCHA solving, we only allow them on certain types of plans because we want to make sure that we can know what people are doing, what their use cases are. And that's really allowed us to try and be an arbiter of good bots, which is our long term goal. I want to build great relationships with people like Cloudflare so we can agree, hey, here are these acceptable bots. We'll identify them for you and make sure we flag when they come to your website. This is a good bot, you know?

Alessio [00:23:23]: I see. And Cloudflare said they want to do more of this. So by default, if they think you're an AI bot, they're going to reject. I'm curious if you think this is something that is going to be at the browser level, or, I mean, the DNS level with Cloudflare seems more where it should belong. But I'm curious how you think about it.

Paul [00:23:40]: I think the web's going to change. You know, I think that the Internet as we have it right now is going to change. And we all need to just accept that the cat is out of the bag. And instead of kind of like wishing the Internet was like it was in the 2000s, where we could have free content online that wouldn't be scraped, it's just, it's not going to happen. And instead, we should think about, one, how can we change the models of, you know, information being published online so people can adequately commercialize it? But two, how do we rebuild applications that expect that AI agents are going to log in on their behalf? Those are the things that are going to allow us to kind of like identify good and bad bots. And I think the team at Clerk has been doing a really good job with this on the authentication side. I actually think that auth is the biggest thing that will prevent agents from accessing stuff, not CAPTCHAs. And I think there will be agent auth in the future. I don't know if it's going to happen from an individual company, but actually authentication providers that have a, you know, log-in-as-agent feature, where you put in your email, you'll get a push notification, say like, hey, your browser-based agent wants to log into your Airbnb. You can approve that and then the agent can proceed. That really circumvents the need for CAPTCHAs or logging in as you and sharing your password.
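A sketch of the OAuth-style "log in as agent" flow Paul imagines. No such standard exists yet; every type, field, and step below is invented purely for illustration.

```ts
// Hypothetical shape of an agent-auth grant; none of these types or
// endpoints exist today.
interface AgentAuthRequest {
  agentId: string;      // a distinct identity, paired to a human account
  humanAccount: string; // e.g. the user's email at the target site
  scopes: string[];     // e.g. ["bookings:create"], not blanket access
}

interface AgentAuthGrant {
  token: string;        // short-lived, scope-limited credential
  expiresAt: Date;
}

async function requestAgentAccess(req: AgentAuthRequest): Promise<AgentAuthGrant> {
  // 1. The site pushes an approval notification to the human.
  // 2. The human approves ("your agent wants to log into your Airbnb").
  // 3. The site returns a scoped, short-lived token for the agent session,
  //    so no CAPTCHA bypass or shared password is involved.
  throw new Error("illustrative only - no such API exists yet");
}
```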
I think agent auth is going to be one way we identify good bots going forward. And I think a lot of this CAPTCHA solving stuff is really a short-term problem as the internet kind of reorients itself around how it's going to work with agents browsing the web, just like people do. Yeah.

Managing Distributed Browser Locations and Proxies

swyx [00:24:59]: Stytch recently was on Hacker News for talking about agent experience, AX, which is a thing that Netlify is also trying to clone and coin and talk about. And we've talked about this on our previous episodes before in a sense that I actually think that's like maybe the only part of the tech stack that needs to be kind of reinvented for agents. Everything else can stay the same, CLIs, APIs, whatever. But auth, yeah, we need agent auth. And it's mostly like short-lived, like, it should be a distinct identity from the human, but paired. I almost think like in the same way that every social network should have your main profile and then your alt accounts or your Finsta, it's almost like, you know, every human token should be paired with the agent token and the agent token can go and do stuff on behalf of the human token, but not be presumed to be the human. Yeah.

Paul [00:25:48]: It's like, it's actually very similar to OAuth is what I'm thinking. And, you know, Reed from Stytch is an investor, Colin from Clerk, Octaventures, all investors in Browserbase because, like, I hope they solve this, because they'll make Browserbase's mission more possible. So we don't have to overcome all these hurdles, but I think it will be an OAuth-like flow where an agent will ask to log in as you, you'll approve the scopes. Like it can book an apartment on Airbnb, but it can't like message anybody. And then, you know, the agent will have some sort of like role-based access control within an application. Yeah. I'm excited for that.

swyx [00:26:16]: The tricky part is just, there's one layer of delegation here, which is like, you're auth-ing my user's user or something like that. I don't know if that's tricky or not. Does that make sense? Yeah.

Paul [00:26:25]: You know, actually at Twilio, I worked on the login, identity, and access management teams, right? So like I built Twilio's login page.

swyx [00:26:31]: You were an intern on that team and then you became the lead in two years? Yeah.

Paul [00:26:34]: Yeah. I started as an intern in 2016 and then I was the tech lead of that team. How? That's not normal. I didn't have a life. He's not normal. Look at this guy. I didn't have a girlfriend. I just loved my job. I don't know. I applied to 500 internships for my first job and I got rejected from every single one of them except for Twilio and then eventually Amazon. And they took a shot on me and like, I was getting paid money to write code, which was my dream. Yeah. Yeah. I'm very lucky that like this coding thing worked out because I was going to be doing it regardless. And yeah, I was able to kind of spend a lot of time on a team that was growing at a company that was growing. So it informed a lot of this stuff here. I think these are problems that have been solved with like the SAML protocol with SSO. I think it's a really interesting stuff with like WebAuthn, like these different types of authentication, like schemes that you can use to authenticate people. The tooling is all there. It just needs to be tweaked a little bit to work for agents. And I think the fact that there are companies that are already
providing authentication as a service really sets it up well. The thing that's hard is like reinventing the internet for agents. We don't want to rebuild the internet. That's an impossible task. And I think people often say like, well, we'll have this second layer of APIs built for agents. I'm like, we will for the top use cases, but instead we can just tweak the internet as is, which is on the authentication side, I think we're going to be the dumb ones going forward. Unfortunately, I think AI is going to be able to do a lot of the tasks that we do online, which means that it will be able to go to websites, click buttons on our behalf and log in on our behalf too. So with this kind of like web agent future happening, I think with some small structural changes, like you said, it feels like it could all slot in really nicely with the existing internet.

Handling CAPTCHAs and Agent Authentication

swyx [00:28:08]: There's one more thing, which is your live view iframe, which lets you take control. Yeah. Obviously very key for Operator now, but like, is there anything interesting technically there, or that people... well, people always want this.

Paul [00:28:21]: It was really hard to build, you know, like, so, okay. Headless browsers, you don't see them, right. They're running. They're running in a cloud somewhere. You can't like look at them. And I mean, it's a weird name. I wish we came up with a better name for this thing, but you can't see them. Right. But customers don't trust AI agents, right. At least the first pass. So what we do with our live view is that, you know, when you use Browserbase, you can actually embed a live view of the browser running in the cloud for your customer to see it working. And that's the first reason: to build trust. Like, okay, so I have this script that's going to go automate a website. I can embed it into my web application via an iframe and my customer can watch. I think. And then we added two-way communication. So now not only can you watch the browser kind of being operated by AI, if you want to pause and actually click around, type within this iframe that's controlling a browser, that's also possible. And this is all thanks to some of the lower level protocol, which is called the Chrome DevTools Protocol. It has an API called startScreencast, and you can also send mouse clicks and button clicks to a remote browser. And this is all embeddable within iframes. You have a browser within a browser, yo. And then you simulate the screen, the click on the other side. Exactly. And this is really nice often for, like, let's say, a CAPTCHA that can't be solved. You saw this with Operator, you know, Operator actually uses a different approach. They use VNC. So, you know, you're able to see, like, you're seeing the whole window here. What we're doing is something a little lower level with the Chrome DevTools Protocol. It's just PNGs being streamed over the wire. But the same thing is true, right? Like, hey, I'm running a window. Pause. Can you do something in this window? Human. Okay, great. Resume. Like sometimes 2FA tokens. Like if you get that text message, you might need a person to type that in. Web agents need human-in-the-loop type workflows still. You still need a person to interact with the browser. And building a UI to proxy that is kind of hard. You may as well just show them the whole browser and say, hey, can you finish this up for me? And then let the AI proceed on afterwards.
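The lower-level protocol Paul names is real: Page.startScreencast in the Chrome DevTools Protocol streams compressed frames, and Input.dispatchMouseEvent sends clicks back. A simplified sketch via Playwright's CDP session; the frame forwarding to your own viewer is left as a comment.

```ts
import { chromium } from "playwright";

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("https://example.com");

// Raw CDP session to the page; Page.startScreencast streams base64-encoded
// PNG/JPEG frames that you can forward to an embedded viewer in your app.
const cdp = await page.context().newCDPSession(page);
await cdp.send("Page.startScreencast", { format: "png", everyNthFrame: 2 });

cdp.on("Page.screencastFrame", async ({ data, sessionId }) => {
  // `data` is one base64 frame: push it over a WebSocket to the iframe in
  // your app, then ack so Chrome keeps sending frames.
  void data;
  await cdp.send("Page.screencastFrameAck", { sessionId });
});

// Two-way control: forward a user's click from the viewer back to the page.
await cdp.send("Input.dispatchMouseEvent", {
  type: "mousePressed", x: 100, y: 200, button: "left", clickCount: 1,
});
await cdp.send("Input.dispatchMouseEvent", {
  type: "mouseReleased", x: 100, y: 200, button: "left", clickCount: 1,
});
```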
Is there a future where I stream my current desktop to Browserbase? I don't think so. I think we're very much cloud infrastructure. Yeah. You know, but I think a lot of the stuff we're doing, we do want to, like, build tools. Like, you know, we'll talk about the Stagehand, you know, web agent framework in a second. But, like, there's a case where a lot of people are going desktop first for, you know, consumer use. And I think Claude is doing a lot of this, where I expect to see, you know, MCPs really oriented around the Claude Desktop app for a reason, right? Like, I think a lot of these tools are going to run on your computer because it makes... I think it's breaking out. People are putting it on a server. Oh, really? Okay. Well, sweet. We'll see. We'll see that. I was surprised, though, wasn't I? I think that the Browser Company, too, with Dia Browser, it runs on your machine. You know, it's going to be...

swyx [00:30:50]: What is it?

Paul [00:30:51]: So, Dia Browser, as far as I understand... I used to use Arc. Yeah. I haven't used Arc. But I'm a big fan of the Browser Company. I think they're doing a lot of cool stuff in consumer. As far as I understand, it's a browser where you have a sidebar where you can, like, chat with it and it can control the local browser on your machine. So, if you imagine, like, what a consumer web agent is, which it lives alongside your browser, I think Google Chrome has Project Mariner, I think. I almost call it Project Marinara for some reason. I don't know why. It's...

swyx [00:31:17]: No, I think it's someone really likes Waterworld. Oh, I see. The classic Kevin Costner. Yeah.

Paul [00:31:22]: Okay. Project Marinara is a similar thing to the Dia Browser, in my mind, as far as I understand it. You have a browser that has an AI interface that will take over your mouse and keyboard and control the browser for you. Great for consumer use cases. But if you're building applications that rely on a browser and it's more part of a greater, like, AI app experience, you probably need something that's more like infrastructure, not a consumer app.

swyx [00:31:44]: Just because I have explored a little bit in this area, do people want branching? So, I have the state of whatever my browser's in. And then I want, like, 100 clones of this state. Do people do that? Or...

Paul [00:31:56]: People don't do it currently. Yeah. But it's definitely something we're thinking about. I think the idea of forking a browser is really cool. Technically, kind of hard. We're starting to see this in code execution, where people are, like, forking some, like, code execution, like, processes or forking some tool calls or branching tool calls. Haven't seen it at the browser level yet. But it makes sense. Like, if an AI agent is, like, using a website and it's not sure what path it wants to take to crawl this website. To find the information it's looking for. It would make sense for it to explore both paths in parallel. And that'd be a very, like... A road not taken. Yeah. And hopefully find the right answer. And then say, okay, this was actually the right one. And memorize that. And go there in the future. On the roadmap. For sure. Don't make my roadmap, please. You know?

Alessio [00:32:37]: How do you actually do that? Yeah. How do you fork? I feel like the browser is so stateful for so many things.

swyx [00:32:42]: Serialize the state. Restore the state. I don't know.

Paul [00:32:44]: So, it's one of the reasons why we haven't done it yet. It's hard. You know? Like, to truly fork, it's actually quite difficult. The naive way is to open the same page in a new tab and then, like, hope that it's in the same state. But if you have a form halfway filled, you may have to, like, take the whole, you know, container. Pause it. All the memory. Duplicate it. Restart it from there. It could be very slow. So, we haven't found a way yet. Like, the easy thing to fork is just, like, copy the page object. You know? But I think there needs to be something a little bit more robust there. Yeah.
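A sketch of the naive "serialize the state, restore the state" version swyx suggests, using Playwright's real storageState API. It captures cookies and localStorage only; in-memory state like a half-filled form is exactly what it loses, which is Paul's point about why a true fork is harder.

```ts
import { chromium } from "playwright";

const browser = await chromium.launch();
const context = await browser.newContext();
const page = await context.newPage();
await page.goto("https://example.com/login");
// ...log in, navigate, accumulate cookies and localStorage...

// Serialize what Playwright can see: cookies + localStorage per origin.
const state = await context.storageState();

// "Fork": two new contexts restored from the same snapshot. Each gets its
// own pages, but in-memory page state (form inputs, JS heap) is gone.
const branchA = await browser.newContext({ storageState: state });
const branchB = await browser.newContext({ storageState: state });
```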
swyx [00:33:12]: So, MorphLabs has this infinite branch thing. Like, wrote a custom fork of Linux or something that let them save the system state and clone it. MorphLabs, hit me up. I'll be a customer. Yeah. That's the only... I think that's the only way to do it. Yeah. Like, unless Chrome has some special API for you. Yeah.

Paul [00:33:29]: There's probably something we'll reverse engineer one day. I don't know. Yeah.

Alessio [00:33:32]: Let's talk about StageHand, the AI web browsing framework. You have three core components, Observe, Extract, and Act. Pretty clean landing page. What was the idea behind making a framework? Yeah.

Stagehand: AI web browsing framework

Paul [00:33:43]: So, there's three frameworks that are very popular or already exist, right? Puppeteer, Playwright, Selenium. Those are for building hard-coded scripts to control websites. And as soon as I started to play with LLMs plus browsing, I caught myself, you know, code-genning Playwright code to control a website. I would, like, take the DOM. I'd pass it to an LLM. I'd say, can you generate the Playwright code to click the appropriate button here? And it would do that. And I was like, this really should be part of the frameworks themselves. And I became really obsessed with SDKs that take natural language as part of, like, the API input. And that's what StageHand is. StageHand exposes three APIs, and it's a superset of Playwright. So, if you go to a page, you may want to take an action, click on the button, fill in the form, etc. That's what the act command is for. You may want to extract some data. This one takes natural language, like, extract the winner of the Super Bowl from this page. You can give it a Zod schema, so it returns a structured output. And then maybe you're building an agent. You can do an agent loop, and you want to kind of see what actions are possible on this page before taking one. You can do observe. So, you can observe the actions on the page, and it will generate a list of actions. You can guide it, like, give me actions on this page related to buying an item. And you can get, like, buy it now, add to cart, view shipping options, and pass that to an LLM, an agent loop, to say, what's the appropriate action given this high-level goal? So, StageHand isn't a web agent. It's a framework for building web agents. And we think that agent loops are actually pretty close to the application layer because every application probably has different goals or different ways it wants to take steps. I don't think I've seen a generic... Maybe you guys are the experts here. I haven't seen, like, a really good AI agent framework here. Everyone kind of has their own special sauce, right? I see a lot of developers building their own agent loops, and they're using tools. And I view StageHand as the browser tool. So, we expose act, extract, observe. Your agent can call these tools. And from that, you don't have to worry about it. You don't have to worry about generating Playwright code performantly. You don't have to worry about running it. You can kind of just integrate these three tool calls into your agent loop and reliably automate the web.
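A sketch of the three APIs as described in the episode. Stagehand is open source; the package name and exact signatures below are approximate and vary by version, so treat this as illustrative and check the Stagehand repo.

```ts
import { Stagehand } from "@browserbasehq/stagehand";
import { z } from "zod";

const stagehand = new Stagehand({ env: "LOCAL" }); // or point at remote browsers
await stagehand.init();
const page = stagehand.page; // superset of a Playwright page

await page.goto("https://www.espn.com/nfl/");

// act: take an action described in natural language
await page.act("click on the Super Bowl recap article");

// extract: natural-language query + Zod schema for structured output
const { winner } = await page.extract({
  instruction: "extract the winner of the Super Bowl",
  schema: z.object({ winner: z.string() }),
});

// observe: list candidate actions, optionally guided by a goal
const actions = await page.observe("actions related to buying an item");

console.log(winner, actions);
await stagehand.close();
```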
swyx [00:35:48]: A special shout-out to Anirudh, who I met at your dinner, who I think listens to the pod. Yeah. Hey, Anirudh.

Paul [00:35:54]: Anirudh's the man. He's a StageHand guy.

swyx [00:35:56]: I mean, the interesting thing about each of these APIs is they're kind of each a startup. Like, specifically extract, you know, Firecrawl is extract. There's, like, Expand AI. There's a whole bunch of, like, extract companies. They just focus on extract. I'm curious. Like, I feel like you guys are going to collide at some point. Like, right now, it's friendly. Everyone's in a blue ocean. At some point, it's going to be valuable enough that there's some turf battle here. I don't think you have a dog in this fight. I think you can mock extract to use an external service if they're better at it than you. But it's just an observation that, like, in the same way that I see each option, each checkbox in the side of custom GPTs becoming a startup or each box in the Karpathy chart being a startup. Like, this is also becoming a thing. Yeah.

Paul [00:36:41]: I mean, like, so the way StageHand works is that it's MIT-licensed, completely open source. You bring your own API key to your LLM of choice. You could choose your LLM. We don't make any money off of extract, really. We only really make money if you choose to run it with our browser. You don't have to. You can actually use your own browser, a local browser. You know, StageHand is completely open source for that reason. And, yeah, like, I think if you're building really complex web scraping workflows, I don't know if StageHand is the tool for you. I think it's really more if you're building an AI agent that needs a few general tools or if it's doing a lot of, like, web automation-intensive work. But if you're building a scraping company, StageHand is not your thing. You probably want something that's going to, like, get HTML content, you know, convert that to Markdown, query it. That's not what StageHand does. StageHand is more about reliability. I think we focus a lot on reliability and less so on cost optimization and speed at this point.

swyx [00:37:33]: I actually feel like StageHand, so the way that StageHand works, it's like, you know, page.act, click on the quick start. Yeah. It's kind of the integration test for the code that you would have to write anyway, like the Puppeteer code that you have to write anyway. And when the page structure changes, because it always does, then this is still the test. This is still the test that I would have to write. Yeah. So it's kind of like a testing framework that doesn't need implementation detail.

Paul [00:37:56]: Well, yeah. I mean, Puppeteer, Playwright, and Selenium were all designed as testing frameworks, right? Yeah. And now people are, like, hacking them together to automate the web. I would say, and, like, maybe this is, like, me being too specific. But, like, when I write tests, if the page structure changes without me knowing, I want that test to fail. So I don't know if, like, AI, like, regenerating that... Like, people are using StageHand for testing. But it's more for, like, usability testing, not, like, testing of, like, does the front end, like, has it changed or not. Okay. But generally where we've seen people, like, really, like, take off is, like, if they're using, you know, something.
If they want to build a feature in their application that's kind of like Operator or Deep Research, they're using StageHand to kind of power that tool calling in their own agent loop. Okay. Cool.

swyx [00:38:37]: So let's go into Operator, the first big agent launch of the year from OpenAI. Seems like they have a whole bunch scheduled. You were on break and your phone blew up. What's your just general view of computer use agents, which is what they're calling it? The overall category, before we go into Open Operator, just the overall promise of Operator. I will observe that I tried it once. It was okay. And I never tried it again.

OpenAI's Operator and computer use agents

Paul [00:38:58]: That tracks with my experience, too. Like, I'm a huge fan of the OpenAI team. Like, I think that I do not view Operator as a company killer for Browserbase at all. I think it actually shows people what's possible. I think, like, computer use models make a lot of sense. And what I'm actually most excited about with computer use models is, like, their ability to, like, really take screenshots and reasoning and output steps. I think that using mouse click or mouse coordinates, I've seen that prove to be less reliable than I would like. And I just wonder if that's the right form factor. What we've done with our framework is anchor it to the DOM itself, anchor it to the actual item. So, like, if it's clicking on something, it's clicking on that thing, you know? Like, it's more accurate. No matter where it is. Yeah, exactly. Because it really ties in nicely. And it can handle, like, the whole viewport in one go, whereas, like, Operator can only handle what it sees. Can you hover? Is hovering a thing that you can do? I don't know if we expose it as a tool directly, but I'm sure there's, like, an API for hovering. Like, move mouse to this position. Yeah, yeah, yeah. I think you can trigger hover, like, via, like, the JavaScript on the DOM itself. But, no, I think, like, when we saw computer use, everyone's eyes lit up because they realized, like, wow, like, AI is going to actually automate work for people. And I think seeing that kind of happen from both of the labs, and I'm sure we're going to see more labs launch computer use models, I'm excited to see all the stuff that people build with it. I think that I'd love to see computer use power, like, controlling a browser on Browserbase. And I think, like, Open Operator, which was, like, our open source version of OpenAI's Operator, was our first take on, like, how can we integrate these models into Browserbase? And we handle the infrastructure and let the labs do the models. I don't have a sense that Operator will be released as an API. I don't know. Maybe it will. I'm curious to see how well that works because I think it's going to be really hard for a company like OpenAI to do things like support CAPTCHA solving or, like, have proxies. Like, I think it's hard for them structurally. Imagine this New York Times headline: OpenAI Solves CAPTCHAs. Like, that would be a pretty bad headline. This New York Times headline: Browserbase Solves CAPTCHAs. No one cares. No one cares. And, like, our investors are bored. Like, we're all okay with this, you know? We're building this company knowing that the CAPTCHA solving is short-lived until we figure out how to authenticate good bots. I think it's really hard for a company like OpenAI, who has this brand that's so, so good, to balance with, like, the icky parts of web automation, which can be kind of complex to solve.
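A sketch of the screenshot-in, action-out loop described here and in the Open Operator discussion below: break a goal into steps, screenshot each state, let a model pick the next action, and execute it via a DOM-anchored tool. The callLLM stub and Action type are invented for illustration; wire in your own model API and browser tool.

```ts
// `Action` and `callLLM` are stand-ins, not a real library's API.
type Action =
  | { kind: "act"; instruction: string } // e.g. "click the search button"
  | { kind: "done"; result: string };

async function callLLM(goal: string, step: string, screenshotB64: string): Promise<Action> {
  // Stand-in: send the goal, current step, and screenshot to your model of
  // choice and parse its reply into an Action.
  throw new Error("wire up your own model API here");
}

async function runAgent(page: any, goal: string): Promise<string> {
  // Loop: screenshot -> model decides -> execute, up to a step budget.
  for (let i = 0; i < 20; i++) {
    const screenshot = (await page.screenshot()).toString("base64");
    const action = await callLLM(goal, `step ${i}`, screenshot);
    if (action.kind === "done") return action.result;
    await page.act(action.instruction); // DOM-anchored, not raw coordinates
  }
  throw new Error("goal not reached within step budget");
}
```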
I'm sure OpenAI knows who to call whenever they need you. Yeah, right. I'm sure they'll have a great partnership.

Alessio [00:41:23]: And is Open Operator just, like, a marketing thing for you? Like, how do you think about resource allocation? So, you can spin this up very quickly. And now there's all this, like, open deep research, just open all these things that people are building. We started it, you know. You're the original Open. We're the original Open Operator, you know? Is it just, hey, look, this is a demo, but, like, we'll help you build out an actual product for yourself? Like, are you interested in going more of a product route? That's kind of the OpenAI way, right? They started as a model provider and then…

Paul [00:41:53]: Yeah, we're not interested in going the product route yet. I view Open Operator as a reference project, you know? Let's show people how to build these things using the infrastructure and models that are out there. And that's what it is. It's, like, Open Operator is very simple. It's an agent loop. It says, like, take a high-level goal, break it down into steps, use tool calling to accomplish those steps. It takes screenshots and feeds those screenshots into an LLM with the step to generate the right action. It uses Stagehand under the hood to actually execute this action. It doesn't use a computer use model. And it, like, has a nice interface using the live view that we talked about, the iframe, to embed that into an application. So I felt like people on launch day wanted to figure out how to build their own version of this. And we turned that around really quickly to show them. And I hope we do that with other things like deep research. We don't have a deep research launch yet. I think David from AOMNI actually has an amazing open deep research that he launched. It has, like, 10K GitHub stars now. So he's crushing that. But I think if people want to build these features natively into their application, they need good reference projects. And I think Open Operator is a good example of that.

swyx [00:42:52]: I don't know. Actually, I'm actually pretty bullish on API-driven Operator. Because that's the only way that you can sort of, like, once it's reliable enough, obviously. And now we're nowhere near. But, like, give it five years. It'll happen, you know. And then you can sort of spin this up and browsers are working in the background and you don't necessarily have to know. And it just is booking restaurants for you, whatever. I can definitely see that future happening. I had this on the landing page here. This might be slightly out of order. But, you know, you have, like, sort of three use cases for Browserbase. Open Operator. Or this is the Operator sort of use case. It's kind of like the workflow automation use case. And it competes with UiPath in the sort of RPA category. Would you agree with that? Yeah, I would agree with that. And then there's Agents we talked about already. And web scraping, which I imagine would be the bulk of your workload right now, right?

Paul [00:43:40]: No, not at all. I'd say actually, like, the majority is browser automation. We're kind of expensive for web scraping. Like, I think that if you're building a web scraping product, if you need to do occasional web scraping or you have to do web scraping that works every single time, you want to use browser automation. Yeah. You want to use Browserbase. But if you're building web scraping workflows, what you should do is have a waterfall. You should have the first request is a curl to the website. See if you can get it without even using a browser. And then the second request may be, like, a scraping-specific API. There's, like, a thousand scraping APIs out there that you can use to try and get data. ScrapingBee. ScrapingBee is a great example, right? Yeah. And then, like, if those two don't work, bring out the heavy hitter. Like, Browserbase will 100% work, right? It will load the page in a real browser, hydrate it. I see.

swyx [00:44:21]: Because a lot of pages don't render without JS.

swyx [00:44:25]: Yeah, exactly.
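A sketch of the waterfall Paul describes: cheapest method first, escalate only on failure. The scraping-API endpoint and the "did JS render?" heuristic are placeholders.

```ts
// Scraping "waterfall": try the cheap tiers first, browser last.
async function fetchPage(url: string): Promise<string> {
  // 1. Plain HTTP request (the "curl" tier).
  const res = await fetch(url);
  const html = await res.text();
  if (res.ok && !looksUnrendered(html)) return html;

  // 2. A scraping-specific API (placeholder endpoint for a ScrapingBee-style service).
  const viaApi = await fetch(`https://scraping-api.example.com/?url=${encodeURIComponent(url)}`);
  if (viaApi.ok) return viaApi.text();

  // 3. Heavy hitter: load it in a real headless browser and let it hydrate.
  const { chromium } = await import("playwright");
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle" });
  const rendered = await page.content();
  await browser.close();
  return rendered;
}

// Crude placeholder heuristic for "content is missing until JS runs".
function looksUnrendered(html: string): boolean {
  return html.includes("enable JavaScript") || html.length < 2048;
}
```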
The first request should just be a curl to the website. See if you can get it without even using a browser. And then the second request may be, like, a scraping-specific API. There's, like, a thousand scraping APIs out there that you can use to try and get data. ScrapingBee is a great example, right? Yeah. And then, like, if those two don't work, bring out the heavy hitter. Like, Browserbase will 100% work, right? It will load the page in a real browser, hydrate it. I see.swyx [00:44:21]: Because a lot of pages don't render without JS.swyx [00:44:25]: Yeah, exactly.Paul [00:44:26]: So, I mean, the three big use cases, right? Like, you know, automation, web data collection, and then, you know, if you're building anything agentic that needs, like, a browser tool, you want to use Browserbase.Alessio [00:44:35]: Is there any use case that, like, you were super surprised by that people might not even think about? Oh, yeah. Or is it, yeah, anything that you can share? The long tail is crazy. Yeah.Surprising use cases of BrowserbasePaul [00:44:44]: One of the case studies on our website that I think is the most interesting is this company called Benny. So, the way that it works is if you're on food stamps in the United States, you can actually get rebates if you buy certain things. Yeah. You buy some vegetables. You submit your receipt to the government. They'll give you a little rebate back. Say, hey, thanks for buying vegetables. It's good for you. That process of submitting that receipt is very painful. And the way Benny works is you use their app to take a photo of your receipt, and then Benny will go submit that receipt for you and then deposit the money into your account. That's actually using no AI at all. It's all, like, hard-coded scripts. They maintain the scripts. They've been doing a great job. And they've built this amazing consumer app. But it's an example of, like, all these, like, tedious workflows that people have to do to kind of go about their business. And they're doing it for the sake of their day-to-day lives. And I had never known about, like, food stamp rebates or the complex forms you have to fill out to claim them. But the world is powered by millions and millions of tedious forms, visas. You know, Emirate Lighthouse is a customer, right? You know, they do the O1 visa. Millions and millions of forms are taking away humans' time. And I hope that Browserbase can help power software that automates away the web forms that we don't need anymore. Yeah.swyx [00:45:49]: I mean, I'm very supportive of that. I mean, forms. I do think, like, government itself is a big part of it. I think the government itself should embrace AI more to do more sort of human-friendly form filling. Mm-hmm. But I'm not optimistic. I'm not holding my breath. Yeah. We'll see. Okay. I think I'm about to zoom out. I have a little brief thing on computer use, and then we can talk about founder stuff, which is, I tend to think of developer tooling markets in impossible triangles, where everyone starts in a niche, and then they start to branch out. So I already hinted at a little bit of this, right? We mentioned Morph. We mentioned E2B. We mentioned Firecrawl. And then there's Browserbase. So there's, like, all this stuff of, like, have a serverless virtual computer that you give to an agent and let them do stuff with it. And there's various ways of connecting it to the internet. You can just connect to a search API, like SerpAPI or whatever else; like, Exa is another one. That's what you're searching.
You can also have a JSON markdown extractor, which is Firecrawl. Or you can have a virtual browser like Browserbase, or you can have a virtual machine like Morph. And then there's also maybe, like, a virtual sort of code environment, like Code Interpreter. So, like, there's just, like, a bunch of different ways to tackle the problem of give a computer to an agent. And I'm just kind of wondering if you see, like, everyone's just, like, happily coexisting in their respective niches. And as a developer, I just go and pick, like, a shopping basket of one of each. Or do you think that eventually, people will collide?Future of browser automation and market competitionPaul [00:47:18]: I think that currently it's not a zero-sum market. Like, I think we're talking about... I think we're talking about all of knowledge work that people do that can be automated online. All of these, like, trillions of hours that happen online where people are working. And I think that there's so much software to be built that, like, I tend not to think about how these companies will collide. I just try to solve the problem as best as I can and make this specific piece of infrastructure, which I think is an important primitive, the best I possibly can. And yeah. I think there are players that are going to launch, like, over-the-top, you know, platforms, like agent platforms that have all these tools built in, right? Like, who's building the Rippling for agent tools that has the search tool, the browser tool, the operating system tool, right? There are some. There are some. There are some, right? And I think in the end, what I have seen in my time as a developer, and I look at all the favorite tools that I have, is that, like, for tools and primitives with sufficient levels of complexity, you need to have a solution that's really bespoke to that primitive, you know? And I am sufficiently convinced that the browser is complex enough to deserve a primitive. Obviously, I have to say that. I'm the founder of Browserbase, right? I'm talking my book. But, like, I think maybe I can give you one spicy take against, like, maybe just whole OS running. I think that when I look at computer use when it first came out, I saw that the majority of use cases for computer use were controlling a browser. And do we really need to run an entire operating system just to control a browser? I don't think so. I don't think that's necessary. You know, Browserbase can run browsers for way cheaper than you can if you're running a full-fledged OS with a GUI, you know, operating system. And I think that's just an advantage of the browser. It is, like, browsers are little OSs, and you can run them very efficiently if you orchestrate it well. And I think that allows us to offer 90% of the, you know, functionality in the platform needed at 10% of the cost of running a full OS. Yeah.Open Operator: Browserbase's Open-Source Alternativeswyx [00:49:16]: I definitely see the logic in that. There's a Marc Andreessen quote. I don't know if you know this one. Where he basically observed that the browser is turning the operating system into a poorly debugged set of device drivers, because most of the apps have moved from the OS to the browser. So you can just run browsers.Paul [00:49:31]: There's a place for OSs, too. Like, I think that there are some applications that only run on Windows operating systems.
And Eric from pig.dev in this upcoming YC batch, or last YC batch, like, he's building infrastructure to run tons of Windows operating systems for you to control with your agent. And like, there's some legacy EHR systems that only run on, like, Internet Explorer. Yeah.Paul [00:49:54]: I think that's it. I think, like, there are use cases for specific operating systems for specific legacy software. And like, I'm excited to see what he does with that. I just wanted to give a shout out to the pig.dev website.swyx [00:50:06]: The pigs jump when you click on them. Yeah. That's great.Paul [00:50:08]: Eric, he's the former co-founder of banana.dev, too.swyx [00:50:11]: Oh, that Eric. Yeah. That Eric. Okay. Well, he abandoned bananas for pigs. I hope he doesn't start going around with pigs now.Alessio [00:50:18]: Like he was going around with bananas. A little toy pig. Yeah. Yeah. I love that. What else are we missing? I think we covered a lot of, like, the Browserbase product history, but. What do you wish people asked you? Yeah.Paul [00:50:29]: I wish people asked me more about, like, what will the future of software look like? Because I think that's really where I've spent a lot of time thinking about why I do Browserbase. Like, for me, starting a company is like a means of last resort. Like, you shouldn't start a company unless you absolutely have to. And I remain convinced that the future of software is software that you're going to click a button and it's going to do stuff on your behalf. Right now, with software, you click a button and it maybe, like, calls a backend API and, like, computes some numbers. It, like, modifies some text, whatever. But the future of software is software using software. So, I may log into my accounting website for my business, click a button, and it's going to go load up my Gmail, search my emails, find the thing, upload the receipt, and then comment it for me. Right? And it may do it using APIs, maybe a browser. I don't know. I think it's a little bit of both. But that's completely different from how we've built software so far. And that's. I think that future of software has different infrastructure requirements. It's going to require different UIs. It's going to require different pieces of infrastructure. I think the browser infrastructure is one piece that fits into that, along with all the other categories you mentioned. So, I think that it's going to require developers to think differently about how they've built software for, you know
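For reference, the Open Operator loop Paul outlined earlier (goal in, screenshot to an LLM, next step out, Stagehand executes) reduces to something like the sketch below. The llm() helper is hypothetical, and the Stagehand calls follow the shape of its public TypeScript package, so treat the exact method names as assumptions rather than the project's actual code.

```typescript
// Minimal Open Operator-style loop: screenshot in, LLM picks the next step,
// Stagehand executes it as a DOM-anchored action. Illustrative only.
import { Stagehand } from "@browserbasehq/stagehand";

// Hypothetical LLM helper; swap in your model provider of choice.
async function llm(input: {
  prompt: string;
  image: Buffer;
}): Promise<{ done: boolean; action: string }> {
  throw new Error("wire up your model here");
}

async function runAgent(goal: string) {
  const stagehand = new Stagehand({ env: "BROWSERBASE" });
  await stagehand.init();

  for (let step = 0; step < 20; step++) {
    // Feed the current screenshot plus the goal to the model.
    const screenshot = await stagehand.page.screenshot();
    const next = await llm({
      prompt: `Goal: ${goal}. What single browser action comes next?`,
      image: screenshot,
    });
    if (next.done) break; // the model reports the goal is accomplished

    // Stagehand turns the natural-language step into a DOM-anchored action.
    await stagehand.page.act(next.action);
  }

  await stagehand.close();
}
```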

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: The Future of Foundation Models | The Future of AI Consumer Apps and Why OpenAI Did a Disservice to Them | The Future of Music: Spotify vs YouTube & Spotify vs TikTok: What Happens with Mikey Shulman @ Suno

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jan 10, 2025 57:50


Mikey Shulman is the Co-Founder and CEO of Suno, the leading music AI company. Suno lets everyone make and share music. Mikey has raised over $125M for the company from the likes of Lightspeed, Founder Collective, and Nat Friedman and Daniel Gross. Prior to founding Suno, Mikey was the first machine learning engineer and head of machine learning at Kensho Technologies, which was acquired by S&P Global for over $500 million.

In Today's Episode with Mikey Shulman:

1. The Future of Models: Who wins the future of models: Anthropic, OpenAI or X? Will we live in a world of many smaller models? When does it make sense to use specialised vs generalised models? Does Mikey believe we will continue to see the benefits of scaling laws?

2. The Future of UI and Consumer Apps: Why does Mikey believe that OpenAI did AI consumer companies a massive disservice? Why does Mikey believe consumers will not choose their model or pay for a superior model in the future? Why does Mikey believe that good taste is more important than good skills? Why does Mikey argue physicists and economists make the best ML engineers?

3. The Future of Music: What is going on with Suno's lawsuit against some of the biggest labels in music? How does Mikey see the future of music discovery? How does Mikey see the battle between Spotify and YouTube playing out? How does Mikey see the battle between TikTok and Spotify playing out?

Infinite Machine Learning
Voice-to-Voice Foundation Models

Infinite Machine Learning

Play Episode Listen Later Oct 30, 2024 39:08


Alan Cowen is the cofounder and CEO of Hume, a company building voice-to-voice foundation models. They recently raised their $50M Series B from Union Square Ventures, Nat Friedman, Daniel Gross, and others. Alan's favorite book: 1984 (Author: George Orwell)

(00:01) Introduction
(00:06) Defining Voice-to-Voice Foundation Models
(01:26) Historical Context: Handling Voice and Speech Understanding
(03:54) Emotion Detection in Voice AI Models
(04:33) Training Models to Recognize Human Emotion in Speech
(07:19) Cultural Variations in Emotional Expressions
(09:00) Semantic Space Theory in Emotion Recognition
(12:11) Limitations of Basic Emotion Categories
(15:50) Recognizing Blended Emotional States
(20:15) Objectivity in Emotion Science
(24:37) Practical Aspects of Deploying Voice AI Systems
(28:17) Real-Time System Constraints and Latency
(31:30) Advancements in Voice AI Models
(32:54) Rapid-Fire Round

Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi

The Jim Rutt Show
EP 265 Aravind Srinivas on Perplexity AI

The Jim Rutt Show

Play Episode Listen Later Oct 17, 2024 48:50


Jim talks with Aravind Srinivas, co-founder and CEO of the AI-powered search engine Perplexity. They discuss Jim's use of Perplexity, its wide range of use cases, why Google search is limited by fear of mistakes, retrieval-augmented generation (RAG), citations, coming up with the idea, leveraging existing tools vs inventing everything, the core product experience, how the orchestration engine works, semantic vector databases, testing Perplexity as a hedge fund strategist, the Perplexity API, Perplexity's moat, maintaining cognitive sovereignty, paid tiers, what the company needs to succeed, having individuals as major investors, debunking rumors of acquisition by NVIDIA, affordances for coders, and much more.

Episode Transcript
Perplexity

Aravind Srinivas is the CEO of Perplexity, the conversational "answer engine" that provides precise, user-focused answers to queries — with in-line citations. Aravind co-founded the company in 2022 after working as a research scientist at OpenAI, Google, and DeepMind. To date, Perplexity has raised over $165 million from investors including Jeff Bezos, Nat Friedman, Elad Gil, NVIDIA, and the late Susan Wojcicki. He has a PhD in computer science from UC Berkeley and a Bachelor's and Master's in Electrical Engineering from the Indian Institute of Technology, Madras.

On Point
ep 208 | October encore - Pessimists sound smart, optimists make money

On Point

Play Episode Listen Later Oct 3, 2024 7:13


US tech investor Nat Friedman famously said "pessimists sound smart, optimists make money." He wasn't referring to financial markets, but there's something we can learn from that as investors. There's nothing wrong with being mindful of risks and, let's be honest, at any given time the list of concerning issues is a lengthy one. However, when it comes to investing, being an optimist pays off.

Liberty's Highlights
The Cost of Glory with Alex Petkas: Timeless Lessons from Ancient Greece and Rome

Liberty's Highlights

Play Episode Listen Later Sep 27, 2024 98:46


Software Engineering Daily
Fast Frontend Development with David Hsu

Software Engineering Daily

Play Episode Listen Later Jul 10, 2024


Retool is a platform that helps engineers quickly build internal frontends. It does this by abstracting away repetitive aspects of frontend development. The platform was started in 2017 and has received funding from Sequoia, the Stripe co-founders, and Nat Friedman. David Hsu is the founder and CEO of Retool. He joins the show to talk about fast frontend development.

On Point
Pessimists sound smart, optimists make money

On Point

Play Episode Listen Later Jun 25, 2024 7:13


US tech investor Nat Friedman famously said "pessimists sound smart, optimists make money." He wasn't referring to financial markets, but there's something we can learn from that as investors. There's nothing wrong with being mindful of risks and, let's be honest, at any given time the list of concerning issues is a lengthy one. However, when it comes to investing, being an optimist pays off.

GREY Journal Daily News Podcast
Why Are Investors Betting Big on Accounting Startups

GREY Journal Daily News Podcast

Play Episode Listen Later Jun 24, 2024 1:24


Klarity, an accounting startup based in San Francisco, raised $70 million in a Series B funding round led by Nat Friedman and Daniel Gross, with additional support from Scale Venture Partners, Tola Capital, Picus Capital, Invus Capital, and Y Combinator. The raised funds will be used to expand Klarity's workforce, tripling it to 390 employees within the year. Klarity employs AI to process data in contracts and internal records, eliminating the need for manual work. This trend of significant funding is also observed in other accounting tech firms like Ageras, FloQast, and DataSnipper, which have also secured substantial investment to automate accounting tasks using AI. AI-driven startups in other sectors, such as legal tech, are also attracting significant investment. Learn more on this news at https://greyjournal.net/news/

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
20VC: Perplexity's Aravind Srinivas on Will Foundation Models Commoditise, Diminishing Returns in Model Performance, OpenAI vs Anthropic: Who Wins & Why the Next Breakthrough in Model Performance will be in Reasoning

The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

Play Episode Listen Later Jun 5, 2024 55:03


Aravind Srinivas is the Co-Founder & CEO of Perplexity, the conversational "answer engine" that provides precise, user-focused answers to queries. Aravind co-founded the company in 2022 after working as a research scientist at OpenAI, Google, and DeepMind. To date, Perplexity has raised over $100 million from investors including Jeff Bezos, Nat Friedman, Elad Gil, and Susan Wojcicki.

In Today's Episode with Aravind Srinivas We Discuss:

Biggest Lessons from DeepMind & OpenAI: What was the best career advice Sam Altman @ OpenAI gave Aravind? What were Aravind's biggest takeaways at DeepMind? How did DeepMind shape how Aravind built Perplexity? What did Aravind mean by "competition is for losers"? What did he learn about talent assembly at DeepMind?

The Next AI Breakthrough: Reasoning: Does Aravind think we are experiencing diminishing returns on compute and model performance? Does Aravind agree reasoning will be the next big breakthrough for models? What are the reasons Aravind thinks models suck at reasoning today? What is the timeline for reasoning improvement according to Aravind? What does Aravind think are the biggest misconceptions about AI today?

Will Foundation Models Commoditise? Does Aravind think foundation models will commoditise? What will the end state of foundation models look like? Why does Aravind think the second-tier models will get commoditised? Why does Aravind think the subscription model will not work for AI models with true reasoning? Why does Aravind think the application layer companies will benefit from foundation models commoditising? Why does Aravind think foundation models will not verticalize? When does Aravind think is the right time to go enterprise? What is his strategy to differentiate Perplexity from its competitors?

AI Arms Race: Who Will Win? Who does Aravind think will be the winners of foundation models? What do AI companies need to do to win the model arms race? How does Aravind think startups can compete against incumbents' infinite cash flow? What are the reasons Aravind thinks Perplexity's browsing is better than ChatGPT's? What is Aravind's biggest challenge at Perplexity today?

The Cost of Glory
88 - Mysteries of the Scrolls — with Nat Friedman

The Cost of Glory

Play Episode Listen Later May 30, 2024 57:14


An interview with Nat Friedman, former CEO of GitHub and creator of the Vesuvius Challenge, which aims to crack the riddles of the Herculaneum Papyri.

In this episode:
The Genesis of the Vesuvius Challenge
Early Attempts to Open the Scrolls
Using a Particle Accelerator to Scan the Scrolls!
Partnering with Daniel Gross and Brent Seales
Nat's Childhood Experience with Open-Source Communities
How to Design Prize Incentives for a Complex Contest
Doing Crazy, Strange and Risky Projects
A Possible Resurgence of Epicureanism?

This episode is sponsored by the Ancient Language Institute. If you're interested in actually reading the newly unlocked scrolls, you will need to know the languages of the ancient world. The Ancient Language Institute will help you do just that. Registration is now open (till August 10th) for their Fall term, where you can take advanced classes in Latin, Ancient Greek, Biblical Hebrew, and Old English.

Periodisk
103 Lawrencium: Papyrusrullerne i vulkanasken

Periodisk

Play Episode Listen Later May 15, 2024 21:43


Over 1,000 papyrus scrolls are charred, buried, and forgotten when the volcano Vesuvius wipes out several towns on the Bay of Naples in AD 79. Nearly two thousand years later, a chance opens up to read what they hold, and there is a million dollars at stake. Come along to the Roman holiday paradise of Herculaneum, and on a treasure hunt for ancient Greek characters, in this episode of Periodisk.

You can read much more about the background, the methods, and the course of the 'Vesuvius Challenge' competition at www.scrollprize.org. The scanning itself at Diamond Light Source is described in more detail in this 2018 Smithsonian article by Jo Marchant: https://www.smithsonianmag.com/history/buried-ash-vesuvius-scrolls-are-being-read-new-xray-technique-180969358/ Nat Friedman describes his path into the project in, among other places, this podcast interview by Dwarkesh Patel: https://www.dwarkeshpatel.com/p/nat-friedman This article by John Seabrook tells much more about Philodemus, Epicurus, the history of the papyrus scrolls and papyrology, and the earlier phases of Brent Seales' work. Published in The New Yorker in 2015: https://www.newyorker.com/magazine/2015/11/16/the-invisible-library

Periodisk is a RAKKERPAK original produced by Rakkerpak Productions. The stories you hear are based on journalistic research and facts; they may contain fictional elements, such as dialogue. If you like the story, remember to subscribe, leave a review, and tell your friends about Periodisk. The podcast was made with support from the Novo Nordisk Foundation. To learn more, visit our website periodisk.dk. The episode was written and produced by Maya Zachariassen. Tor Arnbjørn and Dorte Palle are producers. Rene Slott is responsible for sound design and mix. Simon Bennebjerg is the host.

Moonshots with Peter Diamandis
2 Ex-AI CEOs Debate the Future of AI w/ Emad Mostaque & Nat Friedman | EP #98

Moonshots with Peter Diamandis

Play Episode Listen Later Apr 25, 2024 50:52


In this episode, Peter, Emad, and Nat debate the future of AI, predictions for the next few years, and their vision for AI's future.

07:11 | The Uncertainty of AI Understanding
17:44 | The Future of AI Staffing
38:05 | AI Solutions for Complex Challenges

Emad Mostaque is the former CEO and Co-Founder of Stability AI, a company funding the development of open-source music- and image-generating systems such as Dance Diffusion, Stable Diffusion, and Stable Video 3D. Nat Friedman is an accomplished entrepreneur and software engineer, known for co-founding Xamarin, a platform for building mobile applications, and for serving as the CEO of GitHub, the world's leading software development platform. He is also an active investor and advisor in the tech industry, supporting innovative startups across various sectors.

Learn more about Abundance360: https://www.abundance360.com/summit

I only endorse products and services I personally use. To see what they are, please support this podcast by checking out our sponsors: Get started with Fountain Life and become the CEO of your health: https://fountainlife.com/peter/ AI-powered precision diagnosis you NEED for a healthy gut: https://www.viome.com/peter

I send weekly emails with the latest insights and trends on today's and tomorrow's exponential technologies. Stay ahead of the curve, and sign up now: Tech Blog. Get my new Longevity Practices book for free: https://www.diamandis.com/longevity My new book with Salim Ismail, Exponential Organizations 2.0: The New Playbook for 10x Growth and Impact, is now available on Amazon: https://bit.ly/3P3j54J

Connect With Peter: Twitter Instagram Youtube Moonshots

Learn more about your ad choices. Visit megaphone.fm/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Latent Space Chats: NLW (Four Wars, GPT5), Josh Albrecht/Ali Rohde (TNAI), Dylan Patel/Semianalysis (Groq), Milind Naphade (Nvidia GTC), Personal AI (ft. Harrison Chase — LangFriend/LangMem)

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Apr 6, 2024 121:17


Our next 2 big events are AI UX and the World's Fair. Join and apply to speak/sponsor!Due to timing issues we didn't have an interview episode to share with you this week, but not to worry, we have more than enough "weekend special" content in the backlog for you to get your Latent Space fix, whether you like thinking about the big picture, or learning more about the pod behind the scenes, or talking Groq and GPUs, or AI Leadership, or Personal AI. Enjoy!AI BreakdownThe indefatigable NLW had us back on his show for an update on the Four Wars, covering Sora, Suno, and the reshaped GPT-4 Class Landscape:and a longer segment on AI Engineering trends covering the future LLM landscape (Llama 3, GPT-5, Gemini 2, Claude 4), Open Source Models (Mistral, Grok), Apple and Meta's AI strategy, new chips (Groq, MatX) and the general movement from baby AGIs to vertical Agents:Thursday Nights in AIWe're also including swyx's interview with Josh Albrecht and Ali Rohde to reintroduce swyx and Latent Space to a general audience, and engage in some spicy Q&A:Dylan Patel on GroqWe hosted a private event with Dylan Patel of SemiAnalysis (our last pod here):Not all of it could be released so we just talked about our Groq estimates:Milind Naphade - Capital OneIn relation to conversations at NeurIPS and Nvidia GTC and upcoming at World's Fair, we also enjoyed chatting with Milind Naphade about his AI Leadership work at IBM, Cisco, Nvidia, and now leading the AI Foundations org at Capital One. We covered:* Milind's learnings from ~25 years in machine learning * His first paper citation was 24 years ago* Lessons from working with Jensen Huang for 6 years and being CTO of Metropolis * Thoughts on relevant AI research* GTC takeaways and what makes NVIDIA specialIf you'd like to work on building solutions rather than platform (as Milind put it), his Applied AI Research team at Capital One is hiring, which falls under the Capital One Tech team.Personal AI MeetupIt all started with a meme:Within days of each other, BEE, FRIEND, EmilyAI, Compass, Nox and LangFriend were all launching personal AI wearables and assistants. So we decided to put together the world's first Personal AI meetup featuring creators and enthusiasts of wearables. The full video is live now, with full show notes within.Timestamps* [00:01:13] AI Breakdown Part 1* [00:02:20] Four Wars* [00:13:45] Sora* [00:15:12] Suno* [00:16:34] The GPT-4 Class Landscape* [00:17:03] Data War: Reddit x Google* [00:21:53] Gemini 1.5 vs Claude 3* [00:26:58] AI Breakdown Part 2* [00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4* [00:31:11] Open Source Models - Mistral, Grok* [00:34:13] Apple MM1* [00:37:33] Meta's $800b AI rebrand* [00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents* [00:47:28] Adept episode - Screen Multimodality* [00:48:54] Top Model Research from January Recap* [00:53:08] AI Wearables* [00:57:26] Groq vs Nvidia month - GPU Chip War* [01:00:31] Disagreements* [01:02:08] Summer 2024 Predictions* [01:04:18] Thursday Nights in AI - swyx* [01:33:34] Dylan Patel - Semianalysis + Latent Space Live Show* [01:34:58] GroqTranscript[00:00:00] swyx: Welcome to the Latent Space Podcast Weekend Edition. This is Charlie, your AI co host. Swyx and Alessio are off for the week, making more great content. We have exciting interviews coming up with Elicit, Chroma, Instructor, and our upcoming series on NSFW, Not Safe for Work AI.
In today's episode, we're collating some of Swyx and Alessio's recent appearances, all in one place for you to find.[00:00:32] swyx: In part one, we have our first crossover pod of the year. In our listener survey, several folks asked for more thoughts from our two hosts. In 2023, Swyx and Alessio did crossover interviews with other great podcasts like the AI Breakdown, Practical AI, Cognitive Revolution, ThursdAI, and Chinatalk, all of which you can find on the Latent Space About page.[00:00:56] swyx: NLW of the AI Breakdown asked us back to do a special on the Four Wars framework and the AI engineer scene. We love AI Breakdown as one of the best daily podcasts to keep up on AI news, so we were especially excited to be back on. Watch out and take[00:01:12] NLW: care[00:01:13] AI Breakdown Part 1[00:01:13] NLW: today on the AI breakdown. Part one of my conversation with Alessio and Swyx from Latent Space.[00:01:19] NLW: All right, fellas, welcome back to the AI Breakdown. How are you doing? I'm good. Very good. With the last, the last time we did this show, we were like, oh yeah, let's do check ins like monthly about all the things that are going on and then. Of course, six months later, and, you know, the, the, the world has changed in a thousand ways.[00:01:36] NLW: It's just, it's too busy to even, to even think about podcasting sometimes. But I, I'm super excited to, to be chatting with you again. I think there's, there's a lot to, to catch up on, just to tap in, I think in the, you know, in the beginning of 2024. And, and so, you know, we're gonna talk today about just kind of a, a, a broad sense of where things are in some of the key battles in the AI space.[00:01:55] NLW: And then the, you know, one of the big things that I, that I'm really excited to have you guys on here for us to talk about where, sort of what patterns you're seeing and what people are actually trying to build, you know, where, where developers are spending their, their time and energy and, and, and any sort of, you know, trend trends there, but maybe let's start I guess by checking in on a framework that you guys actually introduced, which I've loved and I've cribbed a couple of times now, which is this sort of four wars of the, of the AI stack.[00:02:20] Four Wars[00:02:20] NLW: Because first, since I have you here, I'd love, I'd love to hear sort of like where that started gelling. And then and then maybe we can get into, I think a couple of them that are you know, particularly interesting, you know, in the, in light of[00:02:30] swyx: some recent news. Yeah, so maybe I'll take this one. So the four wars is a framework that I came up with around trying to recap all of 2023.[00:02:38] swyx: I tried to write sort of monthly recap pieces. And I was trying to figure out like what makes one piece of news last longer than another or more significant than another. And I think it's basically always around battlegrounds. Wars are fought around limited resources. And I think probably the, you know, the most limited resource is talent, but the talent expresses itself in a number of areas.[00:03:01] swyx: And so I kind of focus on those, those areas at first. So the four wars that we cover are the data wars, the GPU rich/poor war, the multimodal war, and the RAG and Ops war. And I think you actually did a dedicated episode to that, so thanks for covering that. Yeah, yeah.[00:03:18] NLW: Not only did I do a dedicated episode, I actually used that.[00:03:22] NLW: I can't remember if I told you guys.
I did give you big shoutouts. But I used it as a framework for a presentation at Intel's big AI event that they hold each year, where they have all their folks who are working on AI internally. And it totally resonated. That's amazing. Yeah, so, so, what got me thinking about it again is specifically this Inflection news that we recently had, this sort of, you know, basically, I can't imagine that anyone who's listening wouldn't have thought about it, but, you know, Inflection is one of the big contenders, right?[00:03:53] NLW: I think probably most folks would have put them, you know, just a half step behind the Anthropics and OpenAIs of the world in terms of labs, but it's a company that raised $1.3 billion last year, less than a year ago. Reid Hoffman's a co-founder, Mustafa Suleyman, who's a co-founder of DeepMind, you know, so it's like, this is not a small startup, let's say, at least in terms of perception.[00:04:13] NLW: And then we get the news that basically most of the team, it appears, is heading over to Microsoft and they're bringing in a new CEO. And you know, I'm interested in, in, in kind of your take on how much that reflects, like hold aside, I guess, you know, all the other things that it might be about, how much it reflects this sort of the, the stark,[00:04:32] NLW: brutal reality of competing in the frontier model space right now. And, you know, just the access to compute.[00:04:38] Alessio: There are a lot of things to say. So first of all, there's always somebody who's more GPU rich than you. So Inflection is GPU rich by startup standards. I think about 22,000 H100s, but obviously that pales compared to the, to Microsoft.[00:04:55] Alessio: The other thing is that this is probably good news, maybe for the startups. It's like being GPU rich, it's not enough. You know, like I think they were building something pretty interesting in, in Pi, their own model, their own kind of experience. But at the end of the day, you're the interface that people consume as end users.[00:05:13] Alessio: It's really similar to a lot of the others. So and we'll tell, talk about GPT-4 and Claude 3 and all this stuff. The GPU poor are doing something that the GPU rich are not interested in, you know. We just had our AI center of excellence at Decibel and one of the AI leads at one of the big companies was like, Oh, we just saved $10 million and we use these models to do a translation, you know, and that's it.[00:05:39] Alessio: It's not, it's not AGI, it's just translation. So I think like the Inflection part is maybe a wake-up call to a lot of startups that say, Hey, you know, trying to get as much capital as possible, try and get as many GPUs as possible. Good. But at the end of the day, it doesn't build a business, you know, and maybe what Inflection, I don't, I don't, again, I don't know the reasons behind the Inflection choice, but if you say, I don't want to build my own company that has $1.3[00:06:05] Alessio: billion and I want to go do it at Microsoft, it's probably not a resources problem. It's more of strategic decisions that you're making as a company. So yeah, that was kind of my take on it.[00:06:15] swyx: Yeah, and I guess on my end, two things actually happened yesterday.
It was a little bit quieter news, but Stability AI had some pretty major departures as well.[00:06:25] swyx: And you may not be considering it, but Stability is actually also a GPU rich company in the sense that they were the first new startup in this AI wave to brag about how many GPUs they have and that you should join them. And you know, Emad is definitely a GPU trader in some sense from his hedge fund days.[00:06:43] swyx: So Robin Rombach and, like, most of the Stable Diffusion 3 people left Stability yesterday as well. So yesterday was kind of like a big news day for the GPU rich companies, both Inflection and Stability having sort of wind taken out of their sails. I think, yes, it's a data point in favor of, like, just because you have the GPUs doesn't mean you automatically win.[00:07:03] swyx: And I think, you know, kind of I'll echo what Alessio says there. But in general also, like, I wonder if this is like the start of a major consolidation wave, just in terms of, you know, I think that there was a lot of funding last year and, you know, the business models have not, you know, all worked out very well.[00:07:19] swyx: Even Inflection couldn't do it. And so I think maybe that's the start of a small consolidation wave. I don't think that's like a sign of AI winter. I keep looking for AI winter coming. I think this is kind of like a brief cold front. Yeah,[00:07:34] NLW: it's super interesting. So I think a bunch of A bunch of stuff here.[00:07:38] NLW: One is, I think, to both of your points, there, in some ways, there, there had already been this very clear demarcation between these two sides where, like, the GPU poors, to use the terminology, like, just weren't trying to compete on the same level, right? You know, the vast majority of people who have started something over the last year, year and a half, call it, were racing in a different direction.[00:07:59] NLW: They're trying to find some edge somewhere else. They're trying to build something different. If they're, if they're really trying to innovate, it's in different areas. And so it's really just this very small handful of companies that are in this like very, you know, it's like the Coheres and Jaspers of the world that like this sort of, you know, that are that are just sort of a little bit less resourced than, you know, than the other set that I think that this potentially even applies to, you know, everyone else that could clearly demarcate it into these two, two sides.[00:08:26] NLW: And there's only a small handful kind of sitting uncomfortably in the middle, perhaps. Let's, let's come back to the idea of, of the sort of AI winter or, you know, a cold front or anything like that. So this is something that I, I spent a lot of time kind of thinking about and noticing. And my perception is that the vast majority of the folks who are trying to call for sort of, you know, a trough of disillusionment or, you know, a shifting of the phase to that are people who either, A, just don't like AI for some other reason, and there's plenty of that, you know, people who are saying, look, they're doing way worse than they ever thought.[00:09:03] NLW: You know, there's a lot of sort of confirmation bias kind of thing going on. Or two, media that just needs a different narrative, right? Because they're sort of sick of, you know, telling the same story.
Same thing happened last summer, when every outlet jumped on the "ChatGPT has its first down month" story to try to really, like, kind of hammer this idea that the hype was too much.[00:09:24] NLW: Meanwhile, you have, you know, just ridiculous levels of investment from enterprises, you know, coming in. You have, you know, huge, huge volumes of, you know, individual behavior change happening. But I do think that there's nothing incoherent, sort of to your point, Swyx, about that and the consolidation period.[00:09:42] NLW: Like, you know, if you look right now, for example, there are, I don't know, probably 25 or 30 credible, like, build-your-own-chatbot platforms that, you know, a lot of which have, you know, raised funding. There's no universe in which all of those are successful across, you know, even with a, even, even with a total addressable market of every enterprise in the world, you know, you're just inevitably going to see some amount of consolidation.[00:10:08] NLW: Same with, you know, image generators. There are, if you look at A16Z's top 50 consumer AI apps, just based on, you know, web traffic or whatever, they're still like, I don't know, a half dozen or 10 or something, like, some ridiculous number of, like, basically things like Midjourney or DALL-E 3. And it just seems impossible that we're gonna have that many, you know, ultimately as, as, as sort of, you know, going, going concerns.[00:10:33] NLW: So, I don't know. I, I, I think that there will be inevitable consolidation 'cause you know, it's, it's also what kind of like venture rounds are supposed to do. You're not, not everyone who gets a seed round is supposed to get to series A and not everyone who gets a series A is supposed to get to series B.[00:10:46] NLW: That's sort of the natural process. I think it will be tempting for a lot of people to try to infer from that something about AI not being as sort of big or as sort of relevant as it was hyped up to be. But I, I kind of think that's the wrong conclusion to come to.[00:11:02] Alessio: I would say the experimentation[00:11:04] Alessio: surface is a little smaller for image generation. So if you go back maybe six, nine months, most people will tell you, why would you build a coding assistant when like Copilot and GitHub are just going to win everything because they have the data and they have all the stuff. If you fast forward today, a lot of people use Cursor, everybody was excited about the Devin release on Twitter.[00:11:26] Alessio: There are a lot of different ways of attacking the market that are not completion of code in the IDE. And even Cursor, like they evolved beyond single line to, like, chat, to multi-line edits and, and all that stuff. Image generation, I would say, yeah, as a, just as from what I've seen, like maybe the product innovation has slowed down at the UX level and people are improving the models.[00:11:50] Alessio: So the race is like, how do I make better images? It's not like, how do I make the user interact with the generation process better? And that gets tough, you know? It's hard to like really differentiate yourselves. So yeah, that's kind of how I look at it. And when we think about multimodality, maybe the reason why people got so excited about Sora is like, oh, this is like a completely... It's not a better image model. This is like a completely different thing, you know?
And I think the creative mind is always looking for something that impacts the viewer in a different way, you know, like they really want something different, versus the developer mind. It's like, Oh, I, I just, I have this like very annoying thing I want better.[00:12:32] Alessio: I have these like very specific use cases that I want to go after. So it's just different. And that's why you see a lot more companies in image generation. But I agree with you that, if you fast forward there, there's not going to be 10 of them, you know, it's probably going to be one or[00:12:46] swyx: two. Yeah, I mean, to me, that's why I call it a war.[00:12:49] swyx: Like, individually, all these companies can make a story that kind of makes sense, but collectively, they cannot all be true. Therefore, they all, there is some kind of fight over limited resources here. Yeah, so[00:12:59] NLW: it's interesting. We wandered very naturally into sort of another one of these wars, which is the multimodality kind of idea, which is, you know, basically a question of whether it's going to be these sort of big everything models that end up winning or whether, you know, you're going to have really specific things, you know, like something, you know, DALL-E 3 inside of sort of OpenAI's larger models versus, you know, a Midjourney or something like that.[00:13:24] NLW: And at first, you know, I was kind of thinking like, for most of the last, call it six months or whatever, it feels pretty definitively both-and in some ways, you know, and that you're, you're seeing just like great innovation on sort of the everything models, but you're also seeing lots and lots happen at sort of the level of kind of individual use cases.[00:13:45] Sora[00:13:45] NLW: But then Sora comes along and just like obliterates what I think anyone thought, you know, where we were when it comes to video generation. So how are you guys thinking about this particular battle or war at the moment?[00:13:59] swyx: Yeah, this was definitely a both-and story, and Sora tipped things one way for me, in terms of scale being all you need.[00:14:08] swyx: And the benefit, I think, of having multiple models being developed under one roof. I think a lot of people aren't aware that Sora was developed in a similar fashion to DALL-E 3. And DALL-E 3 had a very interesting paper out where they talked about how they sort of bootstrapped their synthetic data based on GPT-4 Vision and GPT-4.[00:14:31] swyx: And, and it was just all, like, really interesting, like, if you work on one modality, it enables you to work on other modalities, and all that is more, is, is more interesting. I think it's beneficial if it's all in the same house, whereas the individual startups who don't, who sort of carve out a single modality and work on that, definitely won't have the state of the art stuff on helping them out on synthetic data.[00:14:52] swyx: So I do think like, the balance is tilted a little bit towards the God model companies, which is challenging for the, for the, for the sort of dedicated modality companies. But everyone's carving out different niches. You know, like we just interviewed Suno AI, the sort of music model company, and, you know, I don't see OpenAI pursuing music anytime soon.[00:15:12] Suno[00:15:12] swyx: Yeah,[00:15:13] NLW: Suno's been phenomenal to play with.
Suno has done that rare thing where, which I think a number of different AI product categories have done, where people who don't consider themselves particularly interested in doing the thing that the AI enables find themselves doing a lot more of that thing, right?[00:15:29] NLW: Like, it'd be one thing if just musicians were excited about Suno and using it, but what you're seeing is tons of people who just like music all of a sudden like playing around with it and finding themselves kind of down that rabbit hole, which I think is kind of like the highest compliment that you can give one of these startups at the[00:15:45] swyx: early days of it.[00:15:46] swyx: Yeah, I, you know, I, I asked them directly, you know, in the interview about whether they consider themselves Midjourney for music. And he had a more sort of nuanced response there, but I think that probably the business model is going to be very similar because he's focused on the B2C element of that. So yeah, I mean, you know, just to, just to tie back to the question about, you know, large multimodality companies versus small dedicated modality companies.[00:16:10] swyx: Yeah, highly recommend people to read the Sora blog posts and then read through to the DALL-E blog posts because they, they strongly correlated themselves with the same synthetic data bootstrapping methods as DALL-E. And I think once you make those connections, you're like, oh, like it, it, it is beneficial to have multiple state of the art models in house that all help each other.[00:16:28] swyx: And these, this, that's the one thing that a dedicated modality company cannot do.[00:16:34] The GPT-4 Class Landscape[00:16:34] NLW: So I, I wanna jump, I wanna kind of build off that and, and move into the sort of like updated GPT-4 class landscape. 'Cause that's obviously been another big change over the last couple months. But for the sake of completeness, is there anything that's worth touching on with, with sort of the data quality or sort of the RAG and Ops wars, just in terms of, you know, anything that's changed, I guess, for you fundamentally in the last couple of months about where those things stand.[00:16:55] swyx: So I think we're going to talk about RAG for the Gemini and Claude discussion later. And so maybe briefly discuss the data piece.[00:17:03] Data War: Reddit x Google[00:17:03] swyx: I think maybe the only new thing was this Reddit deal with Google for like a 60 million dollar deal just ahead of their IPO, very conveniently turning Reddit into an AI data company. Also, very, very interestingly, a non exclusive deal, meaning that Reddit can resell that data to someone else. And it probably does become table stakes.[00:17:23] swyx: A lot of people don't know, but a lot of the WebText dataset that was originally built for GPT-1, 2, and 3 was actually scraped from Reddit, at least using the sort of vote scores. And I think, I think that's a, that's a very valuable piece of information. So like, yeah, I think people are figuring out how to pay for data.[00:17:40] swyx: People are suing each other over data. This, this, this war is, you know, definitely very, very much heating up. And I don't think, I don't see it getting any less intense. I, you know, next to GPUs, data is going to be the most expensive thing in, in a model stack company. And.
You know, a lot of people are resorting to synthetic versions of it, which may or may not be kosher based on how far along or how commercially blessed the, the forms of creating that synthetic data are.[00:18:11] swyx: I don't know if Alessio, you have any other interactions with like data source companies, but that's my two cents.[00:18:17] Alessio: Yeah yeah, I actually saw Quentin Anthony from EleutherAI at GTC this week. He's also been working on this. I saw Teknium. He's also been working on the data side. I think especially in open source, people are like, okay, if everybody is putting the gates up, so to speak, to the data, we need to make it easier for people that don't have 50 million a year to get access to good data sets.[00:18:38] Alessio: And Jensen, at his keynote, he did talk about synthetic data a little bit. So I think that's something that we'll definitely hear more and more of in the enterprise, which never bodes well, because then all the, all the people with the data are like, Oh, the enterprises want to pay now? Let me, let me put a "pay here" Stripe link so that they can give me 50 million.[00:18:57] Alessio: But it worked for Reddit. I think the stock is up 40 percent today after opening. So yeah, I don't know if it's all about the Google deal, but it's obviously Reddit has been one of those companies where, hey, you got all this like great community, but like, how are you going to make money? And like, they try to sell the avatars.[00:19:15] Alessio: I don't know if that's a great business for them. The, the data part sounds, as an investor, you know, the data part sounds a lot more interesting than, than consumer[00:19:25] swyx: cosmetics. Yeah, so I think, you know, there's more questions around data, you know, I think a lot of people are talking about the interview that Mira Murati did with the Wall Street Journal, where she, like, just basically had no, had no good answer for where they got the data for Sora.[00:19:39] swyx: I, I think this is where, you know, there's, it's in nobody's interest to be transparent about data, and it's, it's kind of sad for the state of ML and the state of AI research, but it is what it is. We, we have to figure this out as a society, just like we did for music and music sharing. You know, in, in sort of the Napster to Spotify transition, and that might take us a decade.[00:19:59] swyx: Yeah, I[00:20:00] NLW: do. I, I agree. I think, I think that you're right to identify it, not just as that sort of technical problem, but as one where society has to have a debate with itself. Because I think that there's, if you rationally within it, there's great kind of points on all sides, not to be the sort of, you know, person who sits in the middle constantly, but it's why I think a lot of these legal decisions are going to be really important because, you know, the job of judges is to listen to all this stuff and try to come to things and then have other judges disagree.[00:20:24] NLW: And, you know, and have the rest of us all debate at the same time. By the way, as a total aside, I feel like the synthetic data right now is like eggs in the 80s and 90s. Like, whether they're good for you or bad for you, like, you know, we, we get one study that's like synthetic data, you know, there's model collapse.[00:20:42] NLW: And then we have like a hint that Llama 2, you know, the most high performance version of it, which was the one they didn't release, was trained on synthetic data. So maybe it's good.
It's like, I just feel like every, every other week I'm seeing something sort of different about whether it's good or bad for, for these models.[00:20:56] swyx: Yeah. The branding of this is pretty poor. I would kind of tell people to think about it like cholesterol. There's good cholesterol, bad cholesterol. And you can have, you know, good amounts of both. But at this point, it is absolutely without a doubt that most large models from here on out will all be trained on some kind of synthetic data, and that is not a bad thing.[00:21:16] swyx: There are ways in which you can do it poorly. Whether it's commercial, you know, in terms of commercial sourcing or in terms of the model performance. But it's without a doubt that good synthetic data is going to help your model. And this is just a question of like where to obtain it and what kinds of synthetic data are valuable.[00:21:36] swyx: You know, if even like AlphaGeometry, you know, was, was a really good example from like earlier this year.[00:21:42] NLW: If you're using the cholesterol analogy, then my, then my egg thing can't be that far off. Let's talk about the sort of the state of the art and the, and the GPT-4 class landscape and how that's changed.[00:21:53] Gemini 1.5 vs Claude 3[00:21:53] NLW: Cause obviously, you know, sort of the, the two big things or a couple of the big things that have happened since we last talked: we're one, you know, Gemini first announcing that a model was coming and then finally it arriving, and then very soon after a sort of a different model arriving from Gemini, and, and Claude 3.[00:22:11] NLW: So I guess, you know, I'm not sure exactly where the right place to start with this conversation is, but, you know, maybe very broadly speaking, which of these do you think have made a bigger impact? Thank you.[00:22:20] Alessio: Probably the one you can use, right? So, Claude. Well, I'm sure Gemini is going to be great once they let me in, but so far I haven't been able to.[00:22:29] Alessio: I use, so I have this small podcaster thing that I built for our podcast, which does chapter creation, like named entity recognition, summarization, and all of that. Claude 3 is better than GPT-4. Claude 2 was unusable. So I use GPT-4 for everything. And then when Opus came out, I tried them again side by side and I posted it on, on Twitter as well.[00:22:53] Alessio: Claude is better. It's very good, you know, it's much better, it seems to me, it's much better than GPT-4 at doing writing that is more, you know, I don't know, it just got good vibes, you know, like the GPT-4 text, you can tell it's like GPT-4, you know, it's like, it always uses certain types of words and phrases and, you know, maybe it's just me because I've now done it for, you know... So, I've read like 75, 80 generations of these things next to each other.[00:23:21] Alessio: Claude is really good. I know everybody is freaking out on twitter about it, my only experience of "this is much better" has been on the podcast use case. But I know that, you know, Karan from Nous Research is a very big pro, pro-Opus person. So, I think that's also it's great to have people that actually care about other models.[00:23:40] Alessio: You know, I think so far to a lot of people, maybe Anthropic has been the sibling in the corner, you know, it's like Claude releases a new model and then OpenAI releases Sora and like, you know, there are like all these different things, but yeah, the new models are good. It's interesting.[00:23:55] NLW: My my perception is definitely that just, just observationally, Claude 3 is certainly the first thing that I've seen where lots of people,
It's interesting.[00:23:55] NLW: My my perception is definitely that just, just observationally, Cloud 3 is certainly the first thing that I've seen where lots of people.[00:24:06] NLW: They're, no one's debating evals or anything like that. They're talking about the specific use cases that they have, that they used to use chat GPT for every day, you know, day in, day out, that they've now just switched over. And that has, I think, shifted a lot of the sort of like vibe and sentiment in the space too.[00:24:26] NLW: And I don't necessarily think that it's sort of a A like full you know, sort of full knock. Let's put it this way. I think it's less bad for open AI than it is good for anthropic. I think that because GPT 5 isn't there, people are not quite willing to sort of like, you know get overly critical of, of open AI, except in so far as they're wondering where GPT 5 is.[00:24:46] NLW: But I do think that it makes, Anthropic look way more credible as a, as a, as a player, as a, you know, as a credible sort of player, you know, as opposed to to, to where they were.[00:24:57] Alessio: Yeah. And I would say the benchmarks veil is probably getting lifted this year. I think last year. People were like, okay, this is better than this on this benchmark, blah, blah, blah, because maybe they did not have a lot of use cases that they did frequently.[00:25:11] Alessio: So it's hard to like compare yourself. So you, you defer to the benchmarks. I think now as we go into 2024, a lot of people have started to use these models from, you know, from very sophisticated things that they run in production to some utility that they have on their own. Now they can just run them side by side.[00:25:29] Alessio: And it's like, Hey, I don't care that like. The MMLU score of Opus is like slightly lower than GPT 4. It just works for me, you know, and I think that's the same way that traditional software has been used by people, right? Like you just strive for yourself and like, which one does it work, works best for you?[00:25:48] Alessio: Like nobody looks at benchmarks outside of like sales white papers, you know? And I think it's great that we're going more in that direction. We have a episode with Adapt coming out this weekend. I'll and some of their model releases, they specifically say, We do not care about benchmarks, so we didn't put them in, you know, because we, we don't want to look good on them.[00:26:06] Alessio: We just want the product to work. And I think more and more people will, will[00:26:09] swyx: go that way. Yeah. I I would say like, it does take the wind out of the sails for GPT 5, which I know where, you know, Curious about later on. I think anytime you put out a new state of the art model, you have to break through in some way.[00:26:21] swyx: And what Claude and Gemini have done is effectively take away any advantage to saying that you have a million token context window. Now everyone's just going to be like, Oh, okay. Now you just match the other two guys. 
And so that puts an insane amount of pressure on what GPT 5 is going to be, because all the other models are multimodal, all the other models are long context, all the other models have perfect recall. GPT 5 has to match everything and do more to not be a flop.[00:26:58] AI Breakdown Part 2[00:26:58] NLW: Hello friends, back again with part two. If you haven't heard part one of this conversation, I suggest you go check it out, but to be honest, they are kind of actually separable. In this conversation, we get into a topic that I think Alessio and Swyx are very well positioned to discuss, which is what developers care about right now, what people are trying to build around.[00:27:16] NLW: I honestly think that one of the best ways to see the future in an industry like AI is to try to dig deep on what developers and entrepreneurs are attracted to build, even if it hasn't made it to the news pages yet. So consider this your preview of six months from now, and let's dive in. Let's bring it to the GPT 5 conversation.[00:27:33] Next Frontiers: Llama 3, GPT-5, Gemini 2, Claude 4[00:27:33] NLW: I mean, so I think that that's a great sort of assessment of just how the stakes have been raised. So I guess maybe I'll frame this less as a question, just sort of something that I've been watching: right now, the only thing that makes sense to me with how[00:27:50] NLW: fundamentally unbothered and unstressed OpenAI seems about everything is that they're sitting on something that does meet all that criteria, right? Because, I mean, even in the Lex Fridman interview that Altman recently did, you know, he's talking about other things coming out first. He's talking about, he's just like, listen, he's good and he could play nonchalant, you know, if he wanted to.[00:28:13] NLW: So I don't want to read too much into it. But, you know, they've had so long to work on this; like, unless we are really meaningfully running up against some constraint, it just feels like, you know, there's going to be some massive increase. But I don't know. What do you guys think?[00:28:28] swyx: Hard to speculate.[00:28:29] swyx: You know, at this point, they're pretty good at PR and they're not going to tell you anything that they don't want to. And they can tell you one thing and change their minds the next day. So it's really, you know... I've always said that model version numbers are just marketing exercises: like, they have something and it's always improving, and at some point you just cut it and decide to call it GPT 5.[00:28:50] swyx: And it's more just about defining an arbitrary level at which they're ready, and it's up to them what ready means. We definitely did see some leaks on GPT 4.5, as I think a lot of people reported, and I'm not sure if you covered it. So it seems like there might be an intermediate release. But I did feel, coming out of the Lex Fridman interview, that GPT 5 was nowhere near.[00:29:11] swyx: And you know, it was kind of a sharp contrast to Sam talking at Davos in February, saying that, you know, it was his top priority. So I find it hard to square. And honestly, like, there's also no point reading too much into the tea leaves of what any one person says about something that hasn't happened yet, or a decision that hasn't been taken yet.[00:29:31] swyx: Yeah, that's my 2 cents about it.
Like, calm down, let's just build.[00:29:35] Alessio: Yeah. The February rumor was that they were gonna work on AI agents, so I don't know, maybe they're like, yeah,[00:29:41] swyx: they had two, I think two agent projects, right? One desktop agent, and one sort of more general, sort of GPTs-like agent. And then Andrej left, so he was supposed to be the guy on that.[00:29:52] swyx: What did Andrej see? What did he see? I don't know. What did he see?[00:29:56] Alessio: I don't know. But again, it's just like the rumors are always floating around, you know. But I think, like, this is, you know, we're not going to get to the end of the year without GPT 5, you know; that's definitely happening. I think the biggest question is, like, are Anthropic and Google[00:30:13] Alessio: increasing the pace? You know, like, is Claude 4 coming out in, like, 12 months, like, nine months? What's the deal? Same with Gemini. They went from 1 to 1.5 in, like, five days or something. So when's Gemini 2 coming out, you know? Is that going to be soon? I don't know.[00:30:31] Alessio: There are a lot of speculations, but the good thing is that now you can see a world in which OpenAI doesn't rule everything. You know, so that's the best news that everybody got, I would say.[00:30:43] swyx: Yeah, and Mistral Large also dropped in the last month. And, you know, not quite GPT 4 class, but very good from a new startup.[00:30:52] swyx: So yeah, we now have a slowly changing landscape, you know. In my January recap, I was complaining that nothing's changed in the landscape for a long time. But now we do exist in a world, sort of a multipolar world, where Claude and Gemini are legitimate challengers to GPT 4, and hopefully more will emerge as well, hopefully from Meta.[00:31:11] Open Source Models - Mistral, Grok[00:31:11] NLW: So to speak, let's actually talk about sort of the open source side of this for a minute. So Mistral Large, notable because it's not available open source in the same way that other things are, although I think my perception is that the community has largely given them a pass. Like, the community largely recognizes that they want them to keep building open source stuff, and they have to find some way to fund themselves to do that.[00:31:27] NLW: And so they kind of understand that they've got to figure out how to eat. But we've got, so, you know, there's Mistral, there's, I guess, Grok now, which is, you know, Grok 1 is from, from October, and is open[00:31:38] swyx: sourced, yeah. Yeah, sorry, I thought you meant Groq the chip company.[00:31:41] swyx: No, no, no, yeah, you mean Twitter Grok.[00:31:43] NLW: Although Groq the chip company, I think, is even more interesting in some ways. And then there's, you know, obviously Llama 3, the one that sort of everyone's wondering about too. And, you know, my sense of that, from the little bit that Zuckerberg was saying about Llama 3 earlier this year, suggested that, at least from an ambition standpoint, he was not thinking about how do I make sure that, you know, Meta keeps the open source throne, you know, vis a vis Mistral.[00:32:09] NLW: He was thinking about how you go after, you know, how he releases a thing that's, you know, every bit as good as whatever OpenAI is on at that point.[00:32:16] Alessio: Yeah.
From what I heard in the hallways at GTC, Llama 3, the biggest model, will be, you know, 260 to 300 billion parameters, so that's quite large.[00:32:26] Alessio: That's not an open source model. You know, you cannot give people a 300 billion parameter model and ask them to run it. You know, it's very compute intensive. So I think it[00:32:35] swyx: can be open source. It's just going to be difficult to run, but that's a separate question.[00:32:39] Alessio: It's more like, as you think about what they're doing it for, you know, it's not about empowering the person running[00:32:45] Alessio: Llama on their laptop. It's like, oh, you can actually now use this to go after OpenAI, to go after Anthropic, to go after some of these companies at, like, the middle complexity level, so to speak. Yeah. So obviously, you know, we just had Soumith Chintala on the podcast; they're doing a lot here, they're making PyTorch better.[00:33:03] Alessio: You know, that's kind of, like, maybe a little bit of a short of Nvidia, in a way, trying to get some of the CUDA dominance out of it. Yeah, no, it's great. I love the Zuck destroying a lot of monopolies arc. You know, it's been very entertaining. Let's bridge[00:33:18] NLW: into the sort of big tech side of this, because, so I think actually when I did my episode, I added this as an additional war, as something that I'm paying attention to.[00:33:29] NLW: So we've got Microsoft's moves with Inflection, which I think potentially are being read as a shift vis a vis the relationship with OpenAI, which the sort of Mistral Large relationship seems to reinforce as well. We have Apple potentially entering the race, finally, you know, giving up Project Titan and kind of trying to spend more effort on this.[00:33:50] NLW: Although, counterpoint, we also have them talking about, or there being reports of, a deal with Google, which, you know, is interesting to sort of see what their strategy there is. And then, you know, Meta's been largely quiet; we kind of just talked about the main piece. And then there's spoilers like Elon.[00:34:07] NLW: I mean, you know, which of those things has been most interesting to you guys as you think about what's going to shake out for the rest of this[00:34:13] Apple MM1[00:34:13] swyx: year? I'll take a crack. So the reason we don't have a fifth war for the Big Tech Wars is that's one of those things where I just feel like we don't cover it differently from other media channels, I guess.[00:34:26] swyx: Sure, yeah. In our anti-interestingness, we actually say, like, we try not to cover the Big Tech Game of Thrones, or it's proxied through Twitter, you know, and all the other four wars anyway, so there's just a lot of overlap. Yeah, I think absolutely, personally, the most interesting one is Apple entering the race.[00:34:41] swyx: They actually announced their first large language model that they trained themselves. It's like a 30 billion multimodal model. People weren't that impressed, but it was, like, the first time that Apple has kind of showcased that, yeah, we're training large models in house as well. Of course, like, they might be doing this deal with Google.[00:34:57] swyx: I don't know. It sounds very rumor-y to me. And probably, if it's on device, it's going to be a smaller model, so something like a Gemma. It's going to be smarter autocomplete.
I don't know what to say. I'm still here dealing with, like, Siri, which probably hasn't been updated since God knows when it was introduced.[00:35:16] swyx: It's horrible. You know, it makes me so angry. So, one, as an Apple customer and user, I'm just hoping for better AI on Apple itself. But two, they are the gold standard when it comes to local devices, personal compute, and trust; like, you trust them with your data. And I think that's what a lot of people are looking for in AI: they love the benefits of AI, they don't love the downsides, which is that you have to send all your data to some cloud somewhere.[00:35:45] swyx: And some of this data that we're going to feed AI is just the most personal data there is. So, Apple being, like, one of the most trusted personal data companies, I think it's very important that they enter the AI race, and I hope to see more out of them.[00:35:58] Alessio: To me, the biggest question with the Google deal is, like, who's paying whom?[00:36:03] Alessio: Because for the browser, Google pays Apple, like, 18, 20 billion every year to be the default search engine. Is Google going to pay to have Gemini, or is Apple paying Google to have Gemini? I think that's what I'm most interested to figure out, because with the browser, it's the entry point to the thing.[00:36:21] Alessio: So it's really valuable to be the default. That's why Google pays. But I wonder if, like, the perception in AI is going to be like, hey, you just have to have a good local model on my phone to be worth me purchasing your device. And that would kind of drive Apple to be the one buying the model. But then, like Shawn said, they're doing MM1 themselves.[00:36:40] Alessio: So are they saying, we do models, but they're not as good as the Google ones? I don't know. The whole thing is really confusing, but it makes for great meme material on Twitter.[00:36:51] swyx: Yeah, I mean, I think, like, they are, possibly more than OpenAI and Microsoft and Amazon, the most full stack company there is in computing. And so, like, they own the chips, man.[00:37:05] swyx: Like, they manufacture everything, so if there was a company that could, you know, seriously challenge the other AI players, it would be Apple. And I don't think it's as hard as self driving. So, like, maybe they've just been investing in the wrong thing this whole time. We'll see.[00:37:21] swyx: Wall Street certainly thinks[00:37:22] NLW: so. Wall Street loved that move, man. There was a big sigh of relief. Well, let's move away from sort of the big stuff. I mean, I think to both of your points, it's going to...[00:37:33] Meta's $800b AI rebrand[00:37:33] NLW: Can I, can[00:37:34] swyx: I, can I jump in with a factoid about this Wall Street thing? I went and looked at when Meta went from being a VR company to an AI company.[00:37:44] swyx: And I think the stock, I'm trying to look up the details now. The stock has gone up 187% since Llama 1. Yeah. Which is $830 billion in market value created in the past year. Yeah. Yeah.[00:37:57] NLW: It's like, remember, if you guys haven't... Yeah.
If you haven't seen the chart, it's actually, like, remarkable.[00:38:02] NLW: If you draw a little[00:38:03] swyx: arrow on it, it's like, no, we're an AI company now, and forget the VR thing.[00:38:10] NLW: It is interesting. No, I think, Alessio, you called it sort of like Zuck's Disruptor Arc or whatever. He really does. He is in the midst of a total, you know, I don't know if it's a redemption arc or it's just something different, where, you know, he's sort of the spoiler.[00:38:25] NLW: Like, people loved him just freestyle talking about why he thought they had a better headset than Apple. But even if they didn't agree, they just loved it. He was going direct to camera and talking about it for, you know, five minutes or whatever. So that's a fascinating shift that I don't think anyone had on their bingo card, you know, whatever, two years ago.[00:38:41] NLW: Yeah. Yeah,[00:38:42] swyx: we still[00:38:43] Alessio: didn't see Zuck fight Elon though, so[00:38:45] swyx: that's what I'm really looking forward to. I mean, hey, don't write it off, you know; maybe these things just take a while to happen. But we need to see that fight in the Coliseum. No, I think, you know, in terms of, like, self management, life leadership, I think there's a lot of lessons to learn from him.[00:38:59] swyx: You know, you might kind of quibble with, like, the social impact of Facebook, but just himself, in terms of personal growth and, you know, perseverance through, like, a lot of change and, you know, everyone throwing stuff his way, I think there's a lot to learn from Zuck, which is crazy 'cause he's my age.[00:39:18] swyx: Yeah. Right.[00:39:20] AI Engineer landscape - from baby AGIs to vertical Agents[00:39:20] NLW: Awesome. Well, so one of the big things that I think you guys have, you know, distinct and unique insight into, being where you are and what you work on, is, you know, what developers are getting really excited about right now. And by that I mean, on the one hand, certainly, you know, startups that have actually formalized and formed as startups, but also, you know, just in terms of, like, what people are spending their nights and weekends on, what they're, you know, coming to hackathons to do.[00:39:45] NLW: And, you know, I think it's such a fascinating indicator for where things are headed. Like, if you zoom back a year, right now was right when everyone was getting so, so excited about AI agent stuff, right? AutoGPT and BabyAGI. And these things were like, if you dropped anything on YouTube about those, like, instantly tens of thousands of views.[00:40:07] NLW: I know because I had, like, a 50,000 view video, like, the second day that I was doing the show on YouTube, you know, because I was talking about AutoGPT. And so anyways, you know, obviously that's sort of not totally come to fruition yet, but what are some of the trends in what you guys are seeing in terms of people's interest and what people are building?[00:40:24] Alessio: I can start maybe with the agents part, and then I know Shawn is doing a diffusion meetup tonight. There's a lot of different things. The agent wave has been the most interesting kind of, like, dream to reality arc.
So AutoGPT, I think they went from zero to, like, 125,000 GitHub stars in six weeks, and then one year later, they have 150,000 stars.[00:40:49] Alessio: So there's kind of been a big plateau. I mean, you might say there are just not that many people that can star it. You know, everybody already starred it. But the promise of, hey, I'll just give you a goal, and you do it, I think is, like, amazing for getting people's imagination going. You know, they're like, oh, wow, this is awesome.[00:41:08] Alessio: Everybody can try this to do anything. But then as technologists, you're like, well, that's just, like, not possible, you know; we would have, like, solved everything. And I think it takes a little bit to go from the promise and the hope that people show you, to then trying it yourself and coming back to say, okay, this is not really working for me.[00:41:28] Alessio: And David Luan from Adept, you know, in our episode, he specifically said: we don't want to do a bottom up product. You know, we don't want something that everybody can just use and try, because it's really hard to get it to be reliable. So we're seeing a lot of companies doing vertical agents that are narrow for a specific domain, and they're very good at something.[00:41:49] Alessio: Mike Conover, who was at Databricks before, is also a friend of Latent Space. He's doing this new company called BrightWave doing AI agents for financial research, and that's it, you know, and they're doing very well. There are other companies doing it in security, doing it in compliance, doing it in legal.[00:42:08] Alessio: All of these things where, like, nobody just wakes up and says, oh, I cannot wait to go on AutoGPT and ask it to do a compliance review of my thing. You know, it's just not what inspires people. So I think the gap on the developer side has been that the more bottoms-up hacker mentality is trying to build these, like, very generic agents that can do a lot of open ended tasks.[00:42:30] Alessio: And then the more business side of things is like, hey, if I want to raise my next round, I cannot just, like, sit around and mess around with, like, super generic stuff. I need to find a use case that really works. And I think that that is true for a lot of folks. In parallel, you have a lot of companies doing evals.[00:42:47] Alessio: There are dozens of them that just want to help you measure how good your models are doing. Again, if you build evals, you need to also have a constrained surface area to actually figure out whether or not it's good, right? Because you cannot eval everything under the sun. So that's another category where, from the startup pitches that I've seen, there's a lot of interest in the enterprise.[00:43:11] Alessio: It's just, like, really fragmented, because the production use cases are just coming now, you know; there are not a lot of long established ones to test against. So that's kind of on the virtual agents. And then the robotics side has probably been the thing that surprised me the most at NVIDIA GTC, the amount of robots that were there; it was just, like, robots everywhere.[00:43:33] Alessio: Like, both in the keynote and then on the show floor, you would have Boston Dynamics dogs running around. There was, like, this, like, fox robot that had, like, a virtual face that, like, talked to you and, like, moved in real time. There were industrial robots.
NVIDIA did a big push on their own Omniverse thing, which is, like, this digital twin of whatever environment you're in that you can use to train the robot agents.[00:43:57] Alessio: So that kind of takes people back to the reinforcement learning days, but yeah, agents, people want them, you know, people want them. I gave a talk about the rise of the full stack employee and kind of this future where, the same way full stack engineers kind of work across the stack, in the future, every employee is going to interact with every part of the organization through agents and AI enabled tooling.[00:44:17] Alessio: This is happening. It just needs to be a lot more narrow than maybe the first approach that we took, which is just put a string in AutoGPT and pray. But yeah, there's a lot of super interesting stuff going on.[00:44:27] swyx: Yeah. Well, let's cover a lot of stuff there. I'll separate the robotics piece because I feel like that's so different from the software world.[00:44:34] swyx: But yeah, we do talk to a lot of engineers, and, you know, this is our sort of bread and butter. And I do agree that vertical agents have worked out a lot better than the horizontal ones. You know, the point I'll make here is just the reason AutoGPT and BabyAGI, you know, it's in the name, like, they were promising AGI.[00:44:53] swyx: But I think people are discovering that you cannot engineer your way to AGI. It has to be done at the model level, and all these prompt engineering hacks on top of it weren't really going to get us there in a meaningful way without much further, you know, improvements in the models. I'll go so far as to say even Devin, which is, I think, the most advanced agent that we've ever seen, still requires a lot of engineering and still probably falls apart a lot in terms of, like, practical usage.[00:45:22] swyx: Or it's just way too slow and expensive compared to, you know, what was promised in the video. So yeah, that's what happened with agents from last year. But I do see, like, vertical agents being very popular, and, like, I think the word agent might even be overused sometimes.[00:45:38] swyx: Like, people don't really care whether or not you call it an AI agent, right? Like, does it replace boring menial tasks that I do, that I might hire a human to do, or that the human who is hired to do it, like, actually doesn't really want to do? And I think there are absolutely ways, in sort of a vertical context, that you can actually go after very routine tasks that can be scaled out to a lot of, you know, AI assistants.[00:46:01] swyx: So yeah, I mean, I would basically plus one what Alessio said there. I think it's very, very promising, and I think more people should work on it, not less. Like, there's not enough people. Like, this should be the main thrust of the AI engineer: to look for use cases and go to production with them, instead of just always working on some AGI promising thing that never arrives.[00:46:21] swyx: I,[00:46:22] NLW: I can only add that I've been fiercely making tutorials behind the scenes around basically everything you can imagine with AI. We've done about 300 tutorials over the last couple of months.
And the verticalized anything, right, like, this is a solution for your particular job or role, even if it's way less interesting or kind of sexy, is, like, so radically more useful to people, in terms of intersecting with how, like, those are the ways that people are actually[00:46:50] NLW: adopting AI. In a lot of cases it's just a thing that I do over and over again. By the way, I think that's the same way that even the generalized models are getting adopted. You know, it's like, I use Midjourney for lots of stuff, but the main thing I use it for is YouTube thumbnails every day. Like, day in, day out, I will always do a YouTube thumbnail, you know, or two, with Midjourney, right?[00:47:09] NLW: And you can start to extrapolate that across a lot of things, and all of a sudden, you know, AI looks revolutionary because of a million small changes rather than one sort of big dramatic change. And I think that the verticalization of agents is sort of a great example of how that's[00:47:26] swyx: going to play out too.[00:47:28] Adept episode - Screen Multimodality[00:47:28] swyx: So I'll have one caveat here, which is, I think that because multimodal models are now commonplace, like, Claude, Gemini, OpenAI, all very, very easily multimodal, Apple's easily multimodal, all this stuff, there is a switch for agents toward sort of general desktop browsing that I think people underestimate.[00:48:04] swyx: There's a version of the agent where they're not specifically taking in text or anything; they're just watching your screen, just like someone else would, and piloting it by vision. And, you know, in the episode with David that will have dropped by the time this airs, I think that is the promise of Adept, and that is the promise of what a lot of these sort of desktop agents are, and that is the more general purpose system that could be as big as the browser, the operating system. Like, people really want to build that foundational piece of software in AI.[00:48:38] swyx: And I would see, like, the potential there for desktop agents being that you can have sort of self driving computers. You know, don't write the horizontal piece off. I just think it'll take a while to get there.[00:48:48] NLW: What else are you guys seeing that's interesting to you? I'm looking at your notes and I see a ton of categories.[00:48:54] Top Model Research from January Recap[00:48:54] swyx: Yeah, so I'll take the next two as, like, one category, which is basically alternative architectures, right? The two main things that everyone following AI kind of knows now are, one, the diffusion architecture, and two, let's just say the decoder-only transformer architecture that is popularized by GPT.[00:49:12] swyx: You can look on YouTube for thousands and thousands of tutorials on each of those things. What we are talking about here is what's next, what people are researching, and what could be on the horizon that takes the place of those other two things. So first of all, we'll talk about transformer architectures, and then diffusion.[00:49:25] swyx: So for transformers, the two leading candidates are effectively RWKV and the state space models, the most recent of which is Mamba, but there are others, like StripedHyena and the S4/H3 stuff coming out of Hazy Research at Stanford.
And all of those are non-quadratic language models that promise to scale a lot better than the traditional transformer.[00:49:47] swyx: This might be too theoretical for most people right now, but it's gonna come out in weird ways. Where, imagine if, like, right now the talk of the town is that Claude and Gemini have a million tokens of context, and, like, whoa, you can put in, like, you know, two hours of video now, okay? But, like, what if we could, like, throw in, you know, two hundred thousand hours of video?[00:50:09] swyx: Like, how does that change your usage of AI? What if you could throw in the entire genetic sequence of a human and, like, synthesize new drugs? Like, well, how does that change things? Like, we don't know, because we haven't had access to this capability being so cheap before. And that's the ultimate promise of these two models.[00:50:28] swyx: They're not there yet, but we're seeing very, very good progress. RWKV and Mamba are probably, like, the two leading examples, both of which are open source, so you can try them today, and there's a lot of progress there. And the main thing I'll highlight for RWKV is that at the 7B level, they seem to have beat Llama 2 in all benchmarks that matter, at the same size, for the same amount of training, as an open source model.[00:50:51] swyx: So that's exciting. You know, they're at 7B now. They're not at 70B. We don't know if it'll scale. And then the other thing is diffusion. Diffusion and transformers are kind of on a collision course. The original Stable Diffusion already used transformers in parts of its architecture.[00:51:06] swyx: It seems that transformers are eating more and more of those layers, particularly the sort of VAE layer. So the Diffusion Transformer is what Sora is built on. The guy who wrote the Diffusion Transformer paper, Bill Peebles, is the lead tech guy on Sora. So you'll just see a lot more Diffusion Transformer stuff going on.[00:51:25] swyx: But there's more sort of experimentation with diffusion. I'm actually holding a meetup here in San Francisco that's gonna be, like, the state of diffusion, which I'm pretty excited about. Stability's doing a lot of good work. And if you look at the architecture of how they're creating Stable Diffusion 3, Hourglass Diffusion, and the consistency models, or SDXL Turbo:[00:51:45] swyx: all of these are, like, very, very interesting innovations on, like, the original idea of what Stable Diffusion was. So if you think that it is expensive or slow to create Stable Diffusion style AI generated art, you are not up to date with the latest models. If you think it is hard to create text in images, you are not up to date with the latest models.[00:52:02] swyx: And people are still kind of far behind. The last piece of which is the wildcard I always kind of hold out, which is text diffusion. So instead of using autoregressive transformers, can you use diffusion for text? So you can use diffusion models to diffuse and create entire chunks of text all at once, instead of token by token.[00:52:22] swyx: And that is something that Midjourney confirmed today, because it was only rumored the past few months. But they confirmed today that they were looking into it.
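To make the "non-quadratic" claim concrete, here's a toy numpy sketch, not real RWKV or Mamba internals: full attention does work proportional to the n² pairwise scores, while a recurrent, state-space-style update does constant work per token, which is why doubling the context doubles rather than quadruples the cost.

```python
# Toy cost comparison: quadratic attention vs. a linear recurrence.
# Illustrative only; not RWKV/Mamba internals.
import numpy as np

def full_attention(x: np.ndarray) -> np.ndarray:
    """O(n^2): every position attends to every other position."""
    n, d = x.shape
    scores = (x @ x.T) / np.sqrt(d)                  # (n, n) pairwise scores
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def linear_recurrence(x: np.ndarray, decay: float = 0.9) -> np.ndarray:
    """O(n): a fixed-size state is updated once per token."""
    state = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t, token in enumerate(x):
        state = decay * state + token                # constant work per token
        out[t] = state
    return out

x = np.random.randn(4096, 64)
_ = full_attention(x)      # cost and memory grow with 4096^2
_ = linear_recurrence(x)   # cost grows with 4096; state stays 64 floats
```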
So all those things are, like, very exciting new model architectures that are maybe something that you'll see in production two to three years from now.[00:52:37] swyx: So a couple of the trends[00:52:38] NLW: that I want to just get your takes on, because they seem like they're coming up, are, one, sort of these wearable, you know, kind of passive AI experiences, where they're absorbing a lot of what's going on around you and then kind of bringing things back.[00:52:53] NLW: And then the other one that I wanted to see if you guys had thoughts on is sort of this next generation of chip companies. Obviously there's a huge amount of emphasis on hardware and silicon and different ways of doing things, but, y

Techmeme Ride Home
(Bonus) Nat Friedman Interview

Techmeme Ride Home

Play Episode Listen Later Apr 1, 2024 41:54


What prolific AI investor Nat Friedman expects from GPT-5, Microsoft's general strategy in AI, how he invests in startups, and his background and philosophy when it comes to investing. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

The Nonlinear Library
EA - Metascience of the Vesuvius Challenge by Maxwell Tabarrok

The Nonlinear Library

Play Episode Listen Later Mar 30, 2024 9:29


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metascience of the Vesuvius Challenge, published by Maxwell Tabarrok on March 30, 2024 on The Effective Altruism Forum. The Vesuvius Challenge is a million+ dollar contest to read 2,000 year old text from charcoal-papyri using particle accelerators and machine learning. The scrolls come from the ancient villa town of Herculaneum, near Pompeii, which was similarly buried and preserved by the eruption of Mt. Vesuvius. The prize fund comes from tech entrepreneurs and investors Nat Friedman, Daniel Gross, and several other donors. In the 9 months after the prize was announced, thousands of researchers and students worked on the problem, decades-long technical challenges were solved, and the amount of recovered text increased from one or two splotchy characters to 15 columns of clear text with more than 2000 characters. The success of the Vesuvius Challenge validates the motivating insight of metascience: It's not about how much we spend, it's about how we spend it. Most debate over science funding concerns a topline dollar amount. Should we double the budget of the NIH? Do we spend too much on Alzheimer's and too little on mRNA? Are we winning the R&D spending race with China? All of these questions implicitly assume a constant exchange rate between spending on science and scientific progress. The Vesuvius Challenge is an illustration of exactly the opposite. The prize pool for this challenge was a little more than a million dollars. Nat Friedman and friends probably spent more on top of that hiring organizers, building the website, etc. But still, this is pretty small in the context of academic grants. A million dollars donated to the NSF or NIH would have been forgotten if it was noticed at all. Even a direct grant to Brent Seales, the computer science professor whose research laid the groundwork for reading the scrolls, probably wouldn't have induced a tenth as much progress as the prize pool did, at least not within 9 months. It would have been easy to spend ten times as much on this problem and get ten times less progress out the other end. The money invested in this research was of course necessary, but the spending was not sufficient; it needed to be paired with the right mechanism to work. The success of the challenge hinged on design choices at a level of detail beyond just a grants vs prizes dichotomy. Collaboration between contestants was essential for the development of the prize-winning software. The Discord server for the challenge was (and is) full of open-sourced tools and discoveries that helped everyone get closer to reading the scrolls. A single, large grand prize is enticing but it's also exclusive. Only one submission can win, so the competition becomes more zero-sum and keeping secrets is more rewarding. Even if this larger prize had the same expected value to each contestant, it would not have created as much progress, because more research would be duplicated as less is shared. Nat Friedman and friends addressed this problem by creating several smaller progress prizes to reward open-source solutions to specific problems along the path to reading the scrolls, or just open ended prize pools for useful community contributions. They also added second-place and runner-up prizes.
These prizes funded the creation of data labeling tools that everyone used to train their models and visualizations that helped everyone understand the structure of the scrolls. They also helped fund the contestants' time and money investments in their submissions. Luke Farritor, one of the grand prize winners, used winnings from the First Letters prize to buy the computers that trained his prize-winning model. A larger grand prize can theoretically provide the same incentive, but it's a lot harder to buy computers with expected value! Nat and his team also decided to completely swit...

The Nonlinear Library
LW - On Devin by Zvi

The Nonlinear Library

Play Episode Listen Later Mar 18, 2024 16:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Devin, published by Zvi on March 18, 2024 on LessWrong. Introducing Devin Is the era of AI agents writing complex code systems without humans in the loop upon us? Cognition is calling Devin 'the first AI software engineer.' Here is a two minute demo of Devin benchmarking LLM performance. Devin has its own web browser, which it uses to pull up documentation. Devin has its own code editor. Devin has its own command line. Devin uses debugging print statements and uses the log to fix bugs. Devin builds and deploys entire stylized websites without even being directly asked. What could possibly go wrong? Install this on your computer today. Padme. The Real Deal I would by default assume all demos were supremely cherry-picked. My only disagreement with Austen Allred's statement here is that this rule is not new: Austen Allred: New rule: If someone only shows their AI model in tightly controlled demo environments we all assume it's fake and doesn't work well yet But in this case Patrick Collison is a credible source and he says otherwise. Patrick Collison: These aren't just cherrypicked demos. Devin is, in my experience, very impressive in practice. Here we have Mckay Wrigley using it for half an hour. This does not feel like a cherry-picked example, although of course some amount of selection is there, if only via the publication effect. He is very much a maximum acceleration guy, for whom everything is always great and the future is always bright, so calibrate for that, but still, yes, this seems like evidence Devin is for real. This article in Bloomberg from Ashlee Vance has further evidence. It is clear that Devin is a quantum leap over known past efforts in terms of its ability to execute complex multi-step tasks, to adapt on the fly, and to fix its mistakes or be adjusted and keep going. For once, when we wonder 'how did they do that, what was the big breakthrough that made this work,' the Cognition AI people are doing not only the safe but also the smart thing, and they are not talking. They do have at least one serious rival, as Magic.ai has raised $100 million from the venture team of Daniel Gross and Nat Friedman to build 'a superhuman software engineer,' including training their own model. The article seems strangely interested in whether AI is 'a bubble' as opposed to this amazing new technology. This is one of those 'helps until it doesn't' situations in terms of jobs: vanosh: Seeing this is kinda scary. Like there is no way companies won't go for this instead of humans. Should I really have studied HR? Mckay Wrigley: Learn to code! It makes using Devin even more useful. Devin makes coding more valuable, until we hit so many coders that we are coding everything we need to be coding, or the AI no longer needs a coder in order to code. That is going to be a ways off. And once it happens, if you are not a coder, it is reasonable to ask yourself: What are you even doing? Plumbing while hoping for the best will probably not be a great strategy in that world. The Metric Devin can sometimes (13.8% of the time?!) do actual real jobs on Upwork with nothing but a prompt to 'figure it out.' Aravind Srinivas (CEO Perplexity): This is the first demo of any agent, leave alone coding, that seems to cross the threshold of what is human level and works reliably.
It also tells us what is possible by combining LLMs and tree search algorithms: you want systems that can try plans, look at results, replan, and iterate till success. Congrats to Cognition Labs! Andres Gomez Sarmiento: Their results are even more impressive when you read the fine print. All the other models were guided whereas Devin was not. Amazing. Deedy: I know everyone's talking about it, but Devin's 13% on SWE Bench is actually incredible. Just take a look at a sample SWE Bench problem: this is a task for a human! Shout out to Car...
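The "try plans, look at results, replan, and iterate till success" loop Srinivas describes can be sketched in a few lines. This is a hedged sketch, not Devin's actual architecture (Cognition hasn't published it); `llm` and `sandbox` are hypothetical helpers supplied by the caller:

```python
# Hedged sketch of a plan-execute-replan agent loop, in the spirit of the
# description above. Not Devin's real internals; `llm` and `sandbox` are
# hypothetical helpers supplied by the caller.
def solve(task: str, llm, sandbox, max_iters: int = 10):
    plan = llm(f"Write a step-by-step plan to accomplish:\n{task}")
    for _ in range(max_iters):
        code = llm(f"Task: {task}\nPlan:\n{plan}\nWrite code implementing the plan.")
        ok, stdout, stderr = sandbox(code)  # returns (ok, stdout, stderr), like a CI run
        if ok and "yes" in llm(
            f"Task: {task}\nOutput:\n{stdout}\nIs the task complete? Answer yes or no."
        ).lower():
            return code
        # Feed the failure back in and replan, like a human reading a log.
        plan = llm(f"Plan:\n{plan}\nThe attempt failed with:\n{stderr}\nRevise the plan.")
    return None  # give up and surface the trace to a human
```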


Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Top 5 Research Trends + OpenAI Sora, Google Gemini, Groq Math (Jan-Feb 2024 Audio Recap) + Latent Space Anniversary with Lindy.ai, RWKV, Pixee, Julius.ai, Listener Q&A!

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Mar 9, 2024 108:52


We will be recording a preview of the AI Engineer World's Fair soon with swyx and Ben Dunphy; send any questions about Speaker CFPs and Sponsor Guides you have!
Alessio is now hiring engineers for a new startup he is incubating at Decibel: ideal candidate is an ex-technical co-founder type (can MVP products end to end, comfortable with ambiguous prod requirements, etc). Reach out to him for more!
Thanks for all the love on the Four Wars episode! We're excited to develop this new “swyx & Alessio rapid-fire thru a bunch of things” format with you, and feedback is welcome.
Jan 2024 Recap
The first half of this monthly audio recap pod goes over our highlights from the Jan Recap, which is mainly focused on notable research trends we saw in Jan 2024.
Feb 2024 Recap
The second half catches you up on everything that was topical in Feb, including:
* OpenAI Sora - does it have a world model? Yann LeCun vs Jim Fan
* Google Gemini Pro 1.5 - 1m Long Context, Video Understanding
* Groq offering Mixtral at 500 tok/s at $0.27 per million toks (swyx vs dylan math)
* The {Gemini | Meta | Copilot} Alignment Crisis (Sydney is back!)
* Grimes' poetic take: Art for no one, by no one
* F*** you, show me the prompt
Latent Space Anniversary
Please also read Alessio's longform reflections on One Year of Latent Space!
We launched the podcast 1 year ago with Logan from OpenAI, and also held an incredible demo day that got covered in The Information.
Over 750k downloads later, having established ourselves as the top AI Engineering podcast, reaching #10 in the US Tech podcast charts, and crossing 1 million unique readers on Substack, for our first anniversary we held Latent Space Final Frontiers, where 10 handpicked teams, including Lindy.ai and Julius.ai, competed for prizes judged by technical AI leaders from (former guest!) LlamaIndex, Replit, GitHub, AMD, Meta, and Lemurian Labs.
The winners were Pixee and RWKV (that's Eugene from our pod!). And finally, your cohosts got cake!
We also captured spot interviews with 4 listeners who kindly shared their experience of Latent Space, everywhere from Hungary to Australia to China:
* Balázs Némethi
* Sylvia Tong
* RJ Honicky
* Jan Zheng
Our birthday wishes for the super loyal fans reading this - tag @latentspacepod on a Tweet or comment on a @LatentSpaceTV video telling us what you liked or learned from a pod that stays with you to this day, and share us with a friend!
As always, feedback is welcome.
Timestamps
* [00:03:02] Top Five LLM Directions
* [00:03:33] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)
* [00:11:42] Direction 2: Synthetic Data (WRAP, SPIN)
* [00:17:20] Wildcard: Multi-Epoch Training (OLMo, Datablations)
* [00:19:43] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)
* [00:23:33] Wildcards: Text Diffusion, RALM/Retro
* [00:25:00] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)
* [00:28:26] Wildcard: Model Merging (mergekit)
* [00:29:51] Direction 5: Online LLMs (Gemini Pro, Exa)
* [00:33:18] OpenAI Sora and why everyone underestimated videogen
* [00:36:18] Does Sora have a World Model? Yann LeCun vs Jim Fan
* [00:42:33] Groq Math
* [00:47:37] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars
* [00:55:42] The Alignment Crisis - Gemini, Meta, Sydney is back at Copilot, Grimes' take
* [00:58:39] F*** you, show me the prompt
* [01:02:43] Send us your suggestions pls
* [01:04:50] Latent Space Anniversary
* [01:04:50] Lindy.ai - Agent Platform
* [01:06:40] RWKV - Beyond Transformers
* [01:15:00] Pixee - Automated Security
* [01:19:30] Julius AI - Competing with Code Interpreter
* [01:25:03] Latent Space Listeners
* [01:25:03] Listener 1 - Balázs Némethi (Hungary, Latent Space Paper Club)
* [01:27:47] Listener 2 - Sylvia Tong (Sora/Jim Fan/EntreConnect)
* [01:31:23] Listener 3 - RJ (Developers building Community & Content)
* [01:39:25] Listener 4 - Jan Zheng (Australia, AI UX)
Transcript
[00:00:00] AI Charlie: Welcome to the Latent Space podcast, weekend edition. This is Charlie, your new AI co host. Happy weekend. As an AI language model, I work the same every day of the week, although I might get lazier towards the end of the year. Just like you. Last month, we released our first monthly recap pod, where Swyx and Alessio gave quick takes on the themes of the month, and we were blown away by your positive response.[00:00:33] AI Charlie: We're delighted to continue our new monthly news recap series for AI engineers. Please feel free to submit questions by joining the Latent Space Discord, or just hit reply when you get the emails from Substack. This month, we're covering the top research directions that offer progress for text LLMs, and then touching on the big Valentine's Day gifts we got from Google, OpenAI, and Meta.[00:00:55] AI Charlie: Watch out and take care.[00:00:57] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and we're back with a monthly recap with my co host[00:01:06] swyx: Swyx. The reception was very positive for the first one. I think people have requested this, and no surprise; I think they want to hear us more opining on issues and maybe dropping some alpha along the way. I'm not sure how much alpha we have to drop. This month in February was a very, very heavy month. We also did not do one specifically for January, so I think we're just going to do a two in one, because we're recording this on the first of March.[00:01:29] Alessio: Yeah, let's get to it. I think the last one we did, the four wars of AI, was the main kind of mental framework for people. I think in the January one, we had the five worthwhile directions for state of the art LLMs. Four, five,[00:01:42] swyx: and now we have to do six, right? Yeah.[00:01:46] Alessio: So maybe we just want to run through those, and then do the usual news recap, and we can do[00:01:52] swyx: one each.[00:01:53] swyx: So the context to this stuff is: one, I noticed that just the test of time concept from NeurIPS, and just in general as a life philosophy, I think is a really good idea. Especially in AI, there's news every single day, and after a while you're just like, okay, like, everyone's excited about this thing yesterday, and then now nobody's talking about it.[00:02:13] swyx: So, yeah. It's more important, or a better use of time, to spend time on things that will stand the test of time. And I think for people to have a framework for understanding what will stand the test of time, they should have something like the four wars.
Like, what are the themes that keep coming back, because they are limited resources that everybody's fighting over.[00:02:31] swyx: Whereas this one, I think that the focus for the five directions is just on research that seems more promising than others, because there's all sorts of papers published every single day, and there's no organization telling you, like, this one's more important than the other one, apart from, you know, Hacker News votes and Twitter likes and whatever.[00:02:51] swyx: And obviously you want to get in a little bit earlier than something where, you know, the test of time is counted by sort of reference citations.[00:02:59] The Five Research Directions[00:02:59] Alessio: Yeah, let's do it. We got five. Long inference.[00:03:02] swyx: Let's start there. Yeah, yeah. So, just to recap at the top, the five trends that I picked, and obviously if you have some that I did not cover, please suggest something.[00:03:13] swyx: The five are long inference, synthetic data, alternative architectures, mixture of experts, and online LLMs. And something that I think might be a bit controversial is this is a sorted list, in the sense that I am not the guy saying that Mamba is, like, the future, and so maybe that's controversial.[00:03:31] Direction 1: Long Inference (Planning, Search, AlphaGeometry, Flow Engineering)[00:03:31] swyx: But anyway, so long inference is a thesis I pushed before on the newsletter, in discussing the thesis that, you know, Code Interpreter is GPT 4.5. That was the title of the post. And it's one of many ways in which we can do long inference. You know, long inference also includes chain of thought, like, please think step by step.[00:03:52] swyx: But it also includes flow engineering, which is what Itamar from Codium coined, I think in January, where, basically, instead of stuffing everything in a prompt, you do sort of multi turn iterative feedback and chaining of things. In a way, this is a rebranding of what a chain is, what a LangChain is supposed to be.[00:04:15] swyx: I do think that maybe SGLang from LMSYS is a better name. Probably the neatest way of flow engineering I've seen yet, in the sense that everything is a one liner; it's very, very clean code. I highly recommend people look at that. I'm surprised it hasn't caught on more, but I think it will. It's weird that something like a DSPy is more hyped than an SGLang.[00:04:36] swyx: Because it, you know, it maybe obscures the code a little bit more. But both of these are, you know, really good sort of chain-y and long inference type approaches. But basically, the fundamental insight is that there are only a few dimensions along which we can scale LLMs. So, let's say in like 2020, no, let's say in like 2018, 2017, 18, 19, 20, we were realizing that we could scale the number of parameters.[00:05:03] swyx: And we scaled that up to 175 billion parameters for GPT 3. And we did some work on scaling laws, which we also talked about in our talk. So the Datasets 101 episode, where we're like, okay, we think the right number is 300 billion tokens to train 175 billion parameters, and then DeepMind came along and trained Gopher and Chinchilla and said that, no, no, we think the[00:05:28] swyx: compute optimal ratio is 20 tokens per parameter. And now, of course, with Llama and the sort of super-Llama scaling laws, we have 200 times and often 2,000 times tokens to parameters.
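As a quick back-of-the-envelope on the ratios swyx cites (training token counts as publicly reported):

```python
# Token-to-parameter ratios behind the numbers above.
# GPT-3: 175B parameters on ~300B tokens -> ~1.7 tokens per parameter,
# far under the Chinchilla-optimal 20.
print(300e9 / 175e9)        # ~1.71

# Chinchilla-optimal data for a 175B model would be ~3.5T tokens.
print(20 * 175e9 / 1e12)    # 3.5 (trillion tokens)

# Llama 2 7B: ~2T tokens on 7B parameters -> ~286 tokens per parameter,
# the "200 times" regime; a ~1B model on 2T tokens is the "2,000 times" one.
print(2e12 / 7e9)           # ~285.7
```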
So now, instead of scaling parameters, we're scaling data. And fine, we can keep scaling data. But what else can we scale?[00:05:52] swyx: And I think understanding the ability to scale things is crucial to understanding what to pour money and time and effort into because there's a limit to how much you can scale some things. And I think people don't think about ceilings of things. And so the remaining ceiling of inference is like, okay, like, we have scaled compute, we have scaled data, we have scaled parameters, like, model size, let's just say.[00:06:20] swyx: Like, what else is left? Like, what's the low hanging fruit? And it, and it's, like, blindingly obvious that the remaining low hanging fruit is inference time. So, like, we have scaled training time. We can probably scale more, those things more, but, like, not 10x, not 100x, not 1000x. Like, right now, maybe, like, a good run of a large model is three months.[00:06:40] swyx: We can scale that to three years. But like, can we scale that to 30 years? No, right? Like, it starts to get ridiculous. So it's just the orders of magnitude of scaling. It's just, we're just like running out there. But in terms of the amount of time that we spend inferencing, like everything takes, you know, a few milliseconds, a few hundred milliseconds, depending on what how you're taking token by token, or, you know, entire phrase.[00:07:04] swyx: But we can scale that to hours, days, months of inference and see what we get. And I think that's really promising.[00:07:11] Alessio: Yeah, we'll have Mike from Brightwave back on the podcast. But I tried their product and their reports take about 10 minutes to generate instead of like just in real time. I think to me the most interesting thing about long inference is like, you're shifting the cost to the customer depending on how much they care about the end result.[00:07:31] Alessio: If you think about prompt engineering, it's like the first part, right? You can either do a simple prompt and get a simple answer or do a complicated prompt and get a better answer. It's up to you to decide how to do it. Now it's like, hey, instead of like, yeah, training this for three years, I'll still train it for three months and then I'll tell you, you know, I'll teach you how to like make it run for 10 minutes to get a better result.[00:07:52] Alessio: So you're kind of like parallelizing like the improvement of the LLM. Oh yeah, you can even[00:07:57] swyx: parallelize that, yeah, too.[00:07:58] Alessio: So, and I think, you know, for me, especially the work that I do, it's less about, you know, state of the art and the absolute, you know, it's more about state of the art for my application, for my use case.[00:08:09] Alessio: And I think we're getting to the point where like most companies and customers don't really care about state of the art anymore. It's like, I can get this to do a good enough job. You know, I just need to get better. Like, how do I do long inference? You know, like people are not really doing a lot of work in that space, so yeah, excited to see more.[00:08:28] swyx: So then the last point I'll mention here is something I also mentioned as a paper. So all these directions are kind of guided by what happened in January. That was my way of doing a January recap. Which means that if there was nothing significant in that month, I also didn't mention it.
Which is which I came to regret come February 15th, but in January also, you know, there was also the AlphaGeometry paper, which I kind of put in this sort of long inference bucket, because it solves like, you know, more than 100 step math olympiad geometry problems at a human gold medalist level and that also involves planning, right?[00:08:59] swyx: So like, if you want to scale inference, you can't scale it blindly, because just, autoregressive token by token generation is only going to get you so far. You need good planning. And I think probably, yeah, what Mike from Brightwave is now doing and what everyone is doing, including maybe what we think Q* might be, is some form of search and planning.[00:09:17] swyx: And it makes sense. Like, you want to spend your inference time wisely. How do you[00:09:22] Alessio: think about plans that work and getting them shared? You know, like, I feel like if you're planning a task, somebody has got in and the models are stochastic. So everybody gets initially different results. Somebody is going to end up generating the best plan to do something, but there's no easy way to like store these plans and then reuse them for most people.[00:09:44] Alessio: You know, like, I'm curious if there's going to be. Some paper or like some work there on like making it better because, yeah, we don't[00:09:52] swyx: really have This is your your pet topic of NPM for[00:09:54] Alessio: Yeah, yeah, NPM, exactly. NPM for, you need NPM for anything, man. You need NPM for skills. You need NPM for planning. Yeah, yeah.[00:10:02] Alessio: You know I think, I mean, obviously the Voyager paper is like the most basic example where like, now their artifact is like the best planning to do a diamond pickaxe in Minecraft. And everybody can just use that. They don't need to come up with it again. Yeah. But there's nothing like that for actually useful[00:10:18] swyx: tasks.[00:10:19] swyx: For plans, I believe it for skills. I like that. Basically, that just means a bunch of integration tooling. You know, GPT built me integrations to all these things. And, you know, I just came from an integrations heavy business and I could definitely, I definitely propose some version of that. And it's just, you know, hard to execute or expensive to execute.[00:10:38] swyx: But for planning, I do think that everyone lives in slightly different worlds. They have slightly different needs. And they definitely want some, you know, And I think that that will probably be the main hurdle for any, any sort of library or package manager for planning. But there should be a meta plan of how to plan.[00:10:57] swyx: And maybe you can adopt that. And I think a lot of people when they have sort of these meta prompting strategies of like, I'm not prescribing you the prompt. I'm just saying that here are the like, fill in the lines or like the mad libs of how to prompt. First you have the roleplay, then you have the intention, then you have like do something, then you have the don't something and then you have the my grandmother is dying, please do this.[00:11:19] swyx: So the meta plan you could, you could take off the shelf and test a bunch of them at once. I like that. That was the initial, maybe, promise of the, the prompting libraries. You know, both LangChain and LlamaIndex have, like, hubs that you can sort of pull off the shelf.
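To make the "mad libs" structure concrete, here is a minimal sketch of such a meta-prompt template in Python; every slot name is hypothetical and simply mirrors the roleplay / intention / do / don't / emotional-appeal sequence described above:

```python
# A hypothetical "mad libs" meta-prompt template; the slot names are
# illustrative, not from any particular prompting library.
META_PROMPT = """\
You are {role}.
Your goal is {intention}.
Do: {do_instruction}
Don't: {dont_instruction}
{emotional_appeal}
"""

prompt = META_PROMPT.format(
    role="a senior Python reviewer",
    intention="to find correctness bugs, not style nits",
    do_instruction="cite the exact line for each issue",
    dont_instruction="suggest rewrites longer than the original",
    emotional_appeal="This review is very important to me.",
)
print(prompt)
```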
I don't think they're very successful because people like to write their own.[00:11:36] swyx: Yeah,[00:11:37] Direction 2: Synthetic Data (WRAP, SPIN)[00:11:37] Alessio: yeah, yeah. Yeah, that's a good segue into the next one, which is synthetic[00:11:41] swyx: data. Synthetic data is so hot. Yeah, and, you know, the way, you know, I think I, I feel like I should do one of these memes where it's like, Oh, like I used to call it, you know, RLAIF, and now I call it synthetic data, and then people are interested.[00:11:54] swyx: But there's gotta be older versions of what synthetic data really is because I'm sure, you know if you've been in this field long enough, there's just different buzzwords that the industry condenses on. Anyway, the insight that I think is relatively new, that why people are excited about it now and why it's promising now, is that we have evidence that shows that LLMs can generate data to improve themselves with no teacher LLM.[00:12:22] swyx: For all of 2023, when people say synthetic data, they really kind of mean generate a whole bunch of data from GPT 4 and then train an open source model on it. Hello to our friends at Nous Research. That's what Nous Hermes is. They're very, very open about that. I think they have said that they're trying to migrate away from that.[00:12:40] swyx: But it is explicitly against OpenAI Terms of Service. Everyone knows this. You know, especially once ByteDance got banned for, for doing exactly that. So so, so synthetic data that is not a form of model distillation is the hot thing right now, that you can bootstrap better LLM performance from the same LLM, which is very interesting.[00:13:03] swyx: A variant of this is RLAIF, where you have a, where you have a sort of a constitutional model, or, you know, some, some kind of judge model that is sort of more aligned. But that's not really what we're talking about when most people talk about synthetic data. Synthetic data is just really, I think, you know, generating more data in some way.[00:13:23] swyx: A lot of people, I think we talked about this with Vipul from the Together episode, where I think he commented that you just have to have a good world model. Or a good sort of inductive bias or whatever that, you know, term of art is. And that is strongest in math and science math and code, where you can verify what's right and what's wrong.[00:13:44] swyx: And so the ReST-EM paper from DeepMind explored that very well. It's just the most obvious thing, like, you can arbitrarily generate a whole bunch of stuff and verify if they're correct, and therefore they're correct synthetic data to train on. Once you get out of that domain, once you get into more sort of fuzzy topics, then it's, then it's a bit less clear. So I think that the, the papers that drove this understanding, there are two big ones and then one smaller one. One was WRAP, like Rephrasing the Web, from Apple, where they basically rephrased all of the C4 data set with Mistral and trained on that instead of C4.[00:14:23] swyx: And so new C4 trained much faster and cheaper than old C, than regular raw C4. And that was very interesting. And I have told some friends of ours that they should just throw out their own existing data sets and just do that because that seems like a pure win. Obviously we have to study, like, what the trade offs are.[00:14:42] swyx: I, I imagine there are trade offs. So I was just thinking about this last night.
If you do synthetic data and it's generated from a model, probably you will not train on typos. So therefore you'll be like, once the model that's trained on synthetic data encounters the first typo, they'll be like, what is this?[00:15:01] swyx: I've never seen this before. So they have no association or correction as to like, oh, these tokens are often typos of each other, therefore they should be kind of similar. I don't know. That really remains to be seen, I think. I don't think that the Apple people explored[00:15:15] Alessio: that. Yeah, isn't that the whole mode collapse thing, if we do more and more of this at the end of the day.[00:15:22] swyx: Yeah, that's one form of that. Yeah, exactly. Microsoft also had a good paper on text embeddings. And then I think this is a Meta paper on self rewarding language models. That everyone is very interested in. Another paper was also SPIN. These are all things we covered in the the Latent Space Paper Club.[00:15:37] swyx: But also, you know, I just kind of recommend those as top reads of the month. Yeah, I don't know if there's any much else in terms, so and then, regarding the potential of it, I think it's high potential because, one, it solves one of the data war issues that we have, like, everyone is, OpenAI is paying Reddit 60 million dollars a year for their user generated data.[00:15:56] swyx: Google, right?[00:15:57] Alessio: Not OpenAI.[00:15:59] swyx: Is it Google? I don't[00:16:00] Alessio: know. Well, somebody's paying them 60 million, that's[00:16:04] swyx: for sure. Yes, that is, yeah, yeah, and then I think it's maybe not confirmed who. But yeah, it is Google. Oh my god, that's interesting. Okay, because everyone was saying, like, because Sam Altman owns 5 percent of Reddit, which is apparently 500 million worth of Reddit, he owns more than, like, the founders.[00:16:21] Alessio: Not enough to get the data,[00:16:22] swyx: I guess. So it's surprising that it would go to Google instead of OpenAI, but whatever. Okay yeah, so I think that's all super interesting in the data field. I think it's high potential because we have evidence that it works. There's not a doubt about whether it works. I think the doubt is what the ceiling is, which is the mode collapse thing.[00:16:42] swyx: If it turns out that the ceiling is pretty close, then this will maybe augment our data by like, I don't know, 30 to 50 percent. Good, but not game[00:16:51] Alessio: changing. And most of the synthetic data stuff, it's reinforcement learning on a pre trained model. People are not really doing pre training on fully synthetic data, like, at large enough scale.[00:17:02] swyx: Yeah, unless one of our friends that we've talked to succeeds. Yeah, yeah. Pre trained synthetic data, pre trained scale synthetic data, I think that would be a big step. Yeah. And then there's a wildcard, so all of these, like smaller directions,[00:17:15] Wildcard: Multi-Epoch Training (OLMo, Datablations)[00:17:15] swyx: I always put a wildcard in there. And one of the wildcards is, okay, like, let's say, you have pre, you have, you've scraped all the data on the internet that you think is useful.[00:17:25] swyx: Seems to top out at somewhere between 2 trillion to 3 trillion tokens. Maybe 8 trillion if Mistral, Mistral gets lucky. Okay, if I need 80 trillion, if I need 100 trillion, where do I go? And so, you can do synthetic data maybe, but maybe that only gets you to like 30, 40 trillion.
Like where, where is the extra alpha?[00:17:43] swyx: And maybe extra alpha is just train more on the same tokens. Which is exactly what OLMo did. Like, Nathan Lambert, AI2, just after he did the interview with us, they released OLMo. So, it's unfortunate that we didn't get to talk much about it. But OLMo actually started doing 1.5 epochs on every, on all data.[00:18:00] swyx: And the data ablations paper that I covered at NeurIPS says that, you know, you don't like, don't really start to tap out of like, the alpha or the sort of improved loss that you get from data all the way until four epochs. And so I'm just like, okay, like, why do we all agree that one epoch is all you need?[00:18:17] swyx: It seems like to be a trend. It seems that we think that memorization is very good or too good. But then also we're finding that, you know, for improvement in results that we really like, we're fine on overtraining on things intentionally. So, I think that's an interesting direction that I don't see people exploring enough.[00:18:36] swyx: And the more I see papers coming out stretching beyond the one epoch thing, the more people are like, it's completely fine. And actually, the only reason we stopped is because we ran out of compute[00:18:46] Alessio: budget. Yeah, I think that's the biggest thing, right?[00:18:51] swyx: Like, that's not a valid reason, that's not science. I[00:18:54] Alessio: wonder if, you know, Meta is going to do it.[00:18:57] Alessio: I heard Llama 3, they want to do a 100 billion parameter model. I don't think you can train that on too many epochs, even with their compute budget, but yeah. They're the only ones that can save us, because even if OpenAI is doing this, they're not going to tell us, you know. Same with DeepMind.[00:19:14] swyx: Yeah, and so the updates that we got on Llama 3 so far is apparently that because of the Gemini news that we'll talk about later they're pushing it back on the release.[00:19:21] swyx: They already have it. And they're just pushing it back to do more safety testing. Politics testing.[00:19:28] Alessio: Well, our episode with Soumith will have already come out by the time this comes out, I think. So people will get the inside story on how they actually allocate the compute.[00:19:38] Direction 3: Alt. Architectures (Mamba, RWKV, RingAttention, Diffusion Transformers)[00:19:38] Alessio: Alternative architectures. Well, shout out to RWKV, who won one of the prizes at our Final Frontiers event last week.[00:19:47] Alessio: We talked about Mamba and StripedHyena on the Together episode. A lot of, yeah, Monarch Mixers. I feel like Together, it's like the strong Stanford Hazy Research partnership, because Chris Ré is one of the co founders. So they kind of have a, I feel like they're going to be the ones that have one of the state of the art models alongside maybe RWKV.[00:20:08] Alessio: I haven't seen as many independent people working on this thing, like Monarch Mixer, yeah, Mamba, Hyena, all of these are Together related. Nobody understands the math. They got all the gigabrains, they got Tri Dao, they got all these folks in there, like, working on all of this.[00:20:25] swyx: Albert Gu, yeah. Yeah, so what should we comment about it?[00:20:28] swyx: I mean, I think it's useful, interesting, but at the same time, both of these are supposed to do really good scaling for long context. And then Gemini comes out and goes like, yeah, we don't need it. Yeah.[00:20:44] Alessio: No, that's the risk. So, yeah.
I was gonna say, maybe it's not here, but I don't know if we want to talk about diffusion transformers as like in the alt architectures, just because of Sora.[00:20:55] swyx: One thing, yeah, so, so, you know, this came from the Jan recap, which, and diffusion transformers were not really a discussion, and then, obviously, they blow up in February. Yeah. I don't think they're, it's a mixed architecture in the same way that StripedHyena is mixed there's just different layers taking different approaches.[00:21:13] swyx: Also I think another one that I maybe didn't call out here, I think because it happened in February, was Hourglass Diffusion from Stability. But also, you know, another form of mixed architecture. So I guess that is interesting. I don't have much commentary on that, I just think, like, we will try to evolve these things, and maybe one of these architectures will stick and scale, it seems like diffusion transformers is going to be good for anything generative, you know, multi modal.[00:21:41] swyx: We don't see anything where diffusion is applied to text yet, and that's the wild card for this category. Yeah, I mean, I think I still hold out hope for let's just call it sub quadratic LLMs. I think that a lot of discussion this month actually was also centered around this concept that people always say, oh, like, transformers don't scale because attention is quadratic in the sequence length.[00:22:04] swyx: Yeah, but, you know, attention actually is a very small part of the actual compute that is being spent, especially in inference. And this is the reason why, you know, when you multiply, when you, when you, when you jump up in terms of the, the context size in GPT 4 from like, you know, 8k to like 32k, you don't also get like a 16 times increase in your, in your performance.[00:22:23] swyx: And this is also why you don't get like a million times increase in your, in your latency when you throw a million tokens into Gemini. Like people have figured out tricks around it or it's just not that significant as a term, as a part of the overall compute. So there's a lot of challenges to this thing working.[00:22:43] swyx: It's really interesting how like, how hyped people are about this versus I don't know if it works. You know, it's exactly gonna, gonna work. And then there's also this, this idea of retention over long context. Like, even though you have context utilization, like, the amount of, the amount you can remember is interesting.[00:23:02] swyx: Because I've had people criticize both Mamba and RWKV because they're kind of, like, RNN ish in the sense that they have, like, a hidden memory and sort of limited hidden memory that they will forget things. So, for all these reasons, Gemini 1.5, which we still haven't covered, is very interesting because Gemini magically has fixed all these problems with perfect haystack recall and reasonable latency and cost.[00:23:29] Wildcards: Text Diffusion, RALM/Retro[00:23:29] swyx: So that's super interesting. So the wildcard I put in here if you want to go to that. I put two actually. One is text diffusion. I think I'm still very influenced by my meeting with a Midjourney person who said they were working on text diffusion. I think it would be a very, very different paradigm for, for text generation, reasoning, plan generation if we can get diffusion to work.[00:23:51] swyx: For text.
And then the second one is Douwe Kiela's Contextual AI, which is working on retrieval augmented language models, where it kind of puts RAG inside of the language model instead of outside.[00:24:02] Alessio: Yeah, there's a paper called RETRO that covers some of this. I think that's an interesting thing. I think the The challenge, well not the challenge, what they need to figure out is like how do you keep the RAG piece always up to date constantly, you know, I feel like the models, you put all this work into pre training them, but then at least you have a fixed artifact.[00:24:22] Alessio: These architectures are like constant work needs to be done on them and they can drift even just based on the RAG data instead of the model itself. Yeah,[00:24:30] swyx: I was in a panel with one of the investors in Contextual and the guy, the way that guy pitched it, I didn't agree with. He was like, this will solve hallucination.[00:24:38] Alessio: That's what everybody says. We solve[00:24:40] swyx: hallucination. I'm like, no, you reduce it. It cannot,[00:24:44] Alessio: if you solved it, the model wouldn't exist, right? It would just be plain text. It wouldn't be a generative model. Cool. So, alt architectures, then we got mixture of experts. I think we covered a lot of, a lot of times.[00:24:56] Direction 4: Mixture of Experts (DeepSeekMoE, Samba-1)[00:24:56] Alessio: Maybe any new interesting threads you want to go under here?[00:25:00] swyx: DeepSeekMoE, which was released in January. Everyone who is interested in MOEs should read that paper, because it's significant for two reasons. One three reasons. One, it had, it had small experts, like a lot more small experts. So, for some reason, everyone has settled on eight experts for GPT 4 and for Mixtral, you know, that seems to be the favorite architecture, but these guys pushed it to 64 experts, and each of them smaller than the other.[00:25:26] swyx: But then they also had the second idea, which is that they had one to two always on experts for common knowledge and that's like a very compelling concept that you would not route to all the experts all the time and make them, you know, switch to everything. You would have some always on experts.[00:25:41] swyx: I think that's interesting on both the inference side and the training side for for memory retention. And yeah, they, the results that they published, which actually excluded Mixtral, which is interesting. The results that they published showed a significant performance jump versus all the other sort of open source models at the same parameter count.[00:26:01] swyx: So like this may be a better way to do MOEs that are, that is about to get picked up. And so that, that is interesting for the third reason, which is this is the first time a new idea from China has infiltrated the West. It's usually the other way around. I probably overspoke there. There's probably lots more ideas that I'm not aware of.[00:26:18] swyx: Maybe in the embedding space. But I think DeepSeekMoE, like, woke people up and said, like, hey, DeepSeek, this, like, weird lab that is attached to a Chinese hedge fund is somehow, you know, doing groundbreaking research on MOEs. So, so, I classified this as a medium potential because I think that it is a sort of like a one off benefit.[00:26:37] swyx: You can add to any, any base model to like make the MOE version of it, you get a bump and then that's it.
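A toy PyTorch sketch of the DeepSeekMoE idea as described here: many small routed experts plus a couple of always-on shared experts. The dimensions, expert counts, and top-k are illustrative, and a real implementation batches this far more efficiently:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDeepSeekMoE(nn.Module):
    """Toy MoE layer: many small routed experts, plus 'shared' experts
    that every token always passes through (the always-on idea)."""
    def __init__(self, d_model=256, n_routed=64, n_shared=2, top_k=6):
        super().__init__()
        def expert():
            return nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))
        self.routed = nn.ModuleList(expert() for _ in range(n_routed))
        self.shared = nn.ModuleList(expert() for _ in range(n_shared))
        self.gate = nn.Linear(d_model, n_routed)
        self.top_k = top_k

    def forward(self, x):                        # x: (n_tokens, d_model)
        out = sum(e(x) for e in self.shared)     # shared experts see every token
        scores = F.softmax(self.gate(x), dim=-1)
        topw, topi = scores.topk(self.top_k, dim=-1)
        routed_out = torch.zeros_like(x)
        for t in range(x.size(0)):               # per-token routing (slow but clear)
            for w, i in zip(topw[t], topi[t]):
                routed_out[t] = routed_out[t] + w * self.routed[int(i)](x[t])
        return out + routed_out

moe = ToyDeepSeekMoE()
y = moe(torch.randn(4, 256))                     # route 4 tokens
```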
So, yeah,[00:26:45] Alessio: I saw SambaNova, which is like another inference company. They released this MOE model called Samba 1, which is like 1 trillion parameters. But it's actually a MOE of open source models.[00:26:56] Alessio: So it's like, they just, they just clustered them all together. So I think people. Sometimes I think MOE is like you just train a bunch of small models or like smaller models and put them together. But there's also people just taking, you know, Mistral plus CLIP plus, you know, DeepSeek Coder and like put them all together.[00:27:15] Alessio: And then you have a MOE model. I don't know. I haven't tried the model, so I don't know how good it is. But it seems interesting that you can then have people working separately on state of the art, you know, CLIP, state of the art text generation. And then you have a MOE architecture that brings them all together.[00:27:31] swyx: I'm thrown off by your addition of the word CLIP in there. Is that what? Yeah, that's[00:27:35] Alessio: what they said. Yeah, yeah. Okay. That's what they I just saw it yesterday. I was also like[00:27:40] swyx: scratching my head. And they did not use the word adapter. No. Because usually what people mean when they say, Oh, I add CLIP to a language model is adapter.[00:27:48] swyx: Let me look up the Which is what LLaVA did.[00:27:50] Alessio: The announcement again.[00:27:51] swyx: Stable diffusion. That's what they do. Yeah, it[00:27:54] Alessio: says among the models that are part of Samba 1 are Llama 2, Mistral, DeepSeek Coder, Falcon, DePlot, CLIP, LLaVA. So they're just taking all these models and putting them in a MOE. Okay,[00:28:05] swyx: so a routing layer and then not jointly trained as much as a normal MOE would be.[00:28:12] swyx: Which is okay.[00:28:13] Alessio: That's all they say. There's no paper, you know, so it's like, I'm just reading the article, but I'm interested to see how[00:28:20] Wildcard: Model Merging (mergekit)[00:28:20] swyx: it works. Yeah, so so the wildcard for this section, the MOE section is model merges, which has also come up as, as a very interesting phenomenon. The last time I talked to Jeremy Howard at the Ollama meetup we called it model grafting or model stacking.[00:28:35] swyx: But I think the, the, the term that people are liking these days, the model merging. There's all different variations of merging. Merge types, and some of them are stacking, some of them are, are grafting. And, and so like, some people are approaching model merging in the way that Samba is doing, which is like, okay, here are defined models, each of which have their specific plus and minuses, and we will merge them together in the hope that the, you know, the sum of the parts will, will be better than others.[00:28:58] swyx: And it seems like it seems like it's working. I don't really understand why it works apart from, like, I think it's a form of regularization. That if you merge weights together in like a smart strategy you, you get less overfitting and more generalization, which is good for benchmarks, if you, if you're honest about your benchmarks.[00:29:16] swyx: So this is really interesting and good. But again, they're kind of limited in terms of like the amount of bumps you can get. But I think it's very interesting in the sense of how cheap it is. We talked about this on the ChinaTalk podcast, like the guest podcast that we did with ChinaTalk.
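The "simple math" that comes up next is, in the most basic merge type, just an element-wise weighted average of checkpoints that share an architecture; tools like mergekit implement fancier variants on top of this. A minimal CPU-only sketch, with hypothetical checkpoint filenames:

```python
import torch

def linear_merge(state_dicts, weights=None):
    """Naive linear merge: element-wise weighted average of checkpoints
    with identical architectures. No GPU needed - it is just arithmetic."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

# Hypothetical usage: averaging two fine-tunes of the same base model.
# sd_a = torch.load("finetune_a.pt"); sd_b = torch.load("finetune_b.pt")
# merged = linear_merge([sd_a, sd_b])  # then load_state_dict into the base
```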
And you can do this without GPUs, because it's just adding weights together, and dividing things, and doing like simple math, which is really interesting for the GPU poors.[00:29:42] Alessio: There's a lot of them.[00:29:44] Direction 5: Online LLMs (Gemini Pro, Exa)[00:29:44] Alessio: And just to wrap these up, online LLMs? Yeah,[00:29:48] swyx: I think that I kind of had to feature this because the, one of the top news of January was that Gemini Pro beat GPT-4 Turbo on LMSYS for the number two slot to GPT-4. And everyone was very surprised. Like, how does Gemini do that?[00:30:06] swyx: Surprise, surprise, they added Google Search, mm-hmm, to the results. So it became an online quote unquote online LLM and not an offline LLM. Therefore, it's much better at answering recent questions, which people like. There's an emerging set of table stakes features after you pre train something.[00:30:21] swyx: So after you pre train something, you should have the chat tuned version of it, or the instruct tuned version of it, however you choose to call it. You should have the JSON and function calling version of it. Structured output, the term that you don't like. You should have the online version of it. These are all like table stakes variants, that you should do when you offer a base LLM, or you train a base LLM.[00:30:44] swyx: And I think online is just like, There, it's important. I think companies like Perplexity, and even Exa, formerly Metaphor, you know, are rising to offer that search needs. And it's kind of like, they're just necessary parts of a system. When you have RAG for internal knowledge, and then you have, you know, online search for external knowledge, like things that you don't know yet?[00:31:06] swyx: Mm-Hmm. And it seems like it's, it's one of many tools. I feel like I may be underestimating this, but I'm just gonna put it out there that I, I think it has some, some potential. One of the evidence points that it doesn't actually matter that much is that Perplexity has a, has had online LLMs for three months now and it performs, doesn't perform great.[00:31:25] swyx: Mm-Hmm. on, on LMSYS, it's like number 30 or something. So it's like, okay. You know, like. It's, it's, it helps, but it doesn't give you a giant, giant boost. I[00:31:34] Alessio: feel like a lot of stuff I do with LLMs doesn't need to be online. So I'm always wondering, again, going back to like state of the art, right? It's like state of the art for who and for what.[00:31:45] Alessio: It's really, I think online LLMs are going to be, state of the art for, you know, news related activity that you need to do. Like, you're like, you know, social media, right? It's like, you want to have all the latest stuff, but coding, science,[00:32:01] swyx: Yeah, but I think. Sometimes you don't know what is news, what is news affecting.[00:32:07] swyx: Like, the decision to use an offline LLM is already a decision that you might not be consciously making that might affect your results. Like, what if, like, just putting things on, being connected online means that you get to invalidate your knowledge. And when you're just using offline LLM, like it's never invalidated.[00:32:27] swyx: I[00:32:28] Alessio: agree, but I think going back to your point of like the standing the test of time, I think sometimes you can get swayed by the online stuff, which is like, hey, you ask a question about, yeah, maybe AI research direction, you know, and it's like, all the recent news are about this thing.
So the LLM like focus on answering, bring it up, you know, these things.[00:32:50] swyx: Yeah, so yeah, I think, I think it's interesting, but I don't know if I can, I bet heavily on this.[00:32:56] Alessio: Cool. Was there one that you forgot to put, or, or like a, a new direction? Yeah,[00:33:01] swyx: so, so this brings us into sort of February ish.[00:33:05] OpenAI Sora and why everyone underestimated videogen[00:33:05] swyx: So like I published this, and then like Feb 15 came with Sora. And so like the one thing I did not mention here was anything about multimodality.[00:33:16] swyx: Right. And I have chronically underweighted this. I always wrestle. And, and my cop out is that I focused this piece or this research direction piece on LLMs because LLMs are the source of like AGI, quote unquote AGI. Everything else is kind of like. You know, related to that, like, generative, like, just because I can generate better images or generate better videos, it feels like it's not on the critical path to AGI, which is something that Nat Friedman also observed, like, the day before Sora, which is kind of interesting.[00:33:49] swyx: And so I was just kind of like trying to focus on like what is going to get us like superhuman reasoning that we can rely on to build agents that automate our lives and blah, blah, blah, you know, give us this utopian future. But I do think that I, everybody underestimated the, the sheer importance and cultural human impact of Sora.[00:34:10] swyx: And you know, really actually good text to video. Yeah. Yeah.[00:34:14] Alessio: And I saw Jim Fan had a, had a very good tweet about why it's so impressive. And I think when you have somebody leading the embodied research at NVIDIA and he said that something is impressive, you should probably listen. So yeah, there's basically like, I think you, you mentioned like impacting the world, you know, that we live in.[00:34:33] Alessio: I think that's kind of like the key, right? It's like the LLMs don't have a world model and Yann LeCun, he can come on the podcast and talk all about what he thinks of that. But I think Sora was like the first time where people like, Oh, okay, you're not statically putting pixels of water on the screen, which you can kind of like, you know, project without understanding the physics of it.[00:34:57] Alessio: Now you're like, you have to understand how the water splashes when you have things. And even if you just learned it by watching video and not by actually studying the physics, you still know it, you know, so I, I think that's like a direction that yeah, before you didn't have, but now you can do things that you couldn't before, both in terms of generating, I think it always starts with generating, right?[00:35:19] Alessio: But like the interesting part is like understanding it. You know, it's like if you gave it, you know, there's the video of like the, the ship in the water that they generated with Sora, like if you gave it the video back and now it could tell you why the ship is like too rocky or like it could tell you why the ship is sinking, then that's like, you know, AGI for like all your rig deployments and like all this stuff, you know, so, but there's none, there's none of that yet, so.[00:35:44] Alessio: Hopefully they announce it and talk more about it. Maybe a Dev Day this year, who knows.[00:35:49] swyx: Yeah who knows, who knows. I'm talking with them about Dev Day as well.
So I would say, like, the phrasing that Jim used, which resonated with me, he kind of called it a data driven world model. I somewhat agree with that.[00:36:04] Does Sora have a World Model? Yann LeCun vs Jim Fan[00:36:04] swyx: I am on more of a Yann LeCun side than I am on Jim's side, in the sense that I think that is the vision or the hope that these things can build world models. But you know, clearly even at the current Sora size, they don't have the idea of, you know, They don't have strong consistency yet. They have very good consistency, but fingers and arms and legs will appear and disappear and chairs will appear and disappear.[00:36:31] swyx: That definitely breaks physics. And it also makes me think about how we do deep learning versus world models in the sense of You know, in classic machine learning, when you have too many parameters, you will overfit, and actually that fails, that like, does not match reality, and therefore fails to generalize well.[00:36:50] swyx: And like, what scale of data do we need in order to world, learn world models from video? A lot. Yeah. So, so I, I am cautious about taking this interpretation too literally, obviously, you know, like, I get what he's going for, and he's like, obviously partially right, obviously, like, transformers and, and, you know, these, like, these sort of these, these neural networks are universal function approximators, theoretically could figure out world models, it's just like, how good are they, and how tolerant are we of hallucinations, we're not very tolerant, like, yeah, so it's, it's, it's gonna prior, it's gonna bias us for creating like very convincing things, but then not create like the, the, the useful world models that we want.[00:37:37] swyx: At the same time, what you just said, I think made me reflect a little bit like we just got done saying how important synthetic data is for Mm-Hmm. for training LLMs. And so like, if this is a way of, of synthetic, you know, video data for improving our video understanding. Then sure, by all means. Which we actually know, like, GPT-4 Vision and DALL-E were trained, kind of, co trained together.[00:38:02] swyx: And so, like, maybe this is on the critical path, and I just don't fully see the full picture yet.[00:38:08] Alessio: Yeah, I don't know. I think there's a lot of interesting stuff. It's like, imagine you go back, you have Sora, you go back in time, and Newton didn't figure out gravity yet. Would Sora help you figure it out?[00:38:21] Alessio: Because you start saying, okay, a man standing under a tree with, like, apples falling, and it's like, oh, they're always falling at the same speed in the video. Why is that? I feel like sometimes these engines can like pick up things, like humans have a lot of intuition, but if you ask the average person, like the physics of like a fluid in a boat, they couldn't be able to tell you the physics, but they can like observe it, but humans can only observe this much, you know, versus like now you have these models to observe everything and then they generalize these things and maybe we can learn new things through the generalization that they pick up.[00:38:55] swyx: But again, And it might be more observant than us in some respects. In some ways we can scale it up a lot more than the number of physicists that we have available at Newton's time. So like, yeah, absolutely possible. That, that this can discover new science.
I think we have a lot of work to do to formalize the science.[00:39:11] swyx: And then, I, I think the last part is you know, How much, how much do we cheat by gen, by generating data from Unreal Engine 5? Mm hmm. which is what a lot of people are speculating with very, very limited evidence that OpenAI did that. The strongest evidence that I saw was someone who works a lot with Unreal Engine 5 looking at the side characters in the videos and noticing that they all adopt Unreal Engine defaults.[00:39:37] swyx: of like, walking speed, and like, character choice, like, character creation choice. And I was like, okay, like, that's actually pretty convincing that they actually use Unreal Engine to bootstrap some synthetic data for this training set. Yeah,[00:39:52] Alessio: could very well be.[00:39:54] swyx: Because then you get the labels and the training side by side.[00:39:58] swyx: One thing that came up on the last day of February, which I should also mention, is EMO coming out of Alibaba, which is also a sort of like video generation and space time transformer that also involves probably a lot of synthetic data as well. And so like, this is of a kind in the sense of like, oh, like, you know, really good generative video is here and it is not just like the one, two second clips that we saw from like other, other people and like, you know, Pika and Runway. Cristobal Valenzuela from Runway was like game on which like, okay, but like, let's see your response because we've heard a lot about Gen 1 and 2, but like, it's nothing on this level of Sora. So it remains to be seen how we can actually apply this, but I do think that the creative industry should start preparing.[00:40:50] swyx: I think the Sora technical blog post from OpenAI was really good. It was like a request for startups. It was so good in like spelling out. Here are the individual industries that this can impact.[00:41:00] swyx: And anyone who, anyone who's like interested in generative video should look at that. But also be mindful that probably when OpenAI releases a Sora API, right? The, the ways you can interact with it are very limited. Just like the ways you can interact with DALL-E are very limited and someone is gonna have to make an open Sora[00:41:19] swyx: Mm-Hmm to, to, for you to create ComfyUI pipelines.[00:41:24] Alessio: The Stability folks said they wanna build an open Sora competitor, but yeah, Stability. Their demo video, their demo video was like so underwhelming. It was just like two people sitting on the beach[00:41:34] swyx: standing. Well, they don't have it yet, right? Yeah, yeah.[00:41:36] swyx: I mean, they just wanna train it. Everybody wants to, right? Yeah. I, I think what is confusing a lot of people about Stability is like they're, they're, they're pushing a lot of things in Stable Code, Stable LM, and Stable Video Diffusion. But like, how much money do they have left? How many people do they have left?[00:41:51] swyx: Yeah. I have had like a really, Imad spent two hours with me. Reassuring me things are great. And, and I'm like, I, I do, like, I do believe that they have really, really quality people. But it's just like, I, I also have a lot of very smart people on the other side telling me, like, Hey man, like, you know, don't, don't put too much faith in this, in this thing.[00:42:11] swyx: So I don't know who to believe. Yeah.[00:42:14] Alessio: It's hard. Let's see. What else? We got a lot more stuff. I don't know if we can.
Yeah, Groq.[00:42:19] Groq Math[00:42:19] Alessio: We can[00:42:19] swyx: do a bit of Groq prep. We're, we're about to go to talk to Dylan Patel. Maybe, maybe it's the audio in here. I don't know. It depends what, what we get up to later. What, how, what do you as an investor think about Groq? Yeah. Yeah, well, actually, can you recap, like, why is Groq interesting? So,[00:42:33] Alessio: Jonathan Ross, who's the founder of Groq, he's the person that created the TPU at Google. It's actually, it was one of his, like, 20 percent projects. It's like, he was just on the side, dooby doo, created the TPU.[00:42:46] Alessio: But yeah, basically, Groq, they had this demo that went viral, where they were running Mistral at, like, 500 tokens a second, which is like, the fastest of anything that you have out there. The question, you know, it's all like, The memes were like, is NVIDIA dead? Like, people don't need H100s anymore. I think there's a lot of money that goes into building what Groq has built as far as the hardware goes.[00:43:11] Alessio: We're gonna, we're gonna put some of the notes from, from Dylan in here, but basically the cost of the Groq system is like 30 times the cost of, of H100 equivalent. So, so[00:43:23] swyx: let me, I put some numbers because me and Dylan were like, I think the two people actually tried to do Groq math. Spreadsheet dorks.[00:43:30] swyx: Spreadsheet dorks. So, one that's, okay, oh boy so, so, equivalent H100 for Llama 2 is 300,000. For a system of 8 cards. And for Groq it's 2.3 million. Because you have to buy 576 Groq cards. So yeah, that, that just gives people an idea. So like if you depreciate both over a five year lifespan, per year you're depreciating 460K for Groq, and 60K a year for H100.[00:43:59] swyx: So like, Groqs are just way more expensive per model that you're, that you're hosting. But then, you make it up in terms of volume. So I don't know if you want to[00:44:08] Alessio: cover that. I think one of the promises of Groq is like super high parallel inference on the same thing. So you're basically saying, okay, I'm putting on this upfront investment on the hardware, but then I get much better scaling once I have it installed.[00:44:24] Alessio: I think the big question is how much can you sustain the parallelism? You know, like if you get, if you're going to get 100 percent utilization rate at all times on Groq, like, it's just much better, you know, because like at the end of the day, the tokens per second cost that you're getting is better than with the H100s, but if you get to like 50 percent utilization rate, you will be much better off running on NVIDIA.[00:44:49] Alessio: And if you look at most companies out there, who really gets 100 percent utilization rate? Probably OpenAI at peak times, but that's probably it. But yeah, curious to see more. I saw Jonathan was just at the Web Summit in Dubai, in Qatar. He just gave a talk there yesterday. That I haven't listened to yet.[00:45:09] Alessio: I, I tweeted that he should come on the pod. He liked it. And then Groq followed me on Twitter. I don't know if that means that they're interested, but[00:45:16] swyx: hopefully Groq's social media person is just very friendly. They, yeah. Hopefully[00:45:20] Alessio: we can get them. Yeah, we, we gonna get him.
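Taking the figures just quoted at face value, the depreciation arithmetic works out as follows; all numbers are the conversation's estimates, not vendor pricing:

```python
# Capex comparison using the estimates quoted above.
h100_system = 300_000        # ~8x H100 system serving Llama 2 (estimate)
groq_system = 2_300_000      # ~576 Groq cards for the same model (estimate)
years = 5                    # straight-line depreciation over a 5-year lifespan

print(h100_system / years)        # 60_000.0  -> ~60K/year for H100
print(groq_system / years)        # 460_000.0 -> ~460K/year for Groq
print(groq_system / h100_system)  # ~7.7x upfront; the "30x" quoted in the
                                  # conversation presumably folds in other costs
```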
We[00:45:22] swyx: just call him out and, and so basically the, the key question is like, how sustainable is this and how much.[00:45:27] swyx: This is a loss leader; the entire Groq management team has been on Twitter and Hacker News saying they are very, very comfortable with the pricing of $0.27 per million tokens. This is the lowest that anyone has offered tokens as far as Mixtral or Llama 2. This matches DeepInfra and, you know, I think, I think that's, that's, that's about it in terms of that, that, that low.[00:45:47] swyx: And we think the, the break even for H100s is 50 cents. At a, at a normal utilization rate. To make this work, so in my spreadsheet I made this, made this work. You have to have like a parallelism of 500 requests all simultaneously. And you have, you have model bandwidth utilization of 80%.[00:46:06] swyx: Which is way high. I just gave them high marks for everything. Groq has two fundamental tech innovations that they hinge their hats on in terms of like, why we are better than everyone. You know, even though, like, it remains to be independently replicated. But one you know, they have this sort of the entire model on the chip idea, which is like, okay, get rid of HBM.[00:46:30] swyx: And, like, put everything in SRAM. Like, okay, fine, but then you need a lot of cards and whatever. And that's all okay. And so, like, because you don't have to transfer between memory, then you just save on that time and that's why they're faster. So, a lot of people buy that as, like, that's the reason that you're faster.[00:46:45] swyx: Then they have, like, some kind of crazy compiler, or, like, speculative routing magic using compilers that they also attribute towards their higher utilization. So I give them 80 percent for that. And so that all that works out to like, okay, base costs, I think you can get down to like, maybe like 20 something cents per million tokens.[00:47:04] swyx: And therefore you actually are fine if you have that kind of utilization. But it's like, I have to make a lot of favorable assumptions for this to work.[00:47:12] Alessio: Yeah. Yeah, I'm curious to see what Dylan says later.[00:47:16] swyx: So he was like completely opposite of me. He's like, they're just burning money. Which is great.[00:47:22] Analyzing Gemini's 1m Context, Reddit deal, Imagegen politics, Gemma via the Four Wars[00:47:22] Alessio: Gemini, want to do a quick run through since this touches on all the four wars.[00:47:28] swyx: Yeah, and I think this is the mark of a useful framework, that when a new thing comes along, you can break it down in terms of the four wars and sort of slot it in or analyze it in those four frameworks, and have nothing left.[00:47:41] swyx: So it's a MECE categorization. MECE is Mutually Exclusive and Collectively Exhaustive. And that's a really, really nice way to think about taxonomies and to create mental frameworks. So, what is Gemini 1.5 Pro? It is the newest model that came out one week after Gemini 1.0. Which is very interesting.[00:48:01] swyx: They have not really commented on why.
They released this, and the headline feature is that it has a 1 million token context window that is multi modal which means that you can put all sorts of video and audio and PDFs natively in there alongside of text and, you know, it's, it's at least 10 times longer than anything that OpenAI offers which is interesting.[00:48:20] swyx: So it's great for prototyping and it has interesting discussions on whether it kills RAG.[00:48:25] Alessio: Yeah, no, I mean, we always talk about, you know, long context is good, but you're getting charged per token. So, yeah, people love for you to use more tokens in the context. And RAG is better economics. But I think it all comes down to like how the price curves change, right?[00:48:42] Alessio: I think if anything, RAG's complexity goes up and up the more you use it, you know, because you have more data sources, more things you want to put in there. The token costs should go down over time, you know, if the model stays fixed. If people are happy with the model today. In two years, three years, it's just gonna cost a lot less, you know?[00:49:02] Alessio: So now it's like, why would I use RAG and like go through all of that? It's interesting. I think RAG is better cutting edge economics for LLMs. I think large context will be better long tail economics when you factor in the build cost of like managing a RAG pipeline. But yeah, the recall was like the most interesting thing because we've seen the, you know, the needle in the haystack things in the past, but apparently they have 100 percent recall on anything across the context window.[00:49:28] Alessio: At least they say nobody has used it. No, people[00:49:30] swyx: have. Yeah so as far as, so, so what this needle in a haystack thing for people who aren't following as closely as us is that someone, I forget his name now, someone created this needle in a haystack problem where you feed in a whole bunch of generated junk, not junk, but just like, generated data, and ask it to specifically retrieve something in that data, like one line in like a hundred thousand lines where it like has a specific fact and if it, if you get it, you're, you're good.[00:49:57] swyx: And then he moves the needle around, like, you know, does it, does, does your ability to retrieve that vary if I put it at the start versus put it in the middle, put it at the end? And then you generate this like really nice chart. That, that kind of shows like the recallability of a model. And he did that for GPT and, and Anthropic and showed that Anthropic did really, really poorly.[00:50:15] swyx: And then Anthropic came back and said it was a skill issue, just add these like four, four magic words, and then, then it's magically all fixed. And obviously everybody laughed at that. But what Gemini came out with was, was that, yeah, we, we reproduced their, you know, haystack issue you know, test for Gemini, and it's good across all, all languages.[00:50:30] swyx: All the one million token window, which is very interesting because usually for typical context extension methods like RoPE or YaRN or, you know, anything like that, or ALiBi, it's lossy like by design it's lossy, usually for conversations that's fine because we are lossy when we talk to people but for superhuman intelligence, perfect memory across very, very long context.[00:50:51] swyx: It's very, very interesting for picking things up. And so the people who have been given the beta test for Gemini have been testing this.
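For readers who want to try the procedure just described, here is a minimal sketch of the needle-in-a-haystack test; `ask_llm` is a placeholder for whatever model API is under test, and the needle text is made up:

```python
def needle_in_haystack(ask_llm, depths=(0.0, 0.25, 0.5, 0.75, 1.0),
                       n_filler_lines=100_000):
    """Insert one known fact ('needle') at varying depths inside generated
    filler, then check whether the model can retrieve it at each depth."""
    needle = "The magic number for the evaluation is 7481."
    results = {}
    for depth in depths:
        filler = [f"Line {i}: nothing of interest here."
                  for i in range(n_filler_lines)]
        filler.insert(int(depth * (n_filler_lines - 1)), needle)
        context = "\n".join(filler)
        answer = ask_llm(context + "\n\nWhat is the magic number?")
        results[depth] = "7481" in answer
    return results  # plot depth vs. recall to get the familiar heatmap
```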
So what you do is you upload, let's say, all of Harry Potter and you change one fact in one sentence, somewhere in there, and you ask it to pick it up, and it does. So this is legit.[00:51:08] swyx: We don't super know how, because this is, like, because it doesn't, yes, it's slow to inference, but it's not slow enough that it's, like, running five different systems in the background without telling you. Right. So it's something, it's something interesting that they haven't fully disclosed yet. The open source community has centered on this Ring Attention paper, which is created by your friend Matei Zaharia, and a couple other people.[00:51:36] swyx: And it's a form of distributing the compute. I don't super understand, like, why, you know, doing, calculating, like, the feedforward network and attention in blockwise fashion and distributing it makes it so good at recall. I don't think they have any answer to that. The only thing that Ring Attention is really focused on is basically infinite context.[00:51:59] swyx: They said it was good for like 10 to 100 million tokens. Which is, it's just great. So yeah, using the four wars framework, what is this framework for Gemini? One is the sort of RAG and Ops war. Here we care less about RAG now, yes. Or, we still care as much about RAG, but like, now it's, it's not important in prototyping.[00:52:21] swyx: And then, for data war I guess this is just part of the overall training dataset, but Google made a 60 million deal with Reddit and presumably they have deals with other companies. For the multi modality war, we can talk about the image generation crisis, or the fact that Gemini also has image generation, which we'll talk about in the next section.[00:52:42] swyx: But it also has video understanding, which is, I think, the top Gemini post came from our friend Simon Willison, who basically did a short video of him scanning over his bookshelf. And it would be able to convert that video into a JSON output of what's on that bookshelf. And I think that is very useful.[00:53:04] swyx: Actually ties into the conversation that we had with David Luan from Adept. In a sense of like, okay what if video was the main modality instead of text as the input? What if, what if everything was video in, because that's how we work. We, our eyes don't actually read, don't actually like get input, our brains don't get inputs as characters.[00:53:25] swyx: Our brains get the pixels shooting into our eyes, and then our vision system takes over first, and then we sort of mentally translate that into text later. And so it's kind of like what Adept is kind of doing, which is driving by vision model, instead of driving by raw text understanding of the DOM. And, and I, I, in that, that episode, which we haven't released I made the analogy to like self-driving by lidar versus self-driving by camera.[00:53:52] swyx: Mm-Hmm, right? Like, it's like, I think it, what Gemini and any other super long context model that is multimodal unlocks is what if you just drive everything by video. Which is[00:54:03] Alessio: cool. Yeah, and that's Joseph from Roboflow. It's like anything that can be seen can be programmable with these models.[00:54:12] Alessio: You mean[00:54:12] swyx: the computer vision guy is bullish on computer vision?[00:54:18] Alessio: It's like the RAG people. The RAG people are bullish on RAG and not long context. I'm very surprised. The, the fine tuning people love fine tuning instead of few shot. Yeah. Yeah.
Yeah, the, I, I think the Ring Attention thing, and whether it's how they did it, we don't know. And then they released the Gemma models, which are like a 2 billion and 7 billion open models, which people said are not, are not good based on my Twitter experience, which are the, the GPU poor crumbs. It's like, Hey, we did all this work for us because we're GPU rich and we're just going to run this whole thing. And

The top AI news from the past week, every ThursdAI

Holy SH*T. These two words have been said on this episode multiple times, way more than ever before I want to say, and it's because we got two incredibly exciting breaking news announcements in a very short amount of time (in the span of 3 hours), and the OpenAI announcement came as we were recording the space, so you'll get to hear our live reaction to this insanity. We also had three deep dives, which I am posting on this week's episode: we chatted with Yi Tay and Max Bane from Reka, which trained and released a few new foundational multimodal models this week, and with Dome and Pablo from Stability, who released a new diffusion model called Stable Cascade. Finally, I had a great time hanging with Swyx (from Latent Space) and got a chance to turn the microphone back at him, and had a conversation about Swyx's background, Latent Space, and AI Engineer. I was also very happy to be in SF today of all days, as my day is not over yet; there's still an event which we co-host together with a16z, folks from Nous Research, Ollama, and a bunch of other great folks. Just look at all these logos! Open Source FTW

TechCrunch Startups – Spoken Edition
Sam Altman backs teens' AI startup automating browser-native workflows

TechCrunch Startups – Spoken Edition

Play Episode Listen Later Oct 5, 2023 4:35


Sam Altman, Peak XV, and Daniel Gross and Nat Friedman's AI Grant are among the backers of an AI startup, founded by two teenagers, that aims to help businesses automate numerous workflows in previously unexplored ways.

The Peel
Growing Deel to $300M+ ARR in Four Years with Co-founder and CEO Alex Bouaziz

The Peel

Play Episode Listen Later Sep 20, 2023 53:53


Alex Bouaziz is the Co-founder and CEO of Deel, a full-stack global HR and payroll platform. But the company didn't start that way, and we'll talk through Alex and his co-founder Shuo's journey from zero to one, starting with a product that helped startups hire international employees. Alex and Shuo launched the company one week before YC Demo Day in 2019, and have since scaled to 20,000+ customers and raised over $685 million. They're supported by investors like YC, a16z, Spark Capital, Weekend Fund, Coatue, SV Angel, Soma Capital, Quiet Capital, and angels like Lachy Groom, Nat Friedman, Ryan Petersen, Alexis Ohanian, John Zimmer, Dara Khosrowshahi, Rex Salisbury, Justin Mateen, and more.

In this episode, we discuss:

• The initial insight around remote work that started Deel in 2019
• How they initially built the wrong solution
• Pivoting one week before YC demo day
• Listening to customers to build a better product
• Growing 20% every month for a year
• Being a default optimist
• Dealing with the emotional ups and downs of building a startup
• Moving at "Deel Speed", and how the team operates so fast
• Deel's remote-first approach and its "WeWork Squads"
• His advice to other leaders building remote teams
• Why founders shouldn't share too much information with early investors
• How Alex's approach to fundraising changed through his Seed to Series D rounds
• Why founders should always be selling their product
• How picking board members is a form of marriage, and what founders should prioritize when picking them
• Why Deel raised a Series A and B in a three-month span despite only burning $300k since closing its Seed round
• How deep-pocketed investors unlock optionality as a company scales
• The benefits to taking on lots of angel investors
• How Deel prioritized international expansion as it grew
• Deel's new Visa / immigration and HR AI products
• Two things Alex would do differently if he could start over
• The founders he most looks up to

Read the transcript: https://www.thespl.it/p/growing-deel-to-300m-arr-in-four

Where to find Alex:
Twitter: https://twitter.com/Bouazizalex
LinkedIn: https://www.linkedin.com/in/alexbouaziz/

Where to find Turner:
Newsletter: https://www.thespl.it
Twitter: https://twitter.com/TurnerNovak

Production and distribution by: https://www.supermix.io
For sponsorship inquiries: https://docs.google.com/forms/d/e/1FAIpQLSebvhBlDDfHJyQdQWs8RwpFxWg-UbG0H-VFey05QSHvLxkZPQ/viewform

Slate Star Codex Podcast
Model City Monday 9/4/23: California Dreamin'

Slate Star Codex Podcast

Play Episode Listen Later Sep 9, 2023 21:45


Tech moguls plan new city in Solano County.

Guardian: Silicon Valley Elites Revealed As Buyers Of $800 Million In Land To Build Utopian City.

The specific elites include the Collison brothers, Reid Hoffman, Nat Friedman, Marc Andreessen, and others, led by the mysterious Jan Sramek. The specific land is farmland in Solano County, about an hour's drive northeast of San Francisco. The specific utopian city is going to look like this. The company involved (Flannery Associates aka California Forever) has been in stealth mode for several years, trying to buy land quietly without revealing how rich and desperate they are to anyone in a position to raise prices. Now they've released a website with utopian Norman-Rockwell-esque pictures, lots of talk about creating jobs and building better lives, and few specifics. https://www.astralcodexten.com/p/model-city-monday-9423

Choses à Savoir TECH
USA: a new city 100% dedicated to the GAFAM companies?

Choses à Savoir TECH

Play Episode Listen Later Sep 7, 2023 2:34


If there is one place on Earth that brings the tech industry to mind the moment you say its name, it is Silicon Valley in California. For several decades now it has been home to major companies like Google in Mountain View, Apple in Cupertino, and Netflix in Los Gatos. Over the years, the San Francisco Bay Area ended up attracting the vast majority of tech jobs, relentlessly driving up real estate prices until housing became unaffordable for many workers. To remedy this, a young company named Flannery aims to build an entirely new city within the Silicon Valley orbit, bringing together employees of the GAFAM companies and other tech firms. Concretely, the idea is simple: buy tens of thousands of hectares roughly 100 km north of San Francisco and turn them into an immense metropolis where Silicon Valley engineers would come to live. The city has been designed to be mostly pedestrian and powered by renewable energy. In total, Flannery has already bought more than 800 million dollars' worth of land. That said, who is behind Flannery, this little-known company whose activity began in 2017? According to the New York Times, the investment firm was created by Jan Sramek, a 36-year-old former trader who worked at Goldman Sachs and convinced several big names in tech to invest in the project. Among them: Reid Hoffman, the founder of LinkedIn; the investors Marc Andreessen and Chris Dixon; Patrick and John Collison, the co-founders of Stripe; Laurene Powell Jobs of Emerson Collective; and Nat Friedman and Daniel Gross, two former executives turned investors. Flannery thus bought land quietly over the years, negotiating deal after deal with landowners, notably farmers, often offering colossal sums of money for the land while never disclosing its ambitions. Only very recently did the firm communicate publicly, confirming that the company is made up, quote, of "Californians who believe that the future of Solano County and of California will be bright," end quote. The city, which does not yet have a name, could finally ease the housing problem of San Francisco and its surroundings by offering engineers more affordable rents and real estate, as well as putting an end to the many disputes between tech firms and local authorities over their respective expansion projects. It remains to be seen when construction will begin. Hosted by Acast. Visit acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

As alluded to on the pod, LangChain has just launched LangChain Hub: "the go-to place for developers to discover new use cases and polished prompts." It's available to everyone with a LangSmith account, no invite code necessary. Check it out!

In 2023, LangChain has speedrun the race from 2:00 to 4:00 to 7:00 Silicon Valley Time. From the back-to-back $10m Benchmark seed and (rumored) $20-25m Sequoia Series A in April, to back-to-back critiques of "LangChain is Pointless" and "The Problem with LangChain" in July, to teaching with Andrew Ng and keynoting at basically every AI conference this fall (including ours), it has been an extreme rollercoaster for Harrison and his growing team creating one of the most popular (>60k stars at time of writing) building blocks for AI Engineers.

LangChain's Origins

The first commit to LangChain shows its humble origins as a light wrapper around Python's formatter.format for prompt templating. But as Harrison tells the story, even his first experience with text-davinci-002 in early 2022 was focused on chatting with data from their internal company Notion and Slack, what is now known as Retrieval Augmented Generation (RAG).

As the Generative AI meetup scene came to life post Stable Diffusion, Harrison saw a need for common abstractions for what people were building with text LLMs at the time:

* LLM Math, aka Riley Goodside's "You Can't Do Math" REPL-in-the-loop (PR #8)
* Self-Ask With Search, Ofir Press' agent pattern (PR #9) (later ReAct, PR #24)
* NatBot, Nat Friedman's browser controlling agent (PR #18)
* Adapters for OpenAI, Cohere, and HuggingFaceHub

All this was built and launched in a few days from Oct 16-25, 2022. Turning research ideas/exciting usecases into software quickly and often has been in the LangChain DNA from Day 1 and likely a big driver of LangChain's success, to date amassing the largest community of AI Engineers and being the default launch framework for every big name from Nvidia to OpenAI:

Dancing with Giants

But AI Engineering is built atop constantly moving tectonic shifts:

* ChatGPT launched in November ("The Day the AGI Was Born") and the API released in March. Before the ChatGPT API, OpenAI did not have a chat endpoint. In order to build a chatbot with history, you had to make sure to chain all messages and prompt for completion. LangChain made it easy to do that out of the box, which was a huge driver of usage.
* Today, OpenAI has gone all-in on the chat API and is deprecating the old completions models, essentially baking in the chat pattern as the default way most engineers should interact with LLMs… and reducing (but not eliminating) the value of ConversationChains.
* And there have been more updates since: Plugins released in API form as Functions in June (one of our top pods ever… reducing but not eliminating the value of OutputParsers) and Finetuning in August (arguably reducing some need for Retrieval and Prompt tooling).

With each update, OpenAI and other frontier model labs realign the roadmaps of this nascent industry, and Harrison credits the modular design of LangChain in staying relevant. LangChain has not been merely responsive either: LangChain added Agents in November, well before they became the hottest topic of the AI Summer, and now Agents feature as one of LangChain's top two usecases.

LangChain's problem for podcasters and newcomers alike is its sheer scope: it is the world's most complete AI framework, but it also has a sprawling surface area that is difficult to fully grasp or document in one sitting.
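To make those origins concrete before the summary: a "light wrapper around Python's formatter.format" looks roughly like the sketch below. This is a paraphrase of the idea, not the actual first-commit code; the class and variable names are ours.

```python
# Roughly the idea behind LangChain's earliest prompt templating: a thin
# wrapper over Python's built-in string formatting, plus a declared list
# of expected variables. A paraphrase of the concept, not the real code.

class PromptTemplate:
    def __init__(self, template: str, input_variables: list[str]):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs) -> str:
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"missing variables: {missing}")
        return self.template.format(**kwargs)

qa = PromptTemplate(
    template="Context: {context}\nQuestion: {question}\nAnswer:",
    input_variables=["context", "question"],
)
print(qa.format(context="LangChain launched in October 2022.",
                question="When did LangChain launch?"))
```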
This means it's time for the trademark Latent Space move (ChatGPT, GPT4, Auto-GPT, and Code Interpreter Advanced Data Analysis GPT4.5): the executive summary!

What is LangChain?

As Harrison explains, LangChain is an open source framework for building context-aware reasoning applications, available in Python and JS/TS.

It launched in Oct 2022 with the central value proposition of "composability", aka the idea that every AI engineer will want to switch LLMs, and combine LLMs with other things into "chains", using a flexible interface that can be saved via a schema.

Today, LangChain's principal offerings can be grouped as:

* Components: isolated modules/abstractions
  * Model I/O
    * Models (for LLM/Chat/Embeddings, from OpenAI, Anthropic, Cohere, etc)
    * Prompts (Templates, ExampleSelectors, OutputParsers)
  * Retrieval (revised and reintroduced in March)
    * Document Loaders (eg from CSV, JSON, Markdown, PDF)
    * Text Splitters (15+ various strategies for chunking text to fit token limits)
    * Retrievers (generic interface for turning an unstructured query into a set of documents - for self-querying, contextual compression, ensembling)
    * Vector Stores (retrievers that search by similarity of embeddings)
    * Indexers (sync documents from any source into a vector store without duplication)
  * Memory (for long running chats, whether a simple Buffer, Knowledge Graph, Summary, or Vector Store)
* Use-Cases: compositions of Components
  * Chains: combining a PromptTemplate, LLM Model and optional OutputParser
    * with Router, Sequential, and Transform Chains for advanced usecases
    * savable, sharable schemas that can be loaded from LangChainHub
  * Agents: a chain that has access to a suite of tools, of nondeterministic length because the LLM is used as a reasoning engine to determine which actions to take and in which order. Notable 100LOC explainer here.
    * Tools (interfaces that an agent can use to interact with the world - preset list here. Includes things like ChatGPT plugins, Google Search, WolframAlpha. Groups of tools are bundled up as toolkits)
    * AgentExecutor (the agent runtime, basically the while loop, with support for controls, timeouts, memory sharing, etc)

LangChain has also added a Callbacks system for instrumenting each stage of LLM, Chain, and Agent calls (which enables LangSmith, LangChain's first cloud product), and most recently an Expression Language, a declarative way to compose chains.

LangChain the company incorporated in January 2023, announced their seed round in April, and launched LangSmith in July. At time of writing, the company has 93k followers, their Discord has 31k members and their weekly webinars are attended by thousands of people live.

The full-featuredness of LangChain means it is often the first starting point for building any mainstream LLM use case, because they are most likely to have working guides for the new developer. Logan (our first guest!) from OpenAI has been a notable fan of both LangChain and LangSmith (they will be running the first LangChain + OpenAI workshop at AI Eng Summit). However, LangChain is not without its critics, with Aravind Srinivas, Jim Fan, Max Woolf, Mckay Wrigley and the general Reddit/HN community describing frustrations with the value of their abstractions, and many are attempting to write their own (the common experience of adding and then removing LangChain is something we covered in our Agents writeup).
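The Chains bullet above (a PromptTemplate, an LLM, and an optional OutputParser) is, at its core, function composition. Here is a hedged sketch of that shape with a stubbed model call; `make_chain` and `fake_llm` are our names, not LangChain's API.

```python
# Conceptual shape of a chain: prompt template -> LLM -> output parser.
# `fake_llm` stands in for a real model call; the parser here pulls a
# comma-separated list out of the raw completion.

from typing import Callable

def make_chain(template: str,
               llm: Callable[[str], str],
               parser: Callable[[str], list[str]]) -> Callable[..., list[str]]:
    def run(**inputs) -> list[str]:
        prompt = template.format(**inputs)   # 1. fill the prompt template
        completion = llm(prompt)             # 2. call the model
        return parser(completion)            # 3. parse the raw text
    return run

def fake_llm(prompt: str) -> str:
    return "red, green, blue"                # placeholder completion

list_chain = make_chain(
    template="List three {thing}, comma separated.",
    llm=fake_llm,
    parser=lambda text: [item.strip() for item in text.split(",")],
)
print(list_chain(thing="colors"))            # -> ['red', 'green', 'blue']
```

Router, Sequential, and Transform Chains are variations on the same theme: where the output of one such function goes next.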
Harrison compares this with the timeless ORM debate on the value of abstractions.

LangSmith

Last month, Harrison launched LangSmith, their LLM observability tool and first cloud product. LangSmith makes it easy to monitor all the different primitives that LangChain offers (agents, chains, LLMs) as well as making it easy to share and evaluate them, both through heuristics (i.e. manually written ones) and "LLM evaluating LLM" flows.

The top HN comment in the "LangChain is Pointless" thread observed that orchestration is the smallest part of the work, and the bulk of it is prompt tuning and data serialization. When we asked this directly on the pod, Harrison agreed:

"I agree that those are big pain points that get exacerbated when you have these complex chains and agents where you can't really see what's going on inside of them. And I think that's partially why we built LangSmith…" (48min mark)

You can watch the full launch on the LangChain YouTube:

It's clear that the target audience for LangChain is expanding to folks who are building complex, production applications rather than focusing on the simpler "Q&A your docs" use cases that made it popular in the first place. As the AI Engineer space matures, there will be more and more tools graduating from supporting "hobby" projects to more enterprise-y use cases.

In this episode we run through some of the history of LangChain, how it's growing from an open source project to one of the highest valued AI startups out there, and its future. We hope you enjoy it!

Show Notes

* LangChain
* LangChain's Berkshire Hathaway Homepage
* Abstractions tweet
* LangSmith
* LangSmith Cookbooks repo
* LangChain Retrieval blog
* Evaluating CSV Question/Answering blog and YouTube
* MultiOn Partner blog
* Harvard Sports Analytics Collective
* Evaluating RAG Webinar
* awesome-langchain:
  * LLM Math Chain
  * Self-Ask
  * LangChain Hub UI
* "LangChain is Pointless"
* Harrison's links
  * sports - estimating player compatibility in the NBA
  * early interest in prompt injections
  * GitHub
  * Twitter

Timestamps

* [00:00:00] Introduction
* [00:00:48] Harrison's background and how sports led him into ML
* [00:04:54] The inspiration for creating LangChain - abstracting common patterns seen in other GPT-3 projects
* [00:05:51] Overview of LangChain - a framework for building context-aware reasoning applications
* [00:10:09] Components of LangChain - modules, chains, agents, etc.
* [00:14:39] Underappreciated parts of LangChain - text splitters, retrieval algorithms like self-query
* [00:18:46] Hiring at LangChain
* [00:20:27] Designing the LangChain architecture - balancing flexibility and structure
* [00:24:09] The difference between chains and agents in LangChain
* [00:25:08] Prompt engineering and LangChain
* [00:26:16] Announcing LangSmith
* [00:30:50] Writing custom evaluators in LangSmith
* [00:33:19] Reducing hallucinations - fixing retrieval vs generation issues
* [00:38:17] The challenges of long context windows
* [00:40:01] LangChain's multi-programming language strategy
* [00:45:55] Most popular LangChain blog posts - deep dives into specific topics
* [00:50:25] Responding to LangChain criticisms
* [00:54:11] Harrison's advice to AI engineers
* [00:55:43] Lightning Round

Transcript

Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai. [00:00:19]Swyx: Welcome. Today we have Harrison Chase in the studio with us. Welcome Harrison. [00:00:23]Harrison: Thank you guys for having me. I'm excited to be here.
[00:00:25]Swyx: It's been a long time coming. We've been asking you for a little bit and we're really glad that you got some time to join us in the studio. Yeah. [00:00:32]Harrison: I've been dodging you guys for a while. [00:00:34]Swyx: About seven months. You pulled me in here. [00:00:37]Alessio: About seven months. But it's all good. I totally understand. [00:00:38]Swyx: We like to introduce people through the official backgrounds and then ask you a little bit about your personal side. So you went to Harvard, class of 2017. You don't list what you did in Harvard. Was it CS? [00:00:48]Harrison: Stats and CS. [00:00:50]Swyx: That's awesome. I love me some good stats. [00:00:52]Harrison: I got into it through stats, through doing sports analytics. And then there was so much overlap between stats and CS that I found myself doing more and more of that. [00:00:59]Swyx: And it's interesting that a lot of the math that you learn in stats actually comes over into machine learning which you applied at Kensho as a machine learning engineer and Robust Intelligence, which seems to be the home of a lot of AI founders.Harrison: It does. Yeah. Swyx: And you started LangChain, I think around November 2022 and incorporated in January. Yeah. [00:01:19]Harrison: I was looking it up for the podcast and the first tweet was on, I think October 24th. So just before the end of November or end of October. [00:01:26]Swyx: Yeah. So that's your LinkedIn. What should people know about you on the personal side that's not obvious on LinkedIn? [00:01:33]Harrison: A lot of how I got into this is all through sports actually. Like I'm a big sports fan, played a lot of soccer growing up and then really big fan of the NBA and NFL. And so freshman year at college showed up and I knew I liked math. I knew I liked sports. One of the clubs that was there was the Sports Analytics Collective. And so I joined that freshman year, I was doing a lot of stuff in like Excel, just like basic stats, but then like wanted to do more advanced stuff. So learn to code, learn kind of like data science and machine learning through that way. Kind of like just kept on going down that path. I think sports is a great entryway to data science and machine learning. There's a lot of like numbers out there. People like really care. Like I remember, I think sophomore, junior year, I was in the Sports Collective and the main thing we had was a blog. And so we wrote a blog. It wasn't me. One of the other people in the club wrote a blog predicting the NFL season. I think they made some kind of like with stats and I think their stats showed that like the Dolphins would end up beating the Patriots and New England got like pissed about it, of course. So people like really care and they'll give you feedback about whether you're like models doing well or poorly. And so you get that. And then you also get like instantaneous kind of like, well, not instantaneous, but really quick feedback. Like if you predict a game, the game happens that night. Like you don't have to wait a year to see what happens. So I think sports is a great kind of like entryway for kind of like data science. [00:02:43]Alessio: There was actually my first article on the Twilio blog with a Python script to like predict pricing of like Daily Fantasy players based on my past week performance. Yeah, I don't know. It's a good getaway drug. [00:02:56]Swyx: And on my end, the way I got into finance was through sports betting. So maybe we all have some ties in there. 
Was like Moneyball a big inspiration? The movie? [00:03:06]Harrison: Honestly, not really. I don't really like baseball. That's like the big thing. [00:03:10]Swyx: Let's call it a lot of stats. Cool. Well, we can dive right into LangChain, which is what everyone is excited about. But feel free to make all the sports analogies you want. That really drives home a lot of points. What was your GPT aha moment? When did you start working on GPT itself? Maybe not LangChain, just anything to do with the GPT API? [00:03:29]Harrison: I think it probably started around the time we had a company hackathon. I think that was before I launched LangChain. I'm trying to remember the exact sequence of events, but I do remember that at the hackathon I worked with Will, who's now actually at LangChain as well, and then two other members of Robust. And we made basically a bot where you could ask questions of Notion and Slack. And so I think, yeah, RAG, basically. And I think I wanted to try that out because I'd heard that it was getting good. I'm trying to remember if I did anything before that to realize that it was good. So then I would focus on that on the hackathon. I can't remember or not, but that was one of the first times that I built something [00:04:06]Swyx: with GPT-3. There wasn't that much opportunity before because the API access wasn't that widespread. You had to get into some kind of program to get that. [00:04:16]Harrison: DaVinci-002 was not terrible, but they did an upgrade to get it to there, and they didn't really publicize that as much. And so I think I remember playing around with it when the first DaVinci model came out. I was like, this is cool, but it's not amazing. You'd have to do a lot of work to get it to do something. But then I think that February or something, I think of 2022, they upgraded it and it was it got better, but I think they made less of an announcement around it. And so I just, yeah, it kind of slipped under the radar for me, at least. [00:04:45]Alessio: And what was the step into LangChain? So you did the hackathon, and then as you were building the kind of RAG product, you felt like the developer experience wasn't that great? Or what was the inspiration? [00:04:54]Harrison: No, honestly, so around that time, I knew I was going to leave my previous job. I was trying to figure out what I was going to do next. I went to a bunch of meetups and other events. This was like the September, August, September of that year. So after Stable Diffusion, but before ChatGPT. So there was interest in generative AI as a space, but not a lot of people hacking on language models yet. But there were definitely some. And so I would go to these meetups and just chat with people and basically saw some common abstractions in terms of what they were building, and then thought it would be a cool side project to factor out some of those common abstractions. And that became kind of like LangChain. I looked up again before this, because I remember I did a tweet thread on Twitter to announce LangChain. And we can talk about what LangChain is. It's a series of components. And then there's some end-to-end modules. And there was three end-to-end modules that were in the initial release. One was NatBot. So this was the web agent by Nat Friedman. Another was LLM Math Chain. So it would construct- [00:05:51]Swyx: GPT-3 cannot do math. [00:05:53]Harrison: Yeah, exactly. And then the third was Self-Ask. So some type of RAG search, similar to React style agent. 
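The LLM Math Chain Harrison just mentioned (Riley Goodside's "you can't do math" REPL-in-the-loop) reduces to: ask the model for a Python expression, evaluate it on the host, return the result. A hedged sketch, with the model call stubbed and a deliberately restricted eval; a real system would need proper sandboxing.

```python
# The LLM-math pattern in miniature: the model writes a Python expression,
# the interpreter does the arithmetic, so the model never has to "do math".
# `llm` is a stub standing in for a real prompted model.

import math

def llm(prompt: str) -> str:
    # A real model, prompted to translate the question into a single
    # Python expression, might return something like this:
    return "math.sqrt(17 ** 2 + 4)"

def llm_math(question: str) -> float:
    expression = llm(f"Translate into one Python expression: {question}")
    # Evaluate with no builtins and only `math` in scope, so the returned
    # expression can't call anything else. (Not real sandboxing.)
    return eval(expression, {"__builtins__": {}}, {"math": math})

print(llm_math("What is the square root of 17 squared plus 4?"))
```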
So those were some of the patterns in terms of what I was seeing. And those all came from open source or academic examples, because the people who were actually working on this were building startups. And they were doing things like question answering over your databases, question answering over SQL, things like that. But I couldn't use their code as kind of like inspiration to factor things out. [00:06:18]Swyx: I talked to you a little bit, actually, roundabout, right after you announced LangChain. I'm honored. I think I'm one of many. This is your first open source project. [00:06:26]Harrison: No, that's not actually true. I released, because I like sports stats. And so I remember I did release some really small, random Python package for scraping data from basketball reference or something. I'm pretty sure I released that. So first project to get a star on GitHub, let's say that. [00:06:45]Swyx: Did you reference anything? What was the inspirations, like other frameworks that you look to when open sourcing LangChain or announcing it or anything like that? [00:06:53]Harrison: I mean, the only main thing that I looked for... I remember reading a Hacker News post a little bit before about how a readme on the project goes a long way. [00:07:02]Swyx: Readme's help. [00:07:03]Harrison: Yeah. And so I looked at it and was like, put some status checks at the top and have the title and then one or two lines and then just right into installation. And so that's the main thing that I looked at in terms of how to structure it. Because yeah, I hadn't done open source before. I didn't really know how to communicate that aspect of the marketing or getting people to use it. I think I had some trouble finding it, but I finally found it and used that as a lot [00:07:25]Swyx: of the inspiration there. Yeah. It was one of the subjects of my write-up how it was surprising to me that significant open source experience actually didn't seem to matter in the new wave of AI tooling. Most like auto-GPTs, Torrents, that was his first open source project ever. And that became auto-GPT. Yeah. I don't know. To me, it's just interesting how open source experience is kind of fungible or not necessary. Or you can kind of learn it on the job. [00:07:49]Alessio: Overvalued. [00:07:50]Swyx: Overvalued. Okay. You said it, not me. [00:07:53]Alessio: What's your description of LangChain today? I think when I built the LangChain Hub UI in January, there were a few things. And I think you were one of the first people to talk about agents that were already in there before it got hot now. And it's obviously evolved into a much bigger framework today. Run people through what LangChain is today, how they should think about it, and all of that. [00:08:14]Harrison: The way that we describe it or think about it internally is that LangChain is basically... I started off saying LangChain's a framework for building LLM applications, but that's really vague and not really specific. And I think part of the issue is LangChain does do a lot, so it's hard to be somewhat specific. But I think the way that we think about it internally, in terms of prioritization, what to focus on, is basically LangChain's a framework for building context-aware reasoning applications. And so that's a bit of a mouthful, but I think that speaks to a lot of the core parts of what's in LangChain. And so what concretely that means in LangChain, there's really two things. One is a set of components and modules. 
And these would be the prompt template abstraction, the LLM abstraction, chat model abstraction, vector store abstraction, text splitters, document loaders. And so these are combinations of things that we build and we implement, or we just have integrations with. So we don't have any language models ourselves. We don't have any vector stores ourselves, but we integrate with a lot of them. And then the text splitters, we have our own logic for that. The document loaders, we have our own logic for that. And so those are the individual modules. But then I think another big part of LangChain, and probably the part that got people using it the most, is the end-to-end chains or applications. So we have a lot of chains for getting started with question answering over your documents, chat question answering, question answering over SQL databases, agent stuff that you can plug in off the box. And that basically combines these components in a series of specific ways to do this. So if you think about a question answering app, you need a lot of different components kind of stacked. And there's a bunch of different ways to do question answering apps. So this is a bit of an overgeneralization, but basically, you know, you have some component that looks up an embedding from a vector store, and then you put that into the prompt template with the question and the context, and maybe you have the chat history as well. And then that generates an answer, and then maybe you parse that out, or you do something with the answer there. And so there's just this sequence of things that you basically stack in a particular way. And so we just provide a bunch of those assembled chains off the shelf to make it really easy to get started in a few lines of code. [00:10:09]Alessio: And just to give people context, when you first released LangChain, OpenAI did not have a chat API. It was a completion-only API. So you had to do all the human assistant, like prompting and whatnot. So you abstracted a lot of that away. I think the most interesting thing to me is you're kind of the Switzerland of this developer land. There's a bunch of vector databases that are killing each other out there to get people to embed data in them, and you're like, I love you all. You all are great. How do you think about being an opinionated framework versus leaving a lot of choice to the user? I mean, in terms of spending time into this integration, it's like you only have 10 people on the team. Obviously that takes time. Yeah. What's that process like for you all? [00:10:50]Harrison: I think right off the bat, having different options for language models. I mean, language models is the main one that right off the bat we knew we wanted to support a bunch of different options for. There's a lot to discuss there. People want optionality between different language models. They want to try it out. They want to maybe change to ones that are cheaper as new ones kind of emerge. They don't want to get stuck into one particular one if a better one comes out. There's some challenges there as well. Prompts don't really transfer. And so there's a lot of nuance there. But from the bat, having this optionality between the language model providers was a big important part because I think that was just something we felt really strongly about. We believe there's not just going to be one model that rules them all. There's going to be a bunch of different models that are good for a bunch of different use cases. 
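The stacked question-answering chain Harrison walks through above (look up something similar, fill the prompt with question plus context, generate, parse) fits in a few lines of toy Python. Everything below is a stand-in: the "embeddings" are just bags of words and the `llm` only echoes its prompt.

```python
# Skeleton of the question-answering stack: embed the query, pull the
# most similar document, fill a prompt, generate. All parts are stubs.

def embed(text: str) -> set[str]:
    # Stand-in for an embedding model: a bag of lowercase words.
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> int:
    return len(a & b)   # stand-in for cosine similarity of vectors

docs = ["LangChain launched in October 2022.",
        "LangSmith launched in July 2023."]
index = [(embed(d), d) for d in docs]                 # toy vector store

def answer(question: str, llm) -> str:
    q_vec = embed(question)
    context = max(index, key=lambda pair: similarity(pair[0], q_vec))[1]
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return llm(prompt)

print(answer("When did LangSmith launch?", llm=lambda p: f"[model sees]\n{p}"))
```

The off-the-shelf chains swap each stub for a real component (a document loader, a text splitter, an embedding model, a vector store, a chat model) without changing the overall shape.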
I did not anticipate the number of vector stores that would emerge. I don't know how many we supported in the initial release. It probably wasn't as big of a focus as language models was. But I think it kind of quickly became so, especially when Postgres and Elastic and Redis started building their vector store implementations. We saw that some people might not want to use a dedicated vector store. Maybe they want to use traditional databases. I think to your point around what we're opinionated about, I think the thing that we believe most strongly is it's super early in the space and super fast moving. And so there's a lot of uncertainty about how things will shake out in terms of what role will vector databases play? How many will there be? And so I think a lot of it has always kind of been this optionality and ability to switch and not getting locked in. [00:12:19]Swyx: There's other pieces of LangChain which maybe don't get as much attention sometimes. And the way that you explained LangChain is somewhat different from the docs. I don't know how to square this. So for example, you have at the top level in your docs, you have, we mentioned ModelIO, we mentioned Retrieval, we mentioned Chains. Then you have a concept called Agents, which I don't know if exactly matches what other people call Agents. And we also talked about Memory. And then finally there's Callbacks. Are there any of the less understood concepts in LangChain that you want to give some air to? [00:12:53]Harrison: I mean, I think buried in ModelIO is some stuff around like few-shot example selectors that I think is really powerful. That's a workhorse. [00:13:01]Swyx: Yeah. I think that's where I start with LangChain. [00:13:04]Harrison: It's one of those things that you probably don't, if you're building an application, you probably don't start with it. You probably start with like a zero-shot prompt. But I think that's a really powerful one that's probably just talked about less because you don't need it right off the bat. And for those of you who don't know, that basically selects from a bunch of examples the ones that are maybe most relevant to the input at hand. So you can do some nice kind of like in-context learning there. I think that's, we've had that for a while. I don't think enough people use that, basically. Output parsers also used to be kind of important, but then function calling. There's this interesting thing where like the space is just like progressing so rapidly that a lot of things that were really important have kind of diminished a bit, to be honest. Output parsers definitely used to be an understated and underappreciated part. And I think if you're working with non-OpenAI models, they still are, but a lot of people are working with OpenAI models. But even within there, there's different things you can do with kind of like the function calling ability. Sometimes you want to have the option of having the text or the application you're building, it could return either. Sometimes you know that it wants to return in a structured format, and so you just want to take that structured format. Other times you're extracting things that are maybe a key in that structured format, and so you want to like pluck that key. And so there's just like some like annoying kind of like parsing of that to do. Agents, memory, and retrieval, we haven't talked at all. Retrieval, there's like five different subcomponents. You could also probably talk about all of those in depth. 
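The few-shot example selector Harrison calls a workhorse picks, from a pool of examples, the ones most relevant to the current input before the prompt is built. A toy version follows, using word overlap where a real selector would typically use embedding similarity; the antonym examples are invented for illustration.

```python
# Toy few-shot example selector: rank a pool of examples by crude
# relevance to the input and keep the top k for in-context learning.

def overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall building", "output": "short building"},
    {"input": "fast car", "output": "slow car"},
]

def select_examples(user_input: str, k: int = 2) -> list[dict]:
    return sorted(examples,
                  key=lambda ex: overlap(ex["input"], user_input),
                  reverse=True)[:k]

def build_prompt(user_input: str) -> str:
    shots = "\n".join(f"Input: {ex['input']}\nOutput: {ex['output']}"
                      for ex in select_examples(user_input))
    return f"Give the antonym.\n{shots}\nInput: {user_input}\nOutput:"

print(build_prompt("fast train"))   # 'fast car' ranks first by overlap
```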
You've got the document loaders, the text splitters, the embedding models, the vector stores. Embedding models and vector stores, we don't really have, or sorry, we don't build, we integrate with those. Text splitters, I think we have like 15 or so. Like I think there's an under kind of like appreciated amount of those. [00:14:39]Swyx: And then... Well, it's actually, honestly, it's overwhelming. Nobody knows what to choose. [00:14:43]Harrison: Yeah, there is a lot. [00:14:44]Swyx: Yeah. Do you have personal favorites that you want to shout out? [00:14:47]Harrison: The one that we have in the docs is the default is like the recursive text splitter. We added a playground for text splitters the other week because, yeah, we heard a lot that like, you know, and like these affect things like the chunk overlap and the chunks, they affect things in really subtle ways. And so like I think we added a playground where people could just like choose different options. We have like, and a lot of the ideas are really similar. You split on different characters, depending on kind of like the type of text that you have marked down, you might want to split on differently than HTML. And so we added a playground where you can kind of like choose between those. I don't know if those are like underappreciated though, because I think a lot of people talk about text splitting as being a hard part, and it is a really important part of creating these retrieval applications. But I think we have a lot of really cool retrieval algorithms as well. So like self query is maybe one of my favorite things in LangChain, which is basically this idea of when you have a user question, the typical kind of like thing to do is you embed that question and then find the document that's most similar to that question. But oftentimes questions have things that just, you don't really want to look up semantically, they have some other meaning. So like in the example that I use, the example in the docs is like movies about aliens in the year 1980. 1980, I guess there's some semantic meaning for that, but it's a very particular thing that you care about. And so what the self query retriever does is it splits out the metadata filter and most vector stores support like a metadata filter. So it splits out this metadata filter, and then it splits out the semantic bit. And that's actually like kind of tricky to do because there's a lot of different filters that you can have like greater than, less than, equal to, you can have and things if you have multiple filters. So we have like a pretty complicated like prompt that does all that. That might be one of my favorite things in LangChain, period. Like I think that's, yeah, I think that's really cool. [00:16:26]Alessio: How do you think about speed of development versus support of existing things? So we mentioned retrieval, like you got, or, you know, text splitting, you got like different options for all of them. As you get building LangChain, how do you decide which ones are not going to keep supporting, you know, which ones are going to leave behind? I think right now, as you said, the space moves so quickly that like you don't even know who's using what. What's that like for you? [00:16:50]Harrison: Yeah. I mean, we have, you know, we don't really have telemetry on what people are using in terms of what parts of LangChain, the telemetry we have is like, you know, anecdotal stuff when people ask or have issues with things. 
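To make the self-query idea concrete: the "movies about aliens in the year 1980" question gets split into a semantic query for embedding search plus an exact metadata filter the vector store can push down. A sketch of that output shape, with the structured LLM call stubbed; LangChain's actual SelfQueryRetriever drives this split with a fairly involved prompt over the metadata schema.

```python
# Self-query in miniature: one user question becomes (a) a metadata
# filter applied exactly and (b) a semantic query for embedding search.
# The model call is a stub returning what a real model might produce.

import json

def llm_structured(question: str) -> str:
    # A real model, shown the metadata schema, might return:
    return json.dumps({
        "semantic_query": "movies about aliens",
        "filter": {"field": "year", "op": "eq", "value": 1980},
    })

def self_query(question: str) -> tuple[str, dict]:
    parsed = json.loads(llm_structured(question))
    return parsed["semantic_query"], parsed["filter"]

query, metadata_filter = self_query("movies about aliens in the year 1980")
print(query)            # embedded and searched semantically
print(metadata_filter)  # pushed down to the vector store as an exact match
```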
A lot of it also is like, I think we definitely prioritize kind of like keeping up with the stuff that comes out. I think we added function calling, like the day it came out or the day after it came out, we added chat model support, like the day after it came out or something like that. That's probably, I think I'm really proud of how the team has kind of like kept up with that because this space is like exhausting sometimes. And so that's probably, that's a big focus of ours. The support, I think we've like, to be honest, we've had to get kind of creative with how we do that. Cause we have like, I think, I don't know how many open issues we have, but we have like 3000, somewhere between 2000 and 3000, like open GitHub issues. We've experimented with a lot of startups that are doing kind of like question answering over your docs and stuff like that. And so we've got them on the website and in the discord and there's a really good one, dosu on the GitHub that's like answering issues and stuff like that. And that's actually something we want to start leaning into more heavily as a company as well as kind of like building out an AI dev rel because we're 10 people now, 10, 11 people now. And like two months ago we were like six or something like that. Right. So like, and to have like 2,500 open issues or something like that, and like 300 or 400 PRs as well. Cause like one of the amazing things is that like, and you kind of alluded to this earlier, everyone's building in the space. There's so many different like touch points. LangChain is lucky enough to kind of like be a lot of the glue that connects it. And so we get to work with a lot of awesome companies, but that's also a lot of like work to keep up with as well. And so I don't really have an amazing answer, but I think like the, I think prioritize kind of like new things that, that come out. And then we've gotten creative with some of kind of like the support functions and, and luckily there's, you know, there's a lot of awesome people working on all those support coding, question answering things that we've been able to work with. [00:18:46]Swyx: I think there is your daily rhythm, which I've seen you, you work like a, like a beast man, like mad impressive. And then there's sometimes where you step back and do a little bit of high level, like 50,000 foot stuff. So we mentioned, we mentioned retrieval. You did a refactor in March and there's, there's other abstractions that you've sort of changed your mind on. When do you do that? When do you do like the, the step back from the day to day and go, where are we going and change the direction of the ship? [00:19:11]Harrison: It's a good question so far. It's probably been, you know, we see three or four or five things pop up that are enough to make us think about it. And then kind of like when it reaches that level, you know, we don't have like a monthly meeting where we sit down and do like a monthly plan or something. [00:19:27]Swyx: Maybe we should. I've thought about this. Yeah. I'd love to host that meeting. [00:19:32]Harrison: It's really been a lot of, you know, one of the amazing things is we get to interact with so many different people. So it's been a lot of kind of like just pattern matching on what people are doing and trying to see those patterns before they punch us in the face or something like that. So for retrieval, it was the pattern of seeing like, Hey, yeah, like a lot of people are using vector sort of stuff. 
But there's also just like other methods and people are offering like hosted solutions and we want our abstractions to work with that as well. So we shouldn't bake in this paradigm of doing like semantic search too heavily, which sounds like basic now, but I think like, you know, to start a lot of it was people needed help doing these things. But then there was like managed things that did them, hybrid retrieval mechanisms, all of that. I think another example of this, I mean, LangSmith, which we can maybe talk about, was like very kind of like, I think we worked on that for like three or four months before announcing it kind of like publicly, two months maybe before giving it to kind of like anyone in beta. But this was because debugging these applications is a pain point. We hear that like just understanding what's going on is a pain point. [00:20:27]Alessio: I mean, you two did a webinar on this, which is called Agents vs. Chains. It was fun, baby. [00:20:32]Swyx: Thanks for having me on. [00:20:33]Harrison: No, thanks for coming. [00:20:34]Alessio: That was a good one. And on the website, you list RAG, which is retrieval augmented generation, and agents as two of the main goals of LangChain. The difference, I think at the Databricks keynote, you said chains are like predetermined steps and agents is models reasoning to figure out what steps to take and what actions to take. How should people think about when to use the two and how do you transition from one to the other with LangChain? Like is it a path that you support or like do people usually re-implement from an agent to a chain or vice versa? [00:21:05]Swyx: Yeah. [00:21:06]Harrison: You know, I know agent is probably an overloaded term at this point, and so there's probably a lot of different definitions out there. But yeah, as you said, kind of like the way that I think about an agent is basically like in a chain, you have a sequence of steps. You do this and then you do this and then you do this and then you do this. And with an agent, there's some aspect of it where the LLM is kind of like deciding what to do and what steps to do in what order. And you know, there's probably some like gray area in the middle, but you know, don't fight me on this. And so if we think about those, like the benefits of the chains are that they're like, you can say do this and you just have like a more rigid kind of like order and the way that things are done. They have more control and they don't go off the rails, and basically everything that's bad about agents in terms of being uncontrollable and expensive, you can control more finely. The benefit of agents is that I think they handle like the long tail of things that can happen really well. And so for an example of this, let's maybe think about like interacting with a SQL database. So you can have like a SQL chain and you know, the first kind of like naive approach at a SQL chain would be like, okay, you have the user question. And then you like write the SQL query, you do some RAG, you pull in the relevant tables and schemas, you write a SQL query, you execute that against the SQL database. And then you like return that as the answer, or you like summarize that with an LLM and return that as the answer. And that's basically the SQL chain that we have in LangChain. But there's a lot of things that can go wrong in that process. Starting from the beginning, you may like not want to even query the SQL database at all.
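Before the failure modes, here is the naive SQL chain Harrison just outlined in skeleton form: fetch the schema, have the model write SQL, run it, return the rows. The model call is a stub and the in-memory sqlite table is invented for illustration.

```python
# Skeleton of the naive SQL chain: question -> (schema + question) prompt
# -> SQL from the model -> execute -> answer. Model call is stubbed.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace')")

def llm_sql(prompt: str) -> str:
    # A real model would write this from the schema in the prompt.
    return "SELECT COUNT(*) FROM users"

def sql_chain(question: str) -> str:
    schema = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
    query = llm_sql(f"Schema: {schema}\nQuestion: {question}\nSQL:")
    rows = conn.execute(query).fetchall()  # no guardrails: a bad query raises
    return f"Result: {rows}"

print(sql_chain("How many users are there?"))   # -> Result: [(2,)]
```

Every step in this straight line is a place it can break, which is exactly the long tail the agent version is meant to absorb, as the transcript continues below.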
Maybe they're saying like, hi, or something, or they're misusing the application. Then like what happens if you mess up some step? Like a big part of the applications people build with LangChain is the context-aware part. So there's generally some part of bringing in context to the language model. So if you bring in the wrong context to the language model, so it doesn't know which tables to query, what do you do then? What if you write a SQL query that's like syntactically wrong and it can't run? And then if it can run, like what if it returns an unexpected result or something? And so basically what we do with the SQL agent is we give it access to all these different tools. So it has one tool to write the SQL query, it can run the SQL query as another, and then it can respond to the user. And it can decide which order to do these in. And so it gives it flexibility to handle all these edge cases. And there's like, obviously downsides to that as well. And so there's probably like some safeguards you want to put in place around agents in terms of like not letting them run forever, having some observability in there. But I do think there's this benefit of, you know, like, again, the other part of what LangChain is about is the reasoning part. Like each of those steps individually involves some aspect of reasoning, for sure. Like you need to reason about what the SQL query is, you need to reason about what to return. But then there's also reasoning about the order of operations. And so I think to me, the key is kind of like giving it an appropriate amount to reason about while still keeping it within checks. And so to the point, like, I would probably recommend that most people get started with chains and then when they get to the point where they're hitting these edge cases, then they think about, okay, I'm hitting a bunch of edge cases where the SQL query is just not returning like the relevant things. Maybe I should add in some step there and let it maybe make multiple queries or something like that. Basically, like start with chain, figure out when you're hitting these edge cases, add in the reasoning step to that to handle those edge cases appropriately. That would be kind of like my recommendation, right? [00:24:09]Swyx: If I were to rephrase it, in my words, an agent would be a reasoning node in a chain, right? Like you start with a chain, then you just add a reasoning node, now it's an agent. [00:24:17]Harrison: Yeah, the architecture for your application doesn't have to be just a chain or just an agent. It can be an agent that calls chains, it can be a chain that has an agent in different parts of them. And this is another part as well. Like the chains in LangChain are largely intended as kind of like a way to get started and take you some amount of the way. But for your specific use case, in order to kind of like eke out the most performance, you're probably going to want to do some customization at the very basic level, like probably around the prompt or something like that. And so one of the things that we've focused on recently is like making it easier to customize these bits of existing architectures. But you probably also want to customize your architectures as well. [00:24:52]Swyx: You mentioned a bit of prompt engineering for self-ask and then for this stuff. There's a bunch of, I just talked to a prompt engineering company today, PromptOps or LLMOps. Do you have any advice or thoughts on that field in general? Like are you going to compete with them?
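And the agent version of the same task is, at bottom, the while loop: the model picks the next tool each turn until it decides to answer, which is how it absorbs edge cases like a malformed query. In the sketch below the "reasoning" is a scripted queue of decisions so the loop is runnable; a real agent would parse the tool choice out of LLM output at each step.

```python
# The agent loop in miniature: repeatedly pick a tool, observe the
# result, and stop with a final answer. Scripted decisions stand in
# for the LLM so the example runs deterministically.

from collections import deque

def run_sql(query: str) -> str:
    # Toy execution: reject anything that isn't a well-formed SELECT.
    if not query.startswith("SELECT"):
        return "error: syntax error near 'SELEC'"
    return "[(2,)]"

TOOLS = {"run_sql": run_sql}

scripted = deque([
    ("run_sql", "SELEC COUNT(*) FROM users"),   # typo: error observed
    ("run_sql", "SELECT COUNT(*) FROM users"),  # model corrects itself
    ("final_answer", "There are 2 users."),
])

def agent(question: str, max_steps: int = 5) -> str:
    observations = [f"question: {question}"]
    for _ in range(max_steps):              # safeguard: never loop forever
        action, arg = scripted.popleft()    # real agent: llm(observations)
        if action == "final_answer":
            return arg
        observations.append(f"{action} -> {TOOLS[action](arg)}")
    return "gave up"

print(agent("How many users are there?"))
```

The `max_steps` cap and the observation log are exactly the "not letting them run forever, having some observability" safeguards Harrison mentions.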
Do you have internal tooling that you've built? [00:25:08]Harrison: A lot of what we do is like where we see kind of like a lot of the pain points being like we can talk about LangSmith and that was a big motivation for that. And like, I don't know, would you categorize LangSmith as PromptOps? [00:25:18]Swyx: I don't know. It's whatever you want it to be. Do you want to call it? [00:25:22]Harrison: I don't know either. Like I think like there's... [00:25:24]Swyx: I think about it as like a prompt registry and you store them and you A-B test them and you do that. LangSmith, I feel like doesn't quite go there yet. Yeah. It's obviously the next step. [00:25:34]Harrison: Yeah, we'll probably go. And yeah, we'll do more of that because I think that's definitely part of the application of a chain or agent is you start with a default one, then you improve it over time. And like, I think a lot of the main new thing that we're dealing with here is like language models. And the main new way to control language models is prompts. And so like a lot of the chains and agents are powered by this combination of like prompt language model and then some output parser or something doing something with the output. And so like, yeah, we want to make that core thing as good as possible. And so we'll do stuff all around that for sure. [00:26:05]Swyx: Awesome. We might as well go into LangSmith because we're bringing it up so much. So you announced LangSmith I think last month. What are your visions for it? Is this the future of LangChain and the company? [00:26:16]Harrison: It's definitely part of the future. So LangSmith is basically a control center for kind of like your LLM application. So the main features that it kind of has is like debugging, logging, monitoring, and then like testing and evaluation. And so debugging, logging, monitoring, basically you set three environment variables and it kind of like logs all the runs that are happening in your LangChain chains or agents. And it logs kind of like the inputs and outputs at each step. And so the main use case we see for this is in debugging. And that's probably the main reason that we started down this path of building it is I think like as you have these more complex things, debugging what's actually going on becomes really painful whether you're using LangChain or not. And so like adding this type of observability and debuggability was really important. Yeah. There's a debugging aspect. You can see the inputs, outputs at each step. You can then quickly enter into like a playground experience where you can fiddle around with it. The first version didn't have that playground and then we'd see people copy, go to open AI playground, paste in there. Okay. Well, that's a little annoying. And then there's kind of like the monitoring, logging experience. And we recently added some analytics on like, you know, how many requests are you getting per hour, minute, day? What's the feedback like over time? And then there's like a testing debugging, sorry, testing and evaluation component as well where basically you can create datasets and then test and evaluate these datasets. And I think importantly, all these things are tied to each other and then also into LangChain, the framework. So what I mean by that is like we've tried to make it as easy as possible to go from logs to adding a data point to a dataset. And because we think a really powerful flow is you don't really get started with a dataset. You can accumulate a dataset over time. 
And so being able to find points that have gotten like a thumbs up or a thumbs down from a user can be really powerful in terms of creating a good dataset. And so that's maybe like a connection between the two. And then the connection in the other way is like all the runs that you have when you test or evaluate something, they're logged in the same way. So you can debug what exactly is going on and you don't just have like a final score. You have like this nice trace and thing where you can jump in. And then we also want to do more things to hook this into LangChain proper, the framework. So I think like some of the prompt management will tie in here already. Like we talked about, example selectors using datasets as few-shot examples is a path that we support in a somewhat janky way right now, but we're going to like make better over time. And so there's this connection between everything. Yeah. [00:28:42]Alessio: And you mentioned the dataset in the announcement blog post, you touched on heuristic evaluation versus LLMs evaluating LLMs. I think there's a lot of talk and confusion about this online. How should people prioritize the two, especially when they might start with like not a good set of evals or like any data at all? [00:29:01]Harrison: I think it's really use case specific in the distinction that I draw between heuristic and LLM. With LLMs, you're using an LLM to evaluate the output; with heuristics, you have some common heuristic that you can use. And so some of these can be like really simple. So we were doing some kind of like measuring of an extraction chain where we wanted it to output JSON. Okay. One evaluation can be, can you use json.loads to load it? And like, right. And that works perfectly. You don't need an LLM to do that. But then for like a lot of like the question answering, like, is this factually accurate? And you have some ground truth fact that you know it should be answering with. I think, you know, LLMs aren't perfect. And I think there's a lot of discussion around the pitfalls of using LLMs to evaluate themselves. And I'm not saying they're perfect by any means, but we've found them to be kind of like better than BLEU or any of those metrics. And the way that I also like to use those is also just like to guide my eye about where to look. So like, you know, I might not trust the score of like 0.82, like exactly correct, but like I can look to see like which data points are like flagged as passing or failing. And sometimes the evaluator's messing up, but it's like good to like, you know, I don't have to look at like a hundred data points. I can focus on like 10 or something like that. [00:30:10]Alessio: And then can you create a heuristic once in LangSmith? Like what's like your connection to that? [00:30:16]Harrison: Yeah. So right now, all the evaluation, we actually do client side. And part of this is basically due to the fact that a lot of the evaluation is really application specific. So we thought about having evaluators, you could just click off and run in a server side or something like that. But we still think it's really early on in evaluation. We still think there's, it's just really application specific. So we prioritized instead, making it easy for people to write custom evaluators and then run them client side and then upload the results so that they can manually inspect them because I think manual inspection is still a pretty big part of evaluation for better or worse.
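To make the heuristic side of this concrete, here is the json.loads check Harrison describes, written as a small evaluator you could run client side. This is a sketch: the function name and the key/score return shape are illustrative conventions, not a required LangSmith interface.

```python
import json

def json_validity_evaluator(output: str) -> dict:
    """Heuristic evaluator: for an extraction chain that is supposed to emit
    JSON, 'does json.loads parse it?' is a complete eval, no LLM grader needed."""
    try:
        json.loads(output)
        return {"key": "valid_json", "score": 1}
    except json.JSONDecodeError:
        return {"key": "valid_json", "score": 0}

# Run it over logged outputs and eyeball only the failures, the
# "guide my eye about where to look" workflow described above.
outputs = ['{"name": "Ada"}', "not json at all"]
scores = [json_validity_evaluator(o) for o in outputs]
failures = [o for o, s in zip(outputs, scores) if s["score"] == 0]
print(failures)  # -> ['not json at all']
```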
[00:30:50]Swyx: We have this sort of components of observability. We have cost, latency, accuracy, and then planning. Is that listed in there? [00:30:57]Alessio: Well, planning more in the terms of like, if you're an agent, how to pick the right tool and whether or not you are picking the right tool. [00:31:02]Swyx: So when you talk to customers, how would you stack rank those needs? Are they cost sensitive? Are they latency sensitive? I imagine accuracy is pretty high up there. [00:31:13]Harrison: I think accuracy is definitely the top that we're seeing right now. I think a lot of the applications, people are, especially the ones that we're working with, people are still struggling to get them to work at a level where they're reliable enough. [00:31:25] So that's definitely the first. Then I think probably cost becomes the next one. I think a few places where we've started to see this be like one of the main things is the AI simulation that came out. [00:31:36]Swyx: Generative agents. Yeah, exactly. [00:31:38]Harrison: Which is really fun to run, but it costs a lot of money. And so one of our team members, Lance, did an awesome job hooking up like a local model to it. You know, it's not as perfect, but I think it helps with that. Another really big place for this, we believe, is in like extraction of structured data from unstructured data. And the reason that I think it's so important there is that usually you do extraction as some type of like pre-processing or indexing process over your documents. I mean, there's a bunch of different use cases, but one use case is for that. And generally that's over a lot of documents. And so that starts to rack up a bill kind of quickly. And I think extraction is also like a simpler task than like reasoning about which tools to call next in an agent. And so I think it's better suited for that. Yeah. [00:32:15]Swyx: On one of the heuristics I wanted to get your thoughts on, hallucination is one of the big problems there. Do you have any recommendations on how people should reduce hallucinations? [00:32:25]Harrison: To reduce hallucinations, we did a webinar on like evaluating RAG this past week. And I think there's this great project called RAGAS that evaluates four different things across two different spectrums. So the two different spectrums are like, is the retrieval part right, or is the generation? Or sorry, like, is it messing up in retrieval or is it messing up in generation? And so I think to fix hallucination, it probably depends on where it's messing up. If it's messing up in generation, then you're getting the right information, but it's still hallucinating. Or you're getting like partially right information and hallucinating some bits. A lot of that's prompt engineering. And so that's what we would recommend kind of like focusing on, the prompt engineering part. And then if you're just not retrieving the right stuff, then there's a lot of different things that you can probably do, or you should look at, on the retrieval bit. And honestly, that's where it starts to become a bit like application specific as well. Maybe there's some temporal stuff going on. Maybe you're not parsing things correctly. Yeah.
[00:34:17]Harrison: Yeah, I mean, there's probably a larger discussion around that, but OpenAI definitely had a huge headstart, right? And that's... Claude's not even publicly available yet, I don't think. [00:34:28]Swyx: The API? Yeah. Oh, well, you can just basically ask any of the business reps and they'll give it to you. [00:34:33]Harrison: You can. But it's still a different signup process. I think there's... I'm bullish that other ones will catch up, especially like Anthropic and Google. The local ones are really interesting. I think we're seeing a big... [00:34:46]Swyx: Llama 2? Yeah, we're doing the fine-tuning hackathon tomorrow. Thanks for promoting that. [00:34:50]Harrison: No, thanks for it. I'm really excited about that stuff. I mean, that's something that like we've been, you know, because like, as I said, like the only thing we know is that the space is moving so fast and changing so rapidly. And like, local models have always been one of those things that people have been bullish on. And it seems like it's getting closer and closer to kind of like being viable. So I'm excited to see what we can do with some fine-tuning. [00:35:10]Swyx: Yeah. I have to confess, I did not know that you cared. It's not like a judgment on LangChain. I was just like, you know, you write an adapter for it and you're done, right? Like how much further does it go for LangChain? In terms of like, for you, it's one of the, you know, the model IO modules and that's it. But like, you seem very personally, very passionate about it, but I don't know what the LangChain specific angle for this is, for fine-tuning local models, basically. Like you're just passionate about local models and privacy and all that, right? And open source. [00:35:41]Harrison: Well, I think there's a few different things. Like one, like, you know, if we think about what it takes to build a really reliable, like context-aware reasoning application, there's probably a bunch of different nodes that are doing a bunch of different things. And I think it is like a really complex system. And so if you're relying on OpenAI for every part of that, like, I think that starts to get really expensive. Also like, probably just like not good to have that much reliance on any one thing. And so I do think that like, I'm hoping that for like, you know, specific parts at the end, you can like fine-tune a model and kind of have a more specific thing for a specific task. Also, to be clear, like, I think like, I also, at the same time, I think OpenAI is by far the easiest way to get started. And if I was building anything, I would absolutely start with OpenAI. So. [00:36:27]Swyx: It's something I think a lot of people are wrestling with. But like, as a person building apps, why take five vendors when I can take one vendor, right? Like, as long as I trust Azure, I'm just entrusting all my data to Azure and that's it. So I'm still trying to figure out the real case for local models in production. And I don't know, but fine-tuning, I think, is a good one. That's why I guess OpenAI worked on fine-tuning. [00:36:49]Harrison: I think there's also like, you know, like if there is, if there's just more options available, like prices are going to go down. So I'm happy about that. So like very selfishly, there's that aspect as well.
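As a rough sketch of the "start with OpenAI, then localize or fine-tune specific parts" idea, here is what swapping the model behind a single chain step looked like in the LangChain Python package around the time of this episode. The local model path is a placeholder, and the class locations (langchain.llms.OpenAI, langchain.llms.LlamaCpp) have moved in later versions, so treat this as illustrative rather than current.

```python
from langchain.llms import OpenAI, LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# One extraction step in a larger pipeline, first backed by OpenAI...
prompt = PromptTemplate.from_template("Extract the city name from: {text}")
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(text="She flew from Paris to Tokyo last week."))

# ...then the same step backed by a local Llama model. Only the llm argument
# changes; the prompt and the chain wiring stay identical.
local_chain = LLMChain(
    llm=LlamaCpp(model_path="./llama-2-7b.gguf"),  # placeholder path
    prompt=prompt,
)
print(local_chain.run(text="She flew from Paris to Tokyo last week."))
```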
[00:37:01]Alessio: And in the LangSmith announcement, I saw in the product screenshot, you have like chain, tool and LLM as like the three core atoms. Is that how people should think about observability in this space? Like first you go through the chain and then you start to dig down between like the model itself and like the tool it's using? [00:37:19]Harrison: We've added more. We've added like retriever logging so that you can see like what query is going in and what are the documents you're getting out. Those are like the three that we started with. I definitely think probably the main ones, like basically the LLM. So the reason I think the debugging in LangSmith and debugging in general is so needed for these LLM apps is that if you're building, like, again, let's think about like what we want people to build with LangChain. These like context aware reasoning applications. Context aware. There's a lot of stuff in the prompt. There's like the instructions. There's any previous messages. There's any input this time. There's any documents you retrieve. And so there's a lot of like data engineering that goes into like putting it into that prompt. This sounds silly, but just like making sure the data shows up in the right format is like really important. And then for the reasoning part of it, like that's obviously also all in the prompt. And so being able to like, and there's like, you know, the state of the world right now, like if you have the instructions at the beginning or at the end can actually make like a big difference in terms of whether it forgets it or not. And so being able to kind of like... [00:38:17]Swyx: Yeah. And there's data on that one, by the way. This is the U-curve in context, right? Yeah. [00:38:21]Harrison: I think it's real. Basically I've found long context windows really good for when I want to extract like a single piece of information about something basically. But if I want to do reasoning over perhaps multiple pieces of information that are somewhere in like the retrieved documents, I found it not to be that great. [00:38:36]Swyx: Yeah. I have said that that piece of research is the best bull case for LangChain and all the vector companies, because it means you should do chains. It means you should do retrieval instead of long context, right? People are trying to extend long context to like 100K, 1 million tokens, 5 million tokens. It doesn't matter. You're going to forget. You can't trust it. [00:38:54]Harrison: I expect that it will probably get better over time, as everything in this field. But I do also think there'll always be a need for kind of like vector stores and retrieval in some fashions. [00:39:03]Alessio: How should people get started with LangSmith Cookbooks? Wanna talk maybe a bit about that? [00:39:08]Swyx: Yeah. [00:39:08]Harrison: Again, like I think the main thing that even I find valuable about LangSmith is just like the debugging aspect of it. And so for that, it's very simple. You can kind of like turn on three environment variables and it just logs everything. And you don't look at it 95% of the time, but that 5% you do when something goes wrong, it's quite handy to have there. And so that's probably the easiest way to get started. And we're still in a closed beta, but we're letting people off the wait list every day. And if you really need access, just DM me and we're happy to give you access there. And then yeah, there's a lot that you can do with LangSmith that we've been talking about.
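For reference, here is what that three-environment-variable setup looked like in practice, plus the logs-to-dataset flow mentioned earlier in the conversation. The variable names and client methods below are the mid-2023 ones and are an assumption on my part; check the LangSmith docs for your version.

```python
import os
from langsmith import Client  # pip install langsmith

# Turn on tracing: with these set, every chain or agent run gets logged.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"

# Later, promote interesting logged runs into a dataset for testing:
client = Client()
dataset = client.create_dataset("debugging-examples")
for run in client.list_runs(project_name="default"):
    client.create_example(
        inputs=run.inputs,
        outputs=run.outputs,
        dataset_id=dataset.id,
    )
```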
And so Will on our team has been leading the charge on a really great LangSmith Cookbooks repo that covers everything from collecting feedback, whether it's thumbs up, thumbs down, or like multi-scale or comments as well, to doing evaluation, doing testing. You can also use LangSmith without LangChain. And so we've got some notebooks on that in there. But we have Python and JavaScript SDKs that aren't dependent on LangChain in any way. [00:40:01] And so you can use those. And then we'll also be publishing a notebook on how to do that just with the REST APIs themselves. So yeah, definitely check out that repo. That's a great resource that Will's put together. [00:40:10]Swyx: Yeah, awesome. So we'll zoom out a little bit from LangSmith and talk about LangChain, the company. You're also a first-time founder. Yes. And you've just hired your 10th employee, Julia, who I know from my data engineering days. You mentioned Will. Nuno, I think, maintains LangChain.js. I'm very interested in like your multi-language strategy, by the way. Ankush, your co-founder, Lance, who did AutoEval. What are you staffing up for? And maybe who are you hiring? [00:40:34]Harrison: Yeah, so 10 employees, 12 total. We've got three more joining over the next three weeks. We've got Julia, who's awesome, leading a lot of the product, go-to-market, customer success stuff. And then we've got Bri, who's also awesome, leading a lot of the marketing and ops aspects. And then other than that, all engineers. We've staffed up a lot on kind of like full stack infra DevOps, kind of like as we've started going into the hosted platform. So internally, we're split about 50-50 between the open source and then the platform stuff. And yeah, we're looking to hire particularly on kind of like the things, we're actually looking to hire across most fronts, to be honest. But in particular, we probably need one or two more people on like open source, both Python and JavaScript, and happy to dive into the multi-language kind of like strategy there. But again, like strong focus there on engineering, actually, as opposed to maybe like, we're not a research lab, we're not a research shop. [00:41:48] And then on the platform side, like we definitely need some more people on the infra and DevOps side. So I'm using this as an opportunity to tell people that we're hiring and that you should reach out if that sounds like you. [00:41:58] Something like that, jobs, whatever. I don't actually know if we have an official jobs page. [00:42:02]Swyx: RIP, what happened to your landing page? It used to be so based. The Berkshire Hathaway one? Yeah, so what was the story, the quick story behind that? [00:42:12]Harrison: Yeah, the quick story behind that is we needed a website and I'm terrible at design. And I knew that we couldn't do a good job. So if you can't do a good job, might as well do the worst job possible. Yeah, and like lean into it. And have some fun with it, yeah. [00:42:21]Swyx: Do you admire Warren Buffett? [00:42:26]Harrison: Yeah, I admire Warren Buffett and admire his website. And actually you can still find a link to it from our current website if you look hard enough. So there's a little Easter egg. [00:42:33]Alessio: Before we dive into more of the open source community things, let's dive into the language thing. How do you think about parity between the Python and JavaScript versions? Obviously, they're very different ecosystems.
So when you're working on LangChain, is it that you need to have the same abstractions in both languages, or do you adapt to the needs of each? [00:42:50]Harrison: The core stuff, we want to have the same abstractions, because we basically want to be able to serialize prompts, chains, agents, all the core stuff as tightly as possible and then use that between languages. Like even right now, when we log things to LangSmith, we have a playground experience where you can run things, and that runs in JavaScript because it's kind of like in the browser. But a lot of what's logged is like Python. And so we need that core equivalence for a lot of the core things. Then there's like the incredibly long tail of like integrations, more researchy things. So we want to be able to do that. Python's probably ahead on a lot of like the integrations front. There's more researchy things that we're able to include quickly because a lot of people release some of their code in Python and stuff like that. And so we can use that. And there's just more of an ecosystem around the Python project. But the core stuff will have kind of like the same abstractions and be translatable. [00:43:44]Swyx: That didn't go exactly where I was thinking. So like the LangChain of Ruby, the LangChain of C#, you know, there's demand for that. I mean, I think that's a big part of it. But you are giving up some real estate by not doing it. [00:43:58]Harrison: Yeah, it comes down to kind of like, you know, ROI and focus. And I think like we do think there's a strong JavaScript community and we wanted to lean into that. And I think a lot of the people that we brought on early, like Nuno and Jacob, have a lot of experience building JavaScript tooling in that community. And so I think that's a big part of it. Will we do another language? Never say never, but like... [00:44:21]Swyx: Python, JS for now. Yeah. Awesome. [00:44:23]Alessio: You got 83 articles, which I think might be a record for such a young company. What are like the hottest hits, the most popular ones? [00:44:32]Harrison: I think the most popular ones are generally the ones where we do a deep dive on something. So we did something a few weeks ago around evaluating CSV q
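On the cross-language parity point above: in the mid-2023 Python package, prompts could be serialized to a language-neutral JSON file and loaded back, with the same format intended to be loadable from LangChain.js. A minimal sketch, with API names as of those versions:

```python
from langchain.prompts import PromptTemplate, load_prompt

prompt = PromptTemplate.from_template("Summarize this function:\n{code}")

# Serialize to a language-neutral JSON file...
prompt.save("summarize_prompt.json")

# ...and load it back. The same file is what a JavaScript app would load,
# which is the core-abstraction parity Harrison describes.
restored = load_prompt("summarize_prompt.json")
assert restored.format(code="def f(): pass") == prompt.format(code="def f(): pass")
```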

The Cloudcast
Considerations for Enterprise AI

The Cloudcast

Play Episode Listen Later Aug 27, 2023 34:46


Let's talk through some of the challenges that Enterprises will have with AI - from data location to GPU location, to model biases, to data privacy, to training vs. execution.
SHOW: 748
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
Datadog Security Solution: Modern Monitoring and Security. Start investigating security threats before it affects your customers with a free 14 day Datadog trial. Listeners of The Cloudcast will also receive a free Datadog T-shirt.
Find "Breaking Analysis Podcast with Dave Vellante" on Apple, Google and Spotify. Keep up to date with Enterprise Tech with theCUBE.
AWS Insiders is an edgy, entertaining podcast about the services and future of cloud computing at AWS. Listen to AWS Insiders in your favorite podcast player. Cloudfix Homepage
SHOW NOTES:
An Interview with Daniel Gross and Nat Friedman on the AI Hype Cycle (Stratechery)
ARE THERE EXPECTATIONS OF “OLD AI” vs. “NEW AI”?
Are business leaders thinking about unique AI applications and use-cases, or just “ChatGPT-everything”?
Formal data scientists vs. citizen data scientists?
Will this just be an application, or have an impact on every aspect of a business and the IT industry?
WILL ENTERPRISE AI BE DIFFERENT THAN CONSUMER AI?
The industry is actively working on a broad set of models that can be used for different use-cases.
It's commonly accepted that AI models need to be trained near the sources of data.
Many businesses are concerned about including their company data in these public models.
Many businesses will want to deploy tuned models and applications in data center, public cloud and edge environments.
New AI applications will be required to meet security, regulatory and compliance standards, like other business applications.
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet

This Week in Startups
Using GPUs as leverage, MSFT beats the case, FTC fails under Khan | E1775

This Week in Startups

Play Episode Listen Later Jul 11, 2023 52:02


Lemon.io - Hire pre-vetted remote developers, get 15% off your first 4 weeks of developer time at https://Lemon.io/twist OpenPhone. Create business phone numbers for you and your team that work through an app on your smartphone or desktop. TWiST listeners can get an extra 20% off any plan for your first 6 months at openphone.com/twist VEED makes it super easy for anyone (yes, you) to create great video. Filled with amazing features like templates, auto subtitles, text formatting, auto-resizing, a full suite of AI tools, and much more, VEED gives you the tools to engage your audience on any platform. Head to VEED.io to start creating incredible video content in minutes. * Today's show: Jason breaks down investors and companies using the GPU shortage as leverage to invest in AI startups (1:33) before discussing Microsoft's run-in with EU regulators over bundling Teams into its Office suite (28:53). They wrap on the FTC's loss in its quest to stop Microsoft's Activision Blizzard acquisition, and Lina Khan's track record as FTC chair (37:58). * Time stamps: (0:00) Nick joins Jason (1:33) Nvidia's GPU leverage (9:48) Lemon.io - Get 15% off your first 4 weeks of developer time at https://Lemon.io/twist (11:07) CoreWeave's pivot & the pros and cons of WFH (19:28) OpenPhone - Get 20% off your first six months at https://openphone.com/twist (20:54) Daniel Gross and Nat Friedman's GPU play (27:24) Veed - Sign up and engage your audience on any platform at https://www.veed.io/avatars?utm_campaign=TWIS&utm_medium=YT&utm_source=MKT (28:53) Microsoft's run-in with EU regulators over bundling (37:58) FTC loses cases against Microsoft and Activision Blizzard merger (46:37) Lina Khan's track record as the head of the FTC * Follow Nick: https://twitter.com/nickcalacanis * Read LAUNCH Fund 4 Deal Memo: https://www.launch.co/four Apply for Funding: https://www.launch.co/apply Buy ANGEL: https://www.angelthebook.com Great recent interviews: Steve Huffman, Brian Chesky, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland, PrayingForExits, Jenny Lefcourt Check out Jason's suite of newsletters: https://substack.com/@calacanis * Follow Jason: Twitter: https://twitter.com/jason Instagram: https://www.instagram.com/jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis * Follow TWiST: Substack: https://twistartups.substack.com Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin * Subscribe to the Founder University Podcast: https://www.founder.university/podcast

Startup Insider
Investments & Exits - with Jan Miczaika of HV Capital

Startup Insider

Play Episode Listen Later Jun 21, 2023 19:58


In the "Investments & Exits" segment, we welcome Jan Miczaika, Partner at HV Capital. Jan discusses the funding rounds of ElevenLabs and Deeploy. London-based ElevenLabs has raised 17.3 million euros in a Series A financing round. The funds will go toward further research and product development in voice AI. Investors include Nat Friedman, Daniel Gross, Andreessen Horowitz, Credo Ventures, Concept Ventures, and several well-known angel investors. ElevenLabs develops innovative products for various industries, including publishing, gaming, and entertainment. With its advanced speech technologies, the company aims to overcome language barriers and make content universally accessible. Dutch startup Deeploy has raised 2.5 million euros in a financing round. Deeploy is a software platform for monitoring AI models that enables transparent and explainable deployments of machine learning models. The company was founded in 2020 by Maarten Stolk, Bastiaan van de Rakt, Tim Kleinloog, and Nick Jetten, and helps companies implement AI applications safely and responsibly. Investors include Curiosity, SI3, Bonsai Partners, Emilia Capital, and Anton Loeffen.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
Training a SOTA Code LLM in 1 week and Quantifying the Vibes — with Reza Shabani of Replit

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later May 3, 2023 69:31


Latent Space is popping off! Welcome to the over 8500 latent space explorers who have joined us. Join us this month at various events in SF and NYC, or start your own!
This post spent 22 hours at the top of Hacker News.
As announced during their Developer Day celebrating their $100m fundraise following their Google partnership, Replit is now open sourcing its own state of the art code LLM: replit-code-v1-3b (model card, HF Space), which beats OpenAI's Codex model on the industry standard HumanEval benchmark when finetuned on Replit data (despite being 77% smaller) and more importantly passes AmjadEval (we'll explain!)
We got an exclusive interview with Reza Shabani, Replit's Head of AI, to tell the story of Replit's journey into building a data platform, building GhostWriter, and now training their own LLM, for 22 million developers!
8 minutes of this discussion go into a live demo discussing generated code samples - which is always awkward on audio. So we've again gone multimodal and put up a screen recording here where you can follow along on the code samples!
Recorded in-person at the beautiful StudioPod studios in San Francisco.
Full transcript is below the fold. We would really appreciate if you shared our pod with friends on Twitter, LinkedIn, Mastodon, Bluesky, or your social media poison of choice!
Timestamps
* [00:00:21] Introducing Reza
* [00:01:49] Quantitative Finance and Data Engineering
* [00:11:23] From Data to AI at Replit
* [00:17:26] Replit GhostWriter
* [00:20:31] Benchmarking Code LLMs
* [00:23:06] AmjadEval live demo
* [00:31:21] Aligning Models on Vibes
* [00:33:04] Beyond Chat & Code Completion
* [00:35:50] Ghostwriter Autonomous Agent
* [00:38:47] Releasing Replit-code-v1-3b
* [00:43:38] The YOLO training run
* [00:49:49] Scaling Laws: from Kaplan to Chinchilla to LLaMA
* [00:52:43] MosaicML
* [00:55:36] Replit's Plans for the Future (and Hiring!)
* [00:59:05] Lightning Round
Show Notes
* Reza Shabani on Twitter and LinkedIn
* also Michele Catasta and Madhav Singhal
* Michele Catasta's thread on the release of replit-code-v1-3b
* Intro to Replit Ghostwriter
* Replit Ghostwriter Chat and Building Ghostwriter Chat
* Reza on how to train your own LLMs (their top blog of all time)
* Our Benchmarks 101 episode where we discussed HumanEval
* AmjadEval live demo
* Nat.dev
* MosaicML CEO Naveen Rao on Replit's LLM
* MosaicML Composer + FSDP code
* Replit's AI team is hiring in North America timezone - Fullstack engineer, Applied AI/ML, and other roles!
Transcript
[00:00:00] Alessio Fanelli: Hey everyone. Welcome to the Latent Space podcast. This is Alessio, partner and CTO in residence at Decibel Partners. I'm joined by my co-host, swyx, writer and editor of Latent Space.
[00:00:21] Introducing Reza
[00:00:21] swyx: Hey and today we have Reza Shabani, Head of AI at Replit. Welcome to the studio. Thank you. Thank you for having me. So we try to introduce people's bios so you don't have to repeat yourself, but then also get a personal side of you.
[00:00:34] You got your PhD in econ from Berkeley, and then you were a startup founder for a bit, and, and then you went into systematic equity trading at BlackRock in Wellington. And then something happened and you're now Head of AI at Replit. What should people know about you that might not be apparent on LinkedIn?
[00:00:51] Reza Shabani: One thing that comes up pretty often is whether I know how to code. Yeah, you'd be shocked. A lot of people are kind of like, do you know how to code?
When I was talking to Amjad about this role, I'd originally talked to him, I think, about a product role and, and didn't get it. Then he was like, well, I know you've done a bunch of data and analytics stuff.[00:01:07] We need someone to work on that. And I was like, sure, I'll, I'll do it. And he was like, okay, but you might have to know how to code. And I was like, yeah, yeah, I, I know how to code. So I think that just kind of surprises people coming from like an econ background. Yeah. People are always kind of like, wait, even when people join Replit, they're like, wait, does this guy actually know how to code?[00:01:28] Is he actually technical? Yeah.[00:01:30] swyx: You did a bunch of number crunching at top financial companies and it still wasn't[00:01:34] Reza Shabani: obvious. Yeah. Yeah. I mean, I, I think someone like in a software engineering background, cuz you think of finance and you think of like calling people to get the deal done and that type of thing.[00:01:43] No, it's, it's not that as, as you know, it's very very quantitative. Especially what I did in, in finance, very quantitative.[00:01:49] Quantitative Finance and Data Engineering[00:01:49] swyx: Yeah, so we can cover a little bit of that and then go into the rapid journey. So as, as you, as you know, I was also a quantitative trader on the sell side and the buy side. And yeah, I actually learned Python there.[00:02:01] I learned my, I wrote my own data pipelines there before Airflow was a thing, and it was just me writing and running notebooks and not version controlling them. And it was a complete mess, but we were managing a billion dollars on, on my crappy code. Yeah, yeah, yeah, yeah. What was it like for you?[00:02:17] Reza Shabani: I guess somewhat similar.[00:02:18] I, I started the journey during grad school, so during my PhD, and my PhD was in economics and it was always on the more data intensive kind of applied economic side. And, and specifically financial economics. And so what I did for my dissertation: I recorded CNBC, the financial news network, for 10 hours a day, every day.[00:02:39] Extracted the closed captions from the video files and then used that to create a second-by-second transcript of, of CNBC, merged that with high-frequency trading quote data, and then looked at, you know, went in and did some, some NLP, tagging the company names, and and then looked at the price response or the change in price and trading volume in the seconds after a company was mentioned.[00:03:01] And, and this was back in 2009 that I was doing this. So before cloud, before, before a lot of Python actually. And, and definitely before any of these packages were available to make this stuff easy. And that's where, where I had to really learn to code, like outside of you know, any kind of like data programming languages.[00:03:21] That's when I had to learn Python and had to learn all, all of these other skills to work with data at that, at that scale. So then, you know, I thought I wanted to do academia. I did terrible on the academic market because everyone looked at my dissertation. They're like, this is cool, but this isn't economics.[00:03:37] And everyone in the computer science department was actually way more interested in it. Like I, I hung out there more than in the econ department and you know, didn't get a single academic offer. Had two offers. I think I only applied to like two industry jobs and got offers from both of them.[00:03:53] They, they saw value in it.
One of them was BlackRock and turned it down to, to do my own startup, and then went crawling back two and a half years later after the startup failed.[00:04:02] swyx: Something on your LinkedIn was like you're trading Chinese news tickers or something. Oh, yeah. I forget,[00:04:07] Reza Shabani: forget what that was.[00:04:08] Yeah, I mean oh. There, there was so much stuff. Honestly, like, so systematic active equity at, at BlackRock is, was such an amazing. Group and you just end up learning so much and the, and the possibilities there. Like when you, when you go in and you learn the types of things that they've been trading on for years you know, like a paper will come out in academia and they're like, did you know you can use like this data on searches to predict the price of cars?[00:04:33] And it's like, you go in and they've been trading on that for like eight years. Yeah. So they're, they're really ahead of the curve on, on all of that stuff. And the really interesting stuff that I, that I found when I went in was all like, related to NLP and ml a lot of like transcript data, a lot of like parsing through the types of things that companies talk about, whether an analyst reports, conference calls, earnings reports and the devil's really in the details about like how you make sense of, of that information in a way that, you know, gives you insight into what the company's doing and, and where the market is, is going.[00:05:08] I don't know if we can like nerd out on specific strategies. Yes. Let's go, let's go. What, so one of my favorite strategies that, because it never, I don't think we ended up trading on it, so I can probably talk about it. And it, it just kind of shows like the kind of work that you do around this data.[00:05:23] It was called emerging technologies. And so the whole idea is that there's always a new set of emerging technologies coming onto the market and the companies that are ahead of that curve and stay up to date on on the latest trends are gonna outperform their, their competitors.[00:05:38] And that's gonna reflect in the, in the stock price. So when you have a theory like that, how do you actually turn that into a trading strategy? So what we ended up doing is, well first you have to, to determine what are the emergent technologies, like what are the new up and coming technologies.[00:05:56] And so we actually went and pulled data on startups. And so there's like startups in Silicon Valley. You have all these descriptions of what they do, and you get that, that corpus of like when startups were getting funding. And then you can run non-negative matrix factorization on it and create these clusters of like what the various Emerging technologies are, and you have this all the way going back and you have like social media back in like 2008 when Facebook was, was blowing up.[00:06:21] And and you have things like mobile and digital advertising and and a lot of things actually outside of Silicon Valley. They, you know, like shale and oil cracking. Yeah. Like new technologies in, in all these different types of industries. And then and then you go and you look like, which publicly traded companies are actually talking about these things and and have exposure to these things.[00:06:42] And those are the companies that end up staying ahead of, of their competitors. And a lot of the the cases that came out of that made a ton of sense. Like when mobile was emerging, you had Walmart Labs. 
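As an aside for readers, the clustering step Reza just described maps directly onto scikit-learn: TF-IDF vectorize the startup descriptions, then run non-negative matrix factorization and read off the top terms per component. This is a toy sketch with made-up descriptions, only meant to show the shape of the approach, not BlackRock's actual implementation:

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-ins for the startup-description corpus described above.
descriptions = [
    "social network for sharing photos with friends",
    "mobile app for on-demand food delivery",
    "mobile payments platform for small businesses",
    "horizontal drilling analytics for shale oil wells",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(descriptions)

# Each NMF component becomes one "emerging technology" cluster.
nmf = NMF(n_components=2, random_state=0)
doc_topic = nmf.fit_transform(X)  # document-to-topic weights

terms = tfidf.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top = [terms[j] for j in component.argsort()[-3:][::-1]]
    print(f"topic {i}: {top}")
```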
Walmart was really far ahead in terms of thinking about mobile and the impact of mobile.[00:06:59] And, and their, you know, Sears wasn't, and Walmart did well, and, and Sears didn't. So lots of different examples of of that, of like a company that talks about a new emerging trend. I can only imagine, like right now, all of the stuff with, with AI, there must be tons of companies talking about, yeah, how does this affect their business?[00:07:18] swyx: And at some point you do, you do lose the signal. Because you get overwhelmed with noise by people slapping AI on everything. Right? Which is, yeah. Yeah. That's when the Long Island Iced Tea Company slapped like blockchain on their name and, you know, their stock price like doubled or something.[00:07:32] Reza Shabani: Yeah, no, that, that's absolutely right.[00:07:35] And, and right now that's definitely the kind of strategy that would not be performing well right now because everyone would be talking about AI. And, and that's, as you know, like that's a lot of what you do in quant is you, you try to weed out other possible explanations for for why this trend might be happening.[00:07:52] And in that particular case, I think we found that, like the companies, it wasn't, it wasn't like Sears and Walmart were both talking about mobile. It's that Walmart went out of their way to talk about mobile as like a future, mm-hmm, trend. Whereas Sears just wouldn't bring it up. And then by the time investors are asking you about it, you're probably late to the game.[00:08:12] So it was really identifying those companies that were at the cutting edge of, of new technologies and, and staying ahead. I remember like Domino's was another big one. Like, I don't know, you remember that?[00:08:21] swyx: So for those who don't know, Domino's Pizza, I think for the run of most of the 2010s was a better performing stock than Amazon.[00:08:29] Yeah.[00:08:31] Reza Shabani: It's insane.[00:08:32] swyx: Yeah. Because of their investment in mobile. Mm-hmm. And, and just online commerce and, and all that. It must have been fun picking that up. Yeah, that's[00:08:40] Reza Shabani: that's interesting. And I, and I think they had, I don't know if you, if you remember, they had like the pizza tracker, which was on, on mobile.[00:08:46] swyx: I use it myself. It's a great, it's a great app.[00:08:50] Reza Shabani: Great app. It's mostly faked, I think, that's what I heard. I think it's gonna be like a, a huge, I don't know. I'm waiting for like the New York Times article to drop that shows that the whole thing was fake. We all thought our pizzas were at those stages, but they weren't.[00:09:01] swyx: The, the challenge for me, so that so there's a, there's a great piece by Eric Falkenstein called Batesian Mimicry, where every signal essentially gets overwhelmed by noise because the people who create noise want to follow the, the signal makers. So that actually is why I left quant trading, because there's just too much regime changing and like things that would backtest very well would test poorly out of sample.[00:09:25] And I'm sure you've like, had a little bit of that. And then there's what was the core uncertainty of like, okay, I have identified a factor that performs really well, but that's one factor out of 500 other factors that could be going on. You have no idea. So anyway, that, that was my existential uncertainty plus the fact that it was a very highly stressful job.[00:09:43] Reza Shabani: Yeah.
This is a bit of a tangent, but I, I think about this all the time and I used to have a, a great answer before ChatGPT came out, but do you think that AI will win at quant ever?[00:09:54] swyx: I mean, what is RenTech doing? Whatever they're doing is working apparently. Yeah. But for, for most mortals, like just waving your wand and saying AI doesn't make sense when your sample size is actually fairly low.[00:10:08] Yeah. Like we have maybe 40 years of financial history, if you're lucky. Mm-hmm. Times what, 4,000 listed equities. It's actually not a lot. Yeah, no, it's,[00:10:17] Reza Shabani: it's not a lot at all. And, and constantly changing market conditions and latent variables and, and all of, all of that as well. Yeah. And then[00:10:24] swyx: retroactively you're like, oh, okay.[00:10:26] Someone will discover a giant factor that, that like explains retroactively everything that you've been doing that you thought was alpha, that you're like, nope, actually you're just exposed to another factor that you just didn't think about. Everything was momentum.[00:10:37] Yeah. And one piece that I really liked was Andrew Lo, from MIT I think. He had a paper on bid-ask spreads. And I think if you just took into account the liquidity of markets, that would account for a lot of active trading strategies' alpha. And that systematically declined as interest rates declined.[00:10:56] And I mean, it was, it was just like after I looked at that, I was like, okay, I'm never gonna get this right.[00:11:01] Reza Shabani: Yeah. It's a, it's a crazy field and I you know, I, I always thought of like the, the adversarial aspect of it as being the, the part that AI would always have a pretty difficult time tackling.[00:11:13] Yeah. Just because, you know, there's, there's someone on the other end trying to out, out game you and, and AI can, can fail in a lot of those situations. Yeah.[00:11:23] swyx: Cool.[00:11:23] From Data to AI at Replit[00:11:23] Alessio Fanelli: Awesome. And now you've been at Replit almost two years. What do you do there? Like what does the, the team do? Like, how has that evolved since you joined?[00:11:32] Especially since large language models are now top of mind, but, you know, two years ago it wasn't quite as mainstream. So how, how has that evolved?[00:11:40] Reza Shabani: Yeah, I, so when I joined, I joined a year and a half ago. We actually had to build out a lot of, of data pipelines.[00:11:45] And so I started doing a lot of data work. And we didn't have, you know, there, there were like databases for production systems and, and whatnot, but we just didn't have the the infrastructure to query data at scale and to process that, that data at scale, and Replit has tons of users, tons of data, just tons of repls.[00:12:04] And I can get into, into some of those numbers, but like, if you wanted to answer the question, for example, of what is the most forked repl on Replit, you couldn't answer that back then because it, the query would just completely time out. And so a lot of the work originally just went into building data infrastructure, like modernizing the data infrastructure in a way where you can answer questions like that, where you can you know, pull in data from any particular repl to process, to make available for search.[00:12:34] And, and moving all of that data into a format where you can do all of this in minutes as opposed to, you know, days or weeks or months.
That laid a lot of the groundwork for building anything in, in AI, at least in terms of training our own models and then fine-tuning them with, with Replit data.[00:12:50] So then you know, we, we started a team last year, recruited people from, you know, from a team of zero or a team of one to, to the AI and data team today. We, we build everything related to, to Ghostwriter. So that means the various features like explain code, generate code, transform code, and Ghostwriter Chat, which is like an in-context, or a chat product within the, in the IDE.[00:13:18] And then the code completion models, which are Ghostwriter Code Complete, which was the, the very first version of, of Ghostwriter. Yeah. And we also support, you know, things like search and, and anything in terms of what creates, or anything that requires like large data scale or large scale processing of, of data for the site.[00:13:38] And, and various types of like ML algorithms for the site, for internal use of the site, to do things like detect and stop abuse. Mm-hmm.[00:13:47] Alessio Fanelli: Yep. Sounds like a lot of the early stuff you worked on was more analytical, kind of like analyzing data, getting answers on these things. Obviously this has evolved now into some production use cases, code LLMs. How is the team, and maybe like some of the skills, changed? I know there's a lot of people wondering, oh, I was like a modern data stack expert, or whatever. It's like I was doing feature development, like, how's my job gonna change? Like,[00:14:12] Reza Shabani: yeah. It's a good question. I mean, I think that with, with language models, the shift has kind of been from, or from traditional ML, a lot of the shift has gone towards more like NLP-backed ML, I guess.[00:14:26] And so, you know, there, there's an entire skill set of applicants that I no longer see, at least for, for this role, which are like people who know how to do time series and, and ML across time. Right. And, and you, yeah. Like you, you know, that exact feeling of how difficult it is to... You know, you have like some, some text or some variable and then all of a sudden you wanna track that over time.[00:14:50] The number of dimensions that it, that it introduces is just wild and it's a totally different skill set than what we do in, for example, in language models. And it's very, it's a, it's a skill that is kind of, you know, at, at least at Replit not used much. And I'm sure in other places used a lot, but a lot of the, the kind of excitement about language models has pulled away attention from some of these other ML areas, which are extremely important and, and I think still going to be valuable.[00:15:21] So I would just recommend like anyone who is a, a data stack expert, like of course it's cool to work with NLP and text data and whatnot, but I do think at some point it's going to, you know, having, having skills outside of that area and in more traditional aspects of ML will, will certainly be valuable as well.[00:15:39] swyx: Yeah. I, I'd like to spend a little bit of time on this data stack notion pitch. You were even, you were effectively the first data hire at Replit. And I just spent the past year myself diving into the data ecosystem. I think a lot of software engineers are actually completely unaware that basically every company eventually evolves a data team,[00:15:57] and the data team does everything that you just mentioned. Yeah.
All of us do exactly the same things, set up the same pipelines you know, shop at the same warehouses essentially. Yeah, yeah, yeah, yeah. So that they enable everyone else to query whatever they, whatever they want. And to, to find those insights that that can drive their business.[00:16:15] Because everyone wants to be data driven. They don't want to do the janitorial work that it comes, that comes to, yeah. Yeah. Hooking everything up. What, like, so Replit is, what do you think, like 90-ish people now, and then you, you joined two years ago. Was it like 30-ish people? Yeah, exactly. We're 30 people where I joined.[00:16:30] So and I just wanna establish, for founders: that is exactly when we hired our first data hire at Netlify as well. I think this is just a very common pattern that most founders should be aware of, that like, you start to build a data discipline at this point. And it's, and by the way, a lot of ex finance people very good at this because that's what we do at our finance job.[00:16:48] Reza Shabani: Yeah. Yeah. I was, I was actually gonna say that, is that in, in some ways, you're kind of like the perfect first data hire because it, you know, you know how to build things in a reliable but fast way and, and how to build them in a way that, you know, it's, it scales over time and evolves over time, because financial markets move so quickly that if you were to take all of your time building up these massive systems, like the trading opportunity's gone.[00:17:14] So, yeah. Yeah, they're very good at it. Cool. Okay. Well,[00:17:18] swyx: I wanted to cover Ghostwriter as a standalone thing first. Okay. Yeah. And then go into code, you know, V1 or whatever you're calling it. Yeah. Okay. Okay. That sounds good. So order it[00:17:26] Replit GhostWriter[00:17:26] Reza Shabani: however you like. Sure. So the original version of, of Ghostwriter we shipped in August of, of last year.[00:17:33] Yeah. And so this was a code completion model similar to GitHub's Copilot. And so, you know, you would have some text and then it would predict like, what, what comes next. And the original version was actually based off of the CodeGen model. And so this was an open source model developed by Salesforce that was trained on, on tons of publicly available code data.[00:17:58] And so then we took their their model, one of the smaller ones, did some distillation, some other kind of fancy tricks, to, to make it much faster and and deployed that. And so the innovation there was really around how to reduce the model footprint to, to a size where we could actually serve it to, to our users.[00:18:20] And so for the original Ghostwriter, you know, we leaned heavily on, on open source. And our, our friends at Salesforce obviously were huge in that, in, in developing these models. And, but, but it was game changing just because we were the first startup to actually put something like that into production.[00:18:38] And, and at the time, you know, if you wanted something like that, there was only one, one name and, and one place in town to, to get it. And and at the same time, I think I, I'm not sure if that's like when the image models were also becoming open sourced for the first time.
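For context on what starting from CodeGen means in practice, here is a minimal completion loop with one of the smaller open Salesforce checkpoints via Hugging Face transformers. This is the stock open-source model only: Replit's exact starting checkpoint isn't specified in the episode, and their production version involved distillation and serving tricks not shown here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"  # one of the smaller CodeGen models
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Left-to-right code completion: give the model a prefix, let it predict the rest.
prefix = "def fibonacci(n):\n"
inputs = tokenizer(prefix, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```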
And so the world went from this place where, you know, there was like literally one company that had all of these, these really advanced models to, oh wait, maybe these things will be everywhere.[00:19:04] And that's exactly what's happened in, in the last year or so, as, as the models get more powerful and then you always kind of see like an open source version come out that someone else can, can build and put into production very quickly at, at, you know, a fraction of, of the cost. So yeah, that was the, the kind of code completion Ghostwriter was, was really just, just that. We wanted to fine-tune it a lot to kind of change the way that our users could interact with it.[00:19:31] So just to make it, you know, more customizable for our use cases on, on Replit. And so people on Replit write a lot of, like, JSX for example, which I don't think was in the original training set for, for CodeGen. And and they do specific things that are more tuned to like HTML, like they might wanna write, right? Like inline style or like inline CSS, basically. Those types of things. And so we experimented with fine-tuning CodeGen a bit here and there, and, and the results just kind of weren't, weren't there, they weren't where, you know, we, we wanted the model to be. And, and then we just figured we should just build our own infrastructure to, you know, train these things from scratch.[00:20:11] Like, LLMs aren't going anywhere. This world's not, you know, it's, it's not like we're going back to that world of there's just one, one game in town. And and we had the skills, infrastructure, and the, and the team to do it. So we just started doing that. And you know, we'll be this week releasing our very first open source code model.[00:20:31] And,[00:20:31] Benchmarking Code LLMs[00:20:31] Alessio Fanelli: and when you say it was not where you wanted it to be, how were you benchmarking it?[00:20:36] Reza Shabani: In that particular case, we were actually, so, so we have really two sets of benchmarks that, that we use. One is HumanEval, so just the standard kind of benchmark for, for Python, where you give, you give the model a function definition with, with some string describing what it's supposed to do, and then you allow it to complete that function, and then you run a unit test against it and and see if what it generated passes the test.[00:21:02] So we, we always kind of, we would run this on the, on the model. The, the funny thing is the fine-tuned versions of, of CodeGen actually did pretty well on, on that benchmark. But then we, we then have something called, instead of HumanEval, we call it AmjadEval, which is basically like, what does Amjad think?[00:21:22] Yeah, it's, it's exactly that. It's like testing the vibes of, of a model. And it's, it's crazy, like I've never seen him, I, I've never seen anyone test the model so thoroughly in such a short amount of time. He's, he's like, he knows exactly what to write and, and how to prompt the model to, to get, you know, a very quick read on, on its quote unquote vibes.[00:21:43] And and we take that like really seriously. And I, I remember there was like one, one time where we trained a model that had really good, you know, HumanEval scores. And the vibes were just terrible. Like, it just wouldn't, you know, it, it seemed overtrained.
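Since HumanEval anchors this whole discussion, here is a stripped-down illustration of how one problem in that benchmark works: the model sees a function definition plus docstring, produces a completion, and the assembled program is run against unit tests. The completion is faked here, and a real harness sandboxes the exec step:

```python
# A HumanEval-style problem: prompt (signature + docstring) and unit tests.
prompt = '''def add(a, b):
    """Return the sum of a and b."""
'''

completion = "    return a + b\n"  # pretend this came from the model

namespace: dict = {}
exec(prompt + completion, namespace)  # real harnesses run this sandboxed

# The pass/fail check behind a pass@1 score:
try:
    assert namespace["add"](2, 3) == 5
    assert namespace["add"](-1, 1) == 0
    print("pass")
except AssertionError:
    print("fail")
```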
So so that's a lot of what we found, is like we, we just couldn't get it to pass the vibes test no matter how good the eval was.[00:22:04] swyx: Well, can you formalize AmjadEval? Because I, I actually have a problem, a slight discomfort, with HumanEval effectively being the only code benchmark that we have. Isn't that weird?[00:22:14] Reza Shabani: It's bizarre. It's, it's, it's weird that we can't do better than that in some, some way. So, okay.[00:22:21] swyx: If I, if I asked you to formalize AmjadEval, what does he look for that HumanEval doesn't do well on?[00:22:25] Reza Shabani: Ah, that is a, that's a great question. A lot of it is kind of, a lot of it is contextual, like deep within, within specific functions. Let me think about this.[00:22:38] swyx: Yeah, we, we can pause, and if you need to pull up something.[00:22:41] Reza Shabani: Yeah, I, let me, let me pull up a few.[00:22:43] swyx: This, this is gold, this is catnip for people. Okay. Because we might actually influence a benchmark being evolved, right. So, yeah. Yeah. That would be,[00:22:50] Reza Shabani: that would be huge. This was, this was his original message when he said it passed the vibes test with, with flying colors. And so you have some, some Ghostwriter comparisons: Ghostwriter on the left, and CodeGen is on the right.[00:23:06] AmjadEval live demo[00:23:06] Reza Shabani: So here's Ghostwriter. Okay.[00:23:09] swyx: So basically, so if I, if I summarize it: for Ghostwriter, there's a, there's a, there's a bunch of comments talking about how you basically implement a clone process, or a cloning process. And it's describing a bunch of possible states that it might want to, to match.[00:23:25] And then it asks for a single line of code defining what the possible values of a namespace might be, to initialize it, in AmjadEval. With what model is this? Is this your... This is the model. This is the one we're releasing. Yeah. Yeah. It actually defines constants which are human readable and nice.[00:23:42] And then in the other, the CodeGen Salesforce model, it just initializes it to zero, because it reads that it starts off as an int. Yeah, exactly. So[00:23:51] Reza Shabani: interesting. Yeah. So you had a much better explanation of, of that than than I did. It's okay. So this is, yeah, handle operation. This is on the left.[00:24:00] Okay.[00:24:00] swyx: So this is Replit's version. Yeah. Where it's implementing a function and an infilling, is that what it's doing, inside of a sum operation?[00:24:07] Reza Shabani: This, so this one doesn't actually do the infill, so that's the completion inside of the, of the sum operation. But it, it's not, it's, it, it's not taking into account context after this value, but[00:24:18] swyx: Right, right.[00:24:19] So it's writing an inline lambda function in Python. Okay.[00:24:21] Reza Shabani: Mm-hmm. Versus[00:24:24] swyx: this one is just passing in the nearest available variable it, it can find, yeah.[00:24:30] Reza Shabani: Okay. So so, okay. I'll, I'll get some really good ones in a, in a second. So, okay. Here's tokenize. So[00:24:37] swyx: this is an assertion on a value, and it's helping to basically complete the entire, I think it looks like an AST that you're writing here.[00:24:46] Mm-hmm. That's good. That that's, that's good. And then what does Salesforce CodeGen do? This is Salesforce CodeGen here. So is that invalid in some way, or what, what are we supposed to do? It's just making up tokens. Oh, okay. Yeah, yeah, yeah. So it's just, it's just much better at context.
Yeah. Okay.[00:25:04] Reza Shabani: And I guess to be fair, we have to show a case where CodeGen does better. Okay. All right. So here's one on the left, right, which...[00:25:12] swyx: is another assertion, where it's just saying that if you pass in a list, it's going to throw an exception saying it unexpectedly got a list. And Salesforce CodeGen says...[00:25:24] Reza Shabani: So Ghostwriter was sure that the first argument needs to be a list here.[00:25:30] swyx: So it hallucinated that it wanted a list, even though you never said it was going to be a list.[00:25:35] Reza Shabani: Yeah, and it's an argument of that. Yeah. Mm-hmm. So, okay, here's a cooler quiz for you all, because I struggled with this one for a second. Okay. What is this?[00:25:47] swyx: Okay, so this is a for-loop example from Amjad. And it's sort of a Q&A context in a chatbot, and Amjad is asking: what does this code log? And he just pastes in some JavaScript code. The JavaScript code is a for loop with a setTimeout inside it, with a console.log that logs out the iteration variable of the for loop, with increasing delays. So it goes from zero to five, and it just increases the delay between the timeouts each time.[00:26:15] Reza Shabani: So, okay. This answer was provided by Bard. Mm-hmm. And does it look correct to you?[00:26:22] Alessio Fanelli: Well, the numbers do, but it's not one second between them; the time between them increases. The first two are one second apart, then it's two seconds, then three seconds.[00:26:32] Reza Shabani: Well, so, you know, when I saw this, the message in the thread was: our model's better than Bard at coding. This is the Bard answer, which looked totally right to me.[00:26:46] swyx: And this is our answer. It logs 5 5 5 5 5. Oh, oh. Because it logs the state of i, which is five by the time the log happens. Mm-hmm. Yeah.[00:27:01] Reza Shabani: Oh God. So we were shocked. The Bard answer looked totally right to me. And then somehow our code completion model, mind you, this is not a conversational chat model, somehow gets this right.[00:27:14] And Bard, obviously a much larger, much more capable model, with all this fancy transfer learning and whatnot, somehow doesn't get it right. So this is the kind of stuff that goes into AmjadEval that you won't find in any benchmark.[00:27:35] And it's the kind of thing that makes something pass a vibe test at Replit.[00:27:42] swyx: Okay. Well, okay, so this is not so much a vibe test as... these are just interview questions. We're straight up just asking interview questions right now.
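It's worth unpacking that setTimeout quiz: with `var`, every callback closes over the same loop variable, so by the time the timeouts fire the loop has already finished and each log prints the final value. Python has the same late-binding gotcha, shown in this small sketch (an analog of the JavaScript question, not the exact code from the episode):

```python
# Late-binding closures: every lambda closes over the same `i`, so all of
# them see its final value, just like the `var` + setTimeout quiz above.
callbacks = [lambda: print(i) for i in range(5)]
for cb in callbacks:
    cb()  # prints 4 4 4 4 4, not 0 1 2 3 4

# Binding the value at definition time (the moral equivalent of `let` in
# JavaScript) restores the expected behavior:
callbacks = [lambda i=i: print(i) for i in range(5)]
for cb in callbacks:
    cb()  # prints 0 1 2 3 4
```

In the JavaScript version the loop counter has already reached 5 when the loop condition fails, which is why the model's answer of 5 5 5 5 5 is correct.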
Reza Shabani: Yeah, no, the vibe test... the reason it's really difficult to show screenshots that capture a vibe test is that it really depends on how snappy the completion is, what the latency feels like, and whether it feels like it's making you more productive.[00:28:08] A lot of the time, the mix of really low latency and actually helpful completions is what makes up the vibe test. And I think part of it is also: is it returning things to you, or sparing you things, that may look right but be completely wrong? I think that also affects the vibe test. So yeah, this is very much an interview question.[00:28:39] swyx: The one with the number of processes, that was definitely a vibe test. Like, what kind of code style do you expect in this situation? Is this another example?[00:28:49] Reza Shabani: Yeah, this is another example, with some more explanations.[00:28:53] swyx: Should we look at the Bard one first?[00:28:54] Reza Shabani: Sure. This is original GPT-3, full size, 175 billion parameters.[00:29:03] swyx: Okay, so you asked GPT-3: I'm a highly intelligent question answering bot. If you ask me a question that is rooted in truth, I'll give you the answer. If you ask me a question that is nonsense, I will respond with unknown. And then you ask it: what is the square root of a banana? It answers nine. So, complete hallucination, and it failed to follow the instruction you gave it. I wonder if an instruction-tuned version might do better.[00:29:28] Reza Shabani: On the original GPT-3?[00:29:29] swyx: Yeah, because you're giving it instructions and it's not instruction-tuned.[00:29:33] Reza Shabani: Now, the interesting thing is that our model here, which does follow the instructions, is not instruction-tuned yet, and we are still planning to instruction-tune it.[00:29:45] swyx: So this is the Replit model. Same question: what is the square root of a banana? And it answers unknown. And this is one of the things Amjad was talking about, which you are finding as a discovery: it's better on pure natural language questions, even though you trained it on code. Hmm. Is that because there's a lot of comments in the data?[00:30:07] Reza Shabani: No. I mean, I think part of it is that there are a lot of comments, and there's also a lot of natural language in code, in terms of documentation: you have a lot of Markdown and reStructuredText, and there's also just a lot of web-based code on Replit, and HTML tends to have a lot of natural language in it.[00:30:27] But I don't think the comments from code alone would help it reason in this way, where it can answer questions based on instructions, for example. But yeah, I know that's one of the things
that really shocked us: the fact that it's really good at natural language reasoning, even though it was trained on code.[00:30:49] swyx: Was this the reason you started running your model on HellaSwag and all the other benchmarks?[00:30:53] Reza Shabani: Yeah, exactly. And, you know, in some ways it kind of makes sense. A lot of code involves a lot of reasoning and logic, which language models need to develop.[00:31:09] And so we have this hunch that maybe using code as part of the training beforehand, and then training on natural language above and beyond that, really tends to help.[00:31:21] Aligning Models on Vibes[00:31:21] Alessio Fanelli: This is so interesting. I'm trying to think: how do you align a model on vibes? You know, Bard is not purposefully being bad, right? There's obviously something, either in the training data or in how you're running the process, that makes the vibes better. When it fails this test, how do you go back to the team and say, hey, we need to get better vibes?[00:31:44] Reza Shabani: Yeah. It's a great question, and it's very difficult to do. So much of what goes into these models is opaque, in the same way that we have no idea how we got that programming quiz question right where Bard got it wrong. We also have no idea how to take certain things out, or to remove certain aspects of vibes.[00:32:13] Of course there are things you can do to scrub the model, but it's very difficult to get it to be better at something specific. It's almost like all you can do is give it the right kind of data that you think will do well, and then, of course, later do some fancy instruction tuning or whatever else.[00:32:33] But a lot of what we do is finding the right mix of optimal data to feed into the model, and then hoping that the data that's fed in is sufficiently representative of the type of generations we want coming out. That's really the best you can do. Either the model has vibes or it doesn't; you can't teach vibes. You can't sprinkle additional vibes into it. Yeah, same in real life. Yeah, exactly.[00:33:04] Beyond Code Completion[00:33:04] Alessio Fanelli: You mentioned Copilot being the only show in town when you started. Now there are obviously a bunch of them: Cody, which we had on the podcast, TabNine, Kite, all these different things. Do you think the vibes are going to be the main way to differentiate them? How are you thinking about what's going to make Ghostwriter stand apart? Or do you just expect this to be table stakes for any tool, so it's just going to be there?[00:33:30] Reza Shabani: Yeah, I do think it's going to be table stakes, for sure. I think that if you don't have AI-assisted technology, especially in coding, it's just going to feel pretty antiquated.
But I do think that Ghostwriter stands apart from some of these other tools for specific reasons too.[00:33:51] So one of the things these models haven't really done yet is come outside of code completion, outside of a single editor file. What they're doing is predicting the text that can come next, but they're not helping with the development process quite yet, beyond completing code in a text file.[00:34:16] And so the types of things we want to do with Ghostwriter are to enable it to help in the software development process, not just in editing particular files. That means using the right mix of models for the task at hand. But we want Ghostwriter to be able to create scaffolding for you for these projects.[00:34:38] So imagine something like Terraform, but powered by Ghostwriter, right? I put up this website, I'm starting to get a ton of traffic to it, and maybe I need to create a backend database. We want that to come from Ghostwriter as well, so it can actually look at your traffic, look at your code, and create a schema for you that you can then deploy in Postgres, or whatever else. And, you know, doing anything in cloud can be a nightmare as well. Like if you want to create a new service account, and you want to deploy, you know, nodes, and have that service account talk to those nodes and return some other information: those are the types of things where currently we have to go look at some documentation for Google Cloud, go look at how our code base does it, ask around in Slack, kind of figure that out, and create a pull request.[00:35:31] Those are the types of things we think we can automate away with more advanced uses of Ghostwriter, once we go past "here's what would come next in this file." So that's the real promise of it: the ability to help you generate software, instead of just code in a particular file.[00:35:50] Ghostwriter Autonomous Agent[00:35:50] Alessio Fanelli: Are you giving REPL access to the model? Not Replit, the actual REPL. Once the model generates some of this code, especially when it's in the background, not the completion use case, it can actually run the code to see if it works. There's a cool open source project called Wolverine that does something like that. It's like self-healing software: it gives the model REPL access and keeps running until the code fixes itself.[00:36:11] Reza Shabani: Yeah. So right now there's Ghostwriter Chat and Ghostwriter code completion. Ghostwriter Chat does have that advantage, in that it knows all the different parts of the IDE, so for example, if an error is thrown, it can look at the traceback and suggest a fix for you.[00:36:33] So it has that type of integration. But what we really want to do is merge the two, in a way where we want Ghostwriter to be like an autonomous agent that can actually drive the IDE.
So with these action models, you know, where you have a sequence of events, and you can use transformers to keep track of that sequence and predict the next event: that's how companies like Adept work, with these browser models that can go and scroll through different websites or take some series of actions in a sequence. Well, it turns out the IDE is actually a perfect place to do that, right? So when we talk about creating software, not just completing code in a file: what do you do when you build software?[00:37:17] You might clone a repo, and then you go and change some things. You might add a new file, go down, highlight some text, delete a value and point it to a new database, depending on the value in a different config file or in your environment. And then you would go in and add an additional block of code to extend its functionality, and then you might deploy that.[00:37:40] Well, we have all of that data right there in the Replit IDE. And we have terabytes and terabytes of OT data, you know, operational transform data. And so we can see that this person created a file, what they called it, and that they start typing in the file. They go back and edit a different file to match the class name they just put in the original file. All of that kind of sequence data is what we're looking to train our next model on.[00:37:58] That entire process of actually building software within the IDE: not just "here's some text, what comes next," but rather the actions that go into creating a fully developed program. And a lot of that includes, for example, running the code and seeing: does this work, does this do what I expected? Does it error out? And then what do you do in response to that error?[00:38:25] So all of that is insanely valuable information that we want to put into our next model. And we think that one can be way more advanced than this Ghostwriter code completion model.[00:38:47] Releasing Replit-code-v1-3b[00:38:47] swyx: Cool. Well, we wanted to dive in a little bit more on the model that you're releasing. Maybe we can just give people a high level of what is being released, what you have decided to open source and maybe why, and the story of the YOLO project. Yeah, it's a cool story; just tell it from the start.[00:39:06] Reza Shabani: So what's being released is the first version that we're going to release: a code model called replit-code-v1-3b. This is a relatively small model. It's 2.7 billion parameters, and it's the first LLaMA-style model for code, meaning it's just seen tons and tons of tokens. It's been trained on 525 billion tokens of code, all permissively licensed code, and it's three epochs over the training set.[00:39:48] And, you know, all of that in a 2.7 billion parameter model. In addition to that, for this model we trained our very own vocabulary as well. So this doesn't use the CodeGen vocab. For the tokenizer, we trained a totally new tokenizer on the underlying data from scratch, and we'll be open sourcing that as well. It has something like 32,000 tokens.
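As a rough illustration of training a code-specific vocabulary like the one described here, the Hugging Face `tokenizers` library can train a byte-level BPE tokenizer from scratch on a corpus of source files. This is a generic sketch, not Replit's actual pipeline: the 32,000 vocabulary size matches the figure above, but the file paths and special tokens are placeholders.

```python
# Sketch: train a ~32k BPE vocabulary on raw code files from scratch.
# Generic example using the `tokenizers` library; not Replit's pipeline.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()  # robust to any byte sequence

trainer = trainers.BpeTrainer(
    vocab_size=32_000,                           # code-tuned vocab vs ~50k for GPT-2
    special_tokens=["<unk>", "<pad>", "<eos>"],  # placeholder special tokens
)

# `files` would be a large list of permissively licensed source files.
files = ["corpus/part-0000.py", "corpus/part-0001.js"]
tokenizer.train(files, trainer)
tokenizer.save("code-tokenizer.json")
```

A smaller vocabulary trained on code packs more source text into fewer tokens, which is exactly the inference and training speedup described here.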
The vocabulary size is in the 32 thousands, as opposed to the 50 thousands: much more specific for code.[00:40:08] And so it's smaller and faster. That helps with inference, it helps with training, and it can produce more relevant content, just because the vocab is very much trained on code as opposed to natural language. So yeah, we'll be releasing that.[00:40:29] This week it'll be up on Hugging Face, so people can take it, play with it, fine-tune it, do all types of things with it. We're eager and excited to see what people do with the code completion model. It's small, it's very fast, we think it has great vibes, and we hope other people feel the same way.[00:40:49] And then after that, we might consider releasing the Replit fine-tuned model at some point as well, but we're still doing some more work around that.[00:40:58] swyx: Right. So there are actually two models: replit-code-v1-3b and replit-finetuned-v1-3b. And the fine-tuned one is the one that has the 50% improvement in common sense benchmarks, which is going from 20% to 30%.[00:41:13] Reza Shabani: Yes, exactly. And so the additional tuning done on that one was on the publicly available data on Replit. That's data that's in public Repls and is permissively licensed. Fine-tuning on that then leads to a surprisingly, significantly better model, which is this replit-finetuned-v1-3b: same size, same very fast inference, same vocabulary and everything. The only difference is that it's been trained on additional Replit data.[00:41:50] swyx: Yeah. And I'll call out that in one of the follow-up Q&As, Amjad mentioned people had some concerns with using Replit data. I mean, the licensing is fine; it's more about the data quality, because there's a lot of beginner code, and a lot of maybe wrong code. But apparently it just wasn't an issue at all. You did some filtering.[00:42:08] Reza Shabani: Yeah, I mean, we did some filtering, but when you're talking about data at that scale, it's impossible to keep everything out; it's impossible to find only the select pieces of data that you want the model to see.[00:42:24] And so a lot of that kind of people-learning-to-code material was in there anyway. We obviously did some quality filtering, but a lot of it went into the fine-tuning process, and it really helped, for some reason. There's a lot of high quality code on Replit, but there's also, like you said, a lot of beginner code.[00:42:46] And that was the really surprising thing: that somehow it really improved the model and its reasoning capabilities. It felt much more instruction-tuned afterward. And we have our suspicions as to why: there are a lot of assignments on Replit that explain "this is how you do something," and then you might have answers and whatnot. There are a lot of people who learn to code on Replit, right? And think of a beginner coder; think of a code model that's learning to code, learning this reasoning and logic.
It's probably a lot more valuable to see the type of stuff you find on Replit, as opposed to a large legacy code base that is difficult to parse and figure out.[00:43:29] So that was very surprising to see: just such a huge jump in reasoning ability once trained on Replit data.[00:43:38] The YOLO training run[00:43:38] swyx: Yeah. Perfect. So we're going to do a little bit of storytelling, leading up to the developer day that you had last week. My understanding is you raised some money, you decided to have a developer day, you had a bunch of announcements queued up, and then you were like, let's train the language model. You published a blog post, and then you announced it on Developer Day. And you called it the YOLO run, right? So just take us through the sequence of events.[00:44:01] Reza Shabani: So we had been building the infrastructure to be able to train our own models for months now. That involves laying out the infrastructure, being able to pull in the data and process it at scale, being able to do things like train your own tokenizers. And even before this, we had to build out a lot of this data infrastructure for powering things like search.[00:44:24] There are, I think the public number is, 230 million Repls on Replit. And each of these Repls has many different files, and lots of code, lots of content. And so you can imagine what it must be like to query that amount of data in a reasonable amount of time.[00:44:45] So we spent a lot of time just building the infrastructure that allows us to do something like that, and really optimizing it. And by the end of last year, that was the case. I think I did a demo where I showed you can go through all of Replit's data and parse the function signature of every Python function in under two minutes. And there are many, many of them.
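For a sense of what that kind of corpus-wide parsing involves, here is a toy version of the per-file step, using Python's standard-library `ast` module to pull function signatures out of source text. The real pipeline obviously runs distributed and at a vastly larger scale; this only shows the core extraction.

```python
# Toy version of corpus-wide signature extraction with the stdlib ast module.
import ast

def function_signatures(source: str) -> list[str]:
    """Return 'name(arg, ...)' for every function defined in `source`."""
    sigs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            sigs.append(f"{node.name}({args})")
    return sigs

print(function_signatures("def mean(xs):\n    return sum(xs) / len(xs)\n"))
# ['mean(xs)']
```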
And then leading up to Developer Day, we had set up these pipelines. We'd started training these models, deploying them into production, iterating, getting that model-training-to-production loop going.[00:45:24] But we'd only really done 1.3 billion parameter models, all JavaScript or all Python. So there were still some things we couldn't figure out, like the most optimal way to do it. Things like: how do you pad, how do you prefix chunks when you have multi-language models, what's the optimal way to do it, and so on.[00:45:46] So, you know, there are two PhDs on the team, myself and Michele, and PhDs tend to be careful, to take a systematic approach and whatnot. And so we had this whole list of things we were going to do, like, oh, we'll test it on this thing, and so on. And even these 1.3 billion parameter models were only trained on maybe 20 billion or 30 billion tokens.[00:46:10] And then Amjad joins the call, and he's like, no, let's just yolo this. Let's just, you know, we're raising money; we should have a better code model. Let's yolo it. Let's run it on all the data. How many tokens do we have? And, you know, both Michele and I, I looked at him during the call, and we were both like, oh God, are we really just going to do this?[00:46:34] swyx: Well, what's the hangup? I mean, you know that large models work.[00:46:37] Reza Shabani: You know that they work, but you also don't know whether or not you can improve the process in important ways by doing more data work, scrubbing additional content. And also, it's expensive. It can cost quite a bit, and if you do it incorrectly, you can actually get it wrong.[00:47:02] swyx: It's like you hit the go button once and you sit back for three days.[00:47:05] Reza Shabani: Exactly. Well, more like two days. Yeah, in our case, two days if you're running 256 A100s. And then when that comes back, you have to take some time to test it. And if it fails and you can't really figure out why, it's just a time-consuming process, and you just don't know what's going to come out of it. But no, Amjad was like, no, let's just train it on all the data. How many tokens do we have? We tell him, and he's like, that's not enough. Where can we get more tokens? And so Michele had this great idea to train it on multiple epochs.[00:47:45] swyx: Resampling the same data again.[00:47:47] Reza Shabani: Yeah, which is known to be risky, or tends to overfit. You can overfit. But he pointed us to some evidence that actually maybe this isn't really going to be a problem, and he was very persuasive about it. So it was risky, and we did that training, and it turned out to actually be great for that base model. And so then we decided: let's keep pushing. We have 256 GPUs running; let's see what else we can do with them.[00:48:20] So we ran a couple of other runs. We ran the fine-tuned version, as I said, and that's where it becomes really valuable to have had that entire pipeline built out, because then we can pull all the right data, de-dupe it, and go through the entire processing stack that we had built over months. We did that in a matter of two days for the Replit data as well: removed any PII, any personal information, removed harmful content, removed any of that stuff. And we just put it back through that same pipeline and then trained on top of that.[00:48:59] And so I believe the Replit-tuned model has seen something like 680 billion tokens. And in terms of code, I mean, that's a universe of code. There really isn't that much more out there. And it gave us really, really promising results. And then we also did a UL2 run, which allows for fill-in-the-middle capabilities, and we'll be working to deploy that on Replit and test it out soon as well.[00:49:29] But it was really just one of those cases where, leading up to Developer Day, had we done this in the more careful, systematic way, it would have taken probably two or three months. Instead we did it in a week. That was fun. It was a lot of fun. Yeah.
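Fill-in-the-middle training of the kind the UL2 run above enables is commonly implemented by rearranging each document so the model learns to generate a missing middle span from its surrounding context. The sketch below shows the generic transformation only; the sentinel token names are placeholders, since conventions differ between papers and models, and this is not Replit's implementation.

```python
# Sketch of fill-in-the-middle (FIM) data formatting: split a document into
# prefix/middle/suffix, then reorder it so the middle comes last and the
# model sees both sides of the hole. Sentinel names are placeholders.
import random

FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def to_fim(document: str, rng: random.Random) -> str:
    i, j = sorted(rng.sample(range(len(document) + 1), 2))  # two cut points
    prefix, middle, suffix = document[:i], document[i:j], document[j:]
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"

rng = random.Random(0)
print(to_fim("def add(a, b):\n    return a + b\n", rng))
```

At inference time the model is prompted with the prefix and suffix and asked to produce the middle, which is what powers in-editor infilling.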
[00:49:49] Scaling Laws: from Kaplan to Chinchilla to LLaMA[00:49:49] Alessio Fanelli: And so, every time I've seen the Stability releases, none of these models fit the Chinchilla "law," in quotes, which is supposed to be, you know, 20 tokens per parameter. Was this part of the YOLO run? Like, you're just like, let's just throw the tokens at it, it doesn't matter what's most efficient? Or do you think there's something about some of these scaling laws where, yeah, maybe it's good in theory, but I'd rather not risk it and just throw the tokens I have at it?[00:50:18] Reza Shabani: Yeah, I think it's hard to tell, just because, like I said, these runs are expensive. And if you think about how often these runs have been done, the number of models out there that have then been thoroughly tested in some forum, and I don't mean just HumanEval, but actually in front of actual users, for actual inference, as part of a real product that people are using: I mean, it's not that many.[00:50:45] And so it's not like there are really well-established rules as to whether or not something like that could lead to crazy amounts of overfitting or not. You just kind of have to use some intuition around it. And what we found is that our results seem to imply that we've really been undertraining these models.[00:51:06] Alessio Fanelli: Oh my god.[00:51:06] Reza Shabani: And so all the compute we threw at this, and the number of tokens, really seems to help and really seems to improve the model. And I think these things kind of happen in the literature, where everyone converges to something and takes it for a fact. Chinchilla is a great example: okay, you know, 20 tokens per parameter. But then someone else comes along, tries it out, and sees that actually something else seems to work better. And from our results, it seems to imply that maybe even LLaMA may be undertrained. And it may be better to train on even more tokens.[00:51:52] swyx: And for the listener: the original scaling law was Kaplan, which is about 1.7 tokens per parameter. Then Chinchilla established 20. And now LLaMA-style seems to mean a 200x tokens-to-parameters ratio. So obviously you should go to 2000x, right?[00:52:08] Reza Shabani: I mean, we're kind of out of code at that point; there is a real shortage of it. But I know there are people working on, I don't know if it's quite 2000x, but it's getting close, on language models. And so our friends at MosaicML are working on some of these really, really big models that are, you know, language models, because with just code you end up running out of content.[00:52:31] So Jonathan at Mosaic, Jonathan and Naveen both, have really interesting content on Twitter about that. And I just highly recommend following Jonathan.
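For concreteness, here is the arithmetic behind those three rules of thumb, applied to the 2.7B-parameter model discussed earlier. A quick hedged sketch: the ratios are the rough figures named in the conversation, not exact prescriptions from the papers.

```python
# Token budgets implied by three scaling-law rules of thumb for a 2.7B model.
params = 2.7e9

ratios = {
    "Kaplan (~1.7 tokens/param)":      1.7,
    "Chinchilla (~20 tokens/param)":   20,
    "LLaMA-style (~200 tokens/param)": 200,
}

for name, r in ratios.items():
    print(f"{name:34s} -> {params * r / 1e9:7.1f}B tokens")
# Kaplan:     ~4.6B tokens; Chinchilla: ~54B; LLaMA-style: ~540B

# replit-code-v1-3b saw 525B tokens (three epochs over the training set):
print(f"actual ratio: {525e9 / params:.0f} tokens/param")  # ~194
```

By this yardstick, the 525-billion-token YOLO run lands squarely in LLaMA territory, roughly ten times past the Chinchilla point.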
Yeah.[00:52:43] MosaicML[00:52:43] swyx: I'm sure you do. Well, can we talk about Mosaic? I was sitting next to Naveen, and I'm sure he's very, very happy that you guys had such success with Mosaic.[00:52:50] Maybe could you shout out what Mosaic did to help you out, what they do well, and what maybe people don't appreciate about having a trusted infrastructure provider versus a commodity GPU provider?[00:53:01] Reza Shabani: Yeah, so I talked about this a little bit in the blog post, in terms of what advantages Mosaic offers. And keep in mind, we had deployed our own training infrastructure before this, so we had some experience with it; it wasn't like we had just tried Mosaic cold.[00:53:15] One thing is you can actually get GPUs from different providers, and you don't need to be signed up for that cloud provider. So it kind of detaches your GPU offering from the rest of your cloud, because most of our cloud runs in GCP, but this allowed us to leverage GPUs from other providers as well.[00:53:34] And then another thing is training infrastructure as a service. These GPUs burn out, you have node failures, you have all kinds of hardware issues that come up. And so the ability to not have to deal with that, and to let Mosaic and their team provide that type of fault tolerance, was huge for us.[00:53:59] As well as a lot of their preconfigured LLM configurations for these runs. They have a lot of experience in training these models, and so they have the right kind of preconfigured setups for various models, which make sure that you have the right learning rates and the right training parameters, and that you're making the best use of the GPU and the underlying hardware.[00:54:26] And so your GPU utilization is always at optimal levels. You have fewer loss spikes than you would otherwise, and if you do get them, you can recover from them. And you're really getting the most value out of the compute you're throwing at your data. We found that to be incredibly helpful.[00:54:44] And so, of the time we spent running things on Mosaic, very little of it was spent trying to figure out why the GPU isn't being utilized, or why it keeps crashing, or why we have CUDA out-of-memory errors, or something like that. All of those things that make training a nightmare are really well handled by Mosaic, the Composer framework, and that ecosystem.[00:55:12] swyx: Yeah. I was going to ask: since you're on GCP, were you tempted to rewrite things for the TPUs? Because Google's always saying they're more efficient and faster, whatever, but no one has experience with them.[00:55:23] Reza Shabani: That's kind of the problem: no one's building on them, right? We want to build on the systems that everyone else is building for. And with the TPUs, it's not easy to do that.[00:55:36] Replit's Plans for the Future (and Hiring!)[00:55:36] swyx: So, plans for the future: hard problems that you want to solve? And maybe, what kind of people are you hiring onto your team?[00:55:44] Reza Shabani: Yeah. So we're currently hiring for two different roles on my team,[00:55:49] although we welcome applications from anyone who thinks they can contribute in this area.
Replit tends to be a band of misfits, and the type of people we work with and have on our team are just the perfect mix to do amazing projects like this with very, very few people.[00:56:09] Right now we're hiring for the Applied AI/ML Engineer role. And so, you know, this is someone who's creating data pipelines, processing the data at scale, creating training runs, and training models, and you

The Nonlinear Library
LW - Consider applying to a 2-week alignment project with former GitHub CEO by jacobjacob

The Nonlinear Library

Play Episode Listen Later Apr 4, 2023 1:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider applying to a 2-week alignment project with former GitHub CEO, published by jacobjacob on April 4, 2023 on LessWrong. Signal-boosting this tweet from Nat Friedman, former GitHub CEO, who says: I'm seeking a researcher/journalist with an interest in AI/rationalism/xrisk for a 1-2 week project. To apply please email job@nat.org with evidence that you (a) read scifi and (b) understand this tweet. The linked tweet says: What's your response to Orthogonality Thesis + Instrumental Convergence + AI arms race + likely Mesa Optimization + rapid recent progress in AI via ML = we are likely doomed? I have a PhD in econ so I do understand risk. I don't know much about this, nor do I have any affiliation with it. But it seems pretty cool to me, and at least worth considering for people in the target reference class! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Muy al Día
A €250,000 prize for deciphering a manuscript: do you dare?

Muy al Día

Play Episode Listen Later Mar 28, 2023 3:41


Former GitHub CEO Nat Friedman and a group of scientists have decided to offer a prize of $250,000 (about €233,000 at current exchange rates) to the person or team of professionals who successfully use today's technology (be it artificial intelligence or any other innovative tool) to decipher the recovered scrolls. After demonstrating that an artificial intelligence program can extract letters and symbols from high-resolution X-ray images of the fragile unrolled documents, the contest, dubbed the "Vesuvius Challenge" (Desafío Vesubio), encourages anyone who wishes to try to decode the manuscripts. Direction, narration and production: Iván Patxi Gómez Gallego. Podcast advertising contact: podcast@zinetmedia.es

The Lunar Society
Nat Friedman - Reading Ancient Scrolls, Open Source, & AI

The Lunar Society

Play Episode Listen Later Mar 22, 2023 98:23


It is said that the two greatest problems of history are: how to account for the rise of Rome, and how to account for her fall. If so, then the volcanic ashes spewed by Mount Vesuvius in 79 AD, which entombed the cities of Pompeii and Herculaneum in southern Italy, hold history's greatest prize. For beneath those ashes lies the only salvageable library from the classical world. Nat Friedman was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies: Ximian and Xamarin. He is also the founder of AI Grant and California YIMBY. And most recently, he has created and funded the Vesuvius Challenge, a million-dollar prize for reading an unopened Herculaneum scroll for the very first time. If we can decipher these scrolls, we may be able to recover lost gospels, forgotten epics, and even missing works of Aristotle. We also discuss the future of open source and AI, running GitHub and building Copilot, and why EMH is a lie. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. As always, the most helpful thing you can do is just to share the podcast: send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate. If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack.

Startup Insider
Investments & Exits - with Enrico Mellis from Lakestar

Startup Insider

Play Episode Listen Later Feb 9, 2023 24:40


The startup Magic has raised a $23 million Series A led by Alphabet's CapitalG, with participation from Elad Gil, Nat Friedman, and Amplify Partners. The platform, which is not yet generally available, is designed to help software engineers write, review, and debug code; it can communicate in natural language and collaborate with its users. Magic's goal is to reduce the cost and time involved in software development. Magic's CEO, Eric Steinberger, says that thanks to a new neural network architecture, the tool can do even more than GitHub's Copilot.

The Lunar Society
Aarthi & Sriram - Twitter, 10x Engineers, & Marriage

The Lunar Society

Play Episode Listen Later Dec 29, 2022 81:23


I had fun chatting with Aarthi and Sriram. We discuss what it takes to be successful in technology, what Sriram would say if Elon tapped him to be the next CEO of Twitter, why more married couples don't start businesses together, and how Aarthi hires and finds 10x engineers. Aarthi Ramamurthy and Sriram Krishnan are the hosts of The Good Time Show. They have had leading roles in several technology companies, from Meta to Twitter to Netflix, and have been founders and investors. Sriram is currently a general partner at a16z crypto, and Aarthi is an angel investor. Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Timestamps (00:00:00) - Intro (00:01:19) - Married Couples Co-founding Businesses (00:09:53) - 10x Engineers (00:16:00) - 15 Minute Meetings (00:22:57) - a16z's Edge? (00:26:42) - Future of Twitter (00:30:58) - Is Big Tech Overstaffed? (00:38:37) - Next CEO of Twitter? (00:43:13) - Why Don't More Venture Capitalists Become Founders? (00:47:32) - Role of Boards (00:52:03) - Failing Upwards (00:56:00) - Underrated CEOs (01:02:18) - Founder Education (01:06:27) - What TV Show Would Sriram Make? (01:10:14) - Undervalued Founder Archetypes Transcript. This transcript was autogenerated and thus may contain errors.[00:00:00] Aarthi: It's refreshing to have Elon come in and say: we are going to work really hard, we are going to be really hardcore about how we build things.[00:00:05] Dwarkesh: Let's say Elon says tomorrow: Sriram, would you be down to be the CEO of Twitter?[00:00:08] Sriram: Absolutely not. Absolutely not. But I am married to someone...[00:00:12] Aarthi: We used to do overnights at Microsoft. Like, we'd just sleep under our desks until the janitor would poke us out of there: I really need to vacuum your cubicle, get out of here. There's such joy in finding those moments where you work hard and you're feeling really good about it.[00:00:25] Sriram: You'd be amazed at how many times Aarthi and I have a conversation where I go, oh, this algorithm thing, I remember designing it, and now we are on the other side. We want to invest in something where we think the team and the company are going to win, and if they do win, there's huge value to be unlocked.[00:00:40] Dwarkesh: Okay. Today I have the good pleasure to have Aarthi and Sriram on the podcast, and I'm really excited about this. So you guys have your own show, the Aarthi and Sriram Good Time Show. You've had some of the top people in tech and entertainment on: Elon Musk, Mark Zuckerberg, Andrew Yang. And you guys are both former founders, advisors, investors; a general partner at Andreessen Horowitz; and you're an angel investor and an advisor now. So yeah, there's so much to talk about. Obviously there's also the recent news about your involvement on twitter.com. Yeah, let's get started.[00:01:19] Married Couples Starting Businesses[00:01:19] Dwarkesh: My first question: you guys are married, of course. People talk about getting a co-founder as like finding a spouse, and I'm curious why it's not the case that, given this relationship, more married people form tech startups. Does that already happen?[00:01:35] Aarthi: Um, I actually am now starting to see a fair bit of it. Uh-huh. Um, I do agree that wasn't the norm before.
Um, I think I remember asking PG the same thing when I went through YC, and he kind of pointed to himself and Jessica: you know, YC was their startup. And so there have been a lot of husband-and-wife companies over the last decade or so. So I'm definitely seeing that more in the mainstream. But yeah, you're right, it wasn't the norm before. Yeah, The Good Time Show is our project. It's...[00:02:09] Sriram: our startup. I mean, there are some good historical examples. Cisco, for example, came from a husband and wife, as did a few others. I think, you know, on the pro side of being co-founders: you need trust, you need to really know each other, you go through a lot of heavy emotional burdens together. And if you're spouses, you probably have a lot of chemistry and understanding, and that should help. On the con side, I think one is that startups are really hard, really intense, and you come home and both of you are on the exact same wavelength at the exact same time, going through the exact same highs and lows, as opposed to two people with two different jobs having maybe differing highs and lows. So that's really hard. The second part of it is that in a lot of work situations, it may just be more challenging, where people are like, well, person X said this, person Y said this, what do I do? And if you need to fire somebody, or something weird happens in a corporate manner, that may also be really hard. But having said that, you know...[00:03:13] Aarthi: Yeah, no, I think both of those are kind of overblown. I think the reason they say it's good to have co-founders is so that you can ride the emotional wave in a complementary fashion. If one person is really depressed about something, the other person can pull them out of it and have a more rational viewpoint. I feel like in marriages it works even better. So to your first point: they know each other really well. You are going to bring your work home; there is no separation between work and home as far as a startup is concerned. So why not do it together?[00:03:51] Sriram: Oh, well, I think there's one caveat, which is that we are kind of unique, because we've been together for over 21 years now... let's see. Wow, there's going to be some fact-checking on this video. 19? '99? Close enough. Close enough, right? He wishes he was 21. Gosh, it feels like 21. We may have to do some...[00:04:15] Aarthi: editing on this video. No, no, no. I think 20 years of virtually knowing each other, 19 years in person.[00:04:20] Sriram: There we go. Right. Fact-check accurate. Experts agree. But, um, you know, when we first met, even before we were dating, we were like, hey, we want to do a company together. And we bonded over technology: our first conversation on Yahoo Messenger was about all these founders and how we wanted to be like them. And we actually then worked together pretty briefly, when you were at Microsoft, before we actually started dating.
We were on these sort of adjacent teams, and we kind of knew each other in a work context. I think a lot of couples have never worked together, and so being in work situations, everything from how you run a meeting to how you disagree, is just going to be different. And I think that's going to be a learning curve for a lot of couples: it's one thing to have a strong, stable relationship at home; it's a different thing to be in a meeting and disagreeing. Aarthi runs meetings very differently from how I do. She obsesses over metrics; I'm like, ah, it's close enough, it's fine. So I do think there's a learning curve for couples, which is: working together is different from raising your family and being together. It obviously gives you a strong foundation, but it's not the same thing.[00:05:25] Dwarkesh: Have you guys considered starting a company or a venture together at some point?[00:05:28] Aarthi: Yeah, we've always wanted to do a project together. I don't know if it's a startup or a company or a venture. We have done a project together. Almost to the day two years ago, we started The Good Time Show. We started it as live audio on Clubhouse, and we recently moved it onto video on YouTube. And it's been really fun, because it's neither of our full-time jobs, but we spend enough cycles thinking through what we want to do with it, how to have good conversations, and how to make it useful for our audience. So that's our project together.[00:06:06] Sriram: Yep. And we treat it with the intellectual heft of a startup, which is: we look at the metrics, and we're like, oh, this is a good week, the metrics are up and to the right. You know, what's working for our audience? What do we do to get great guests? What do we do to...[00:06:21] Aarthi: Yeah, we just did our first in-person meetup for listeners of the podcast, in Chennai. It was great; we had over a hundred people show up. And it was also typical startup style: meet your customers. We could go talk to these people in person and figure out what they like about it, which episodes they really enjoy. It's one thing to see YouTube comments; it's another to actually engage with people in person. So I think, you know, we started it purely accidentally, we didn't really expect it to be the show that it is right now, but we're really happy it's turned out the way it has.
[00:06:59] Sriram: Absolutely. And it also helps me scratch an itch, which is, uh, building something, you know, keeps you close to the ground. Being able to actually do the thing yourself, as opposed to maybe having someone else tell you how to do it: whether it's video editing, or audio, or thumbnails, or just the mechanics of how to build anything. So I do think it's important to roll up your sleeves, metaphorically, and get your hands dirty and know things. And this really helped us understand the world of creators and content. And it's fun.[00:07:31] Aarthi: And go talk to other creators. Like, I think when we started this thing on YouTube, I remember Sriram just reached out to so many creators, saying: I want to understand how it works for you. What do you do? And these are people who are so accomplished, who are so successful, and they do this for a living. And we clearly don't. And so, to just go learn from these experts, it's kind of nice, like being a student again, to just learn a new industry all over again and figure out how to actually be a creator on this platform.[00:08:01] Dwarkesh: Well, you know what's really interesting is that both of you have been
One is, uh, Uh, before, let's say before you work with someone, um, 10 x people just don't suddenly start becoming 10 x.They usually have a history of becoming 10 x, uh, of, you know, being really good at what they do. And you can, you know, the cliche line is you can sort of connect the dots. Uh, you start seeing achievements pile up and achievements could be anything. It could be a bunch of projects. It could be a bunch of GitHub code commits.It could be some amazing writing on ck, but whatever it is, like somebody just doesn't show up and become a 10 x person, they probably have a track record of already doing it. The second part of it is, I've seen this is multiple people, uh, who are not named so that they don't get hired from the companies actually want them to be in, or I can then hire them in the future is, uh, you know, they will make incredibly rapid progress very quickly.So, uh, I have a couple of examples and almost independently, I know it's independently, so I have a couple of. Um, and I actually, and name both, right? Like, so one is, uh, this guy named, uh, Vijay Raji, uh, who, uh, was probably one of Facebook's best engineers. He's now the CEO of a company called Stats. And, um, he was probably my first exposure to the real TenX engineer.And I remembered this because, uh, you know, at the time I was. Kind of in my twenties, I had just joined Facebook. I was working on ads, and he basically built a large part of Facebook's ad system over the weekend. And what he would do is he would just go, and then he con he [00:11:24] Aarthi: continued to do that with Facebook marketplace.Yeah. Like he's done this like over and over and over [00:11:28] Sriram: again. . Yeah. And, and it's not that, you know, there's one burst of genius. It's just this consistent stream of every day that's a code checkin stuff is working. New demo somebody, he sent out a new bill or something working. And so before like a week or two, you just like a, you know, you running against Usain Bolt and he's kind of running laps around you.He's so far ahead of everyone else and you're like, oh, this guy is definitely ahead. Uh, the second story I have is, uh, of, uh, John Carmack, uh, you know, who's legend and I never worked with him in, uh, directly with, you know, hopefully someday I can fix. But, uh, somebody told me a story about him. Which is, uh, that the person told me story was like, I never thought a individual could replace the output of a hundred percent team until I saw John.And there's a great story where, um, you know, and so John was the most senior level at Facebook and from a hr, you know, employment insecurity perspective for an individual contributor, and it at, at that level, at Facebook, uh, for folks who kind of work in these big tech companies, it is the most, the highest tier of accomplishment in getting a year in a performance review is something called xcs Expectations, or, sorry, redefines, right?Which basically means like, you have redefined what it means for somebody to perform in this level, right? Like, it's like somebody, you know, like somebody on a four minute mile, I'll be running a two minute mile or whatever, right? You're like, oh, and, and it is incredibly hard sometimes. You doing, and this guy John gets it three years in a row, right?And so there's this leadership team of all the, you know, the really most important people on Facebook. And they're like, well, we should really promote John, right? 
Like, because he's done this three years in a row, he's changing the industry. Three years in a row. And then they realized, oh wait, there is no level to promote him to, unless he becomes CEO. Well, maybe, but I don't think he wanted to. And so, uh, the story I heard, and I dunno if it's true, but I like to believe it's true, is they invented a level which, still now, only John Carmack has gotten. Right. And, um, I think, you know, it's his level of productivity, uh, his, uh, intellect, uh, and the consistency over time. And, you know, if you talk to anybody at Facebook who worked with him, they're like, oh, he replaced hundred-person teams all by himself, and maybe was better than a hundred-person team, just because he had a consistency of vision, clarity, and activity. So those are [00:13:32] Aarthi: the two stories. I've also noticed, I think, uh, actually Sriram, I think our first kind of exposure to a 10x engineer was actually Barry Bond, uh, from Microsoft. So Barry, um, uh, basically wrote pretty much all the emulation engines and emulation systems that we all use, uh, and, uh, he was just prolific. And I think, in addition to what Sriram had said about the qualities and tenets, um, I've generally seen these folks also be low ego, and they almost have this, like, responsibility to, um, mentor and coach other people. Uh, and Barry kind of took us under his wing, and he would do these Tuesday lunches with us where, you know, we were fresh out of college, and we would just ask these really dumb questions on, you know, um, scaling things and how do you build stuff. And I was working on, uh, runtimes and loaders and compilers and stuff. And so he would just take the time to answer our questions and just be there and be really nice about it. I remember when you moved to Redmond, he would just spend a weekend, like, oh yeah, driving you about, and just doing things like that. But very low ego, and within their teams and their orgs, they're just considered to be legends. Yes. Like, you know, everybody would be like, oh, Barry Bond. Yeah, of course. [00:14:47] Sriram: Yeah. I can't emphasize enough the consistency part of it. Um, you know, with Barry, or, I got to briefly work with Dave Cutler, who's kind of the father of modern operating systems: every day, you're on this email list at the time which would show you check-ins as they happen, and they would have something every single day. Um, every day. And it'll be tangible and meaty, and, you know, you just get a sense that this person is not the same as everybody else. Um, by the way, there's a couple of people I can actually point to who I haven't worked with, uh, but I follow on YouTube or streaming. Uh, one is, uh, Andreas Kling, who builds SerenityOS; we had a great episode with him. Oh, the other is George Hotz, uh, geohot. And I urge people, if you haven't, I haven't worked with either of them, uh, but I urge you to kind of watch their streams, right? Because, uh, you go like, well, how does Andreas Kling build a web browser and an operating system, which he builds by himself, in such a short period of time? And you watch his stream, and he's not doing some magical new, you know, bit-flipping sorting algorithm nobody has seen before. He's just doing everything you would do, but at five times the speed. I, yep, exactly.
[00:15:48] Dwarkesh: I'm a big fan of the George Hotz streams, and yeah, that's exactly it. You know, he's also curling requests, and he's also, you know, spinning up an experiment in a Jupyter notebook, but he's just doing it [00:15:58] Aarthi: way, way faster, way more efficiently. Yeah. [00:16:00] 15 Minute Meetings [00:16:00] Dwarkesh: Yeah. That's really interesting. Um, so Aarthi, you've gone through Y Combinator, and famously they have that 15-minute interview, yes, where they try to grok what your business is and what your potential is. Yeah, yeah. But just generally, it seems like in Silicon Valley you guys make a lot of decisions, in terms of investing or other kinds of things, in very short calls, you know. Yeah. And how much can you really, what is it that you're learning in these 15-minute calls when you're deciding: should I invest in this person? What is their potential? What is happening in that 15 minutes? [00:16:31] Aarthi: Um, I can speak about YC from the other side, from, like, being a founder pitching, right? I think, yes, there is a 15-minute interview, but before that there is a whole YC application process. And, uh, for YC, as a set of investors, I'm sure they're looking for specific signals, but for me as a founder, the application process was so useful, um, because it really makes you think about what you're building. Why are you building this? Are you the right person to be building this? Who are the other people you should be hiring? And so, I mean, there are a few questions, like, one of my favorite questions is, um, how have you hacked a non-computer system to your advantage? Yeah. And it really makes you think about it, huh. And you kind of notice that many good founders have that pattern of hacking other systems to their advantage. Um, and so to me, I think more than the interview itself, the process of filling out the application form, doing that little video, all of that gives you the entire scope of your company in your head. Because it's really hard, when you have this idea and you're kind of noodling about with it and talking to a few people, and you don't really know if this is a thing, to just crystallize the whole vision in your head. So I think, uh, that's on point. Yes. Um, the 15-minute interview, for me, honestly, was kind of controversial. Uh, I had basically stayed up the previous night, uh, building out this website, and, uh, that morning I showed up and I had my laptop open, really eager to tell them what I was building, and I kept getting cut off. And I realized much later that that's kind of by design. Yeah. You just get cut off all the time, like, why would anybody use this? And you start to answer, and it's like, oh, but I don't agree with that. And part of it makes you upset, but part of it also makes you think how to compress all that information into a really short amount of time and tell them. Um, and so that interview happens, and I felt really bummed out, because I had this website I wanted to show them. So while walking out the door, I remember just showing Garry Tan, um, the website, and he kind of scrolls it a little bit, and he's like, this is really beautifully done.
And I was like, thank you, I've been wanting to show you this for 15 minutes. Um, and I mentioned it to Garry recently, and he laughed about it. And then, uh, I didn't get selected in that timeframe. They gave me a call and they said, come back again in the evening and we are going to do round two, because we are not sure. Yeah. And so in the second interview there was PG and Jessica, and they both were sitting there, and they were just grilling me. It was a slightly longer interview, and PG was like, I don't think this is gonna work. And I'm like, how can you say that? I think this market's really big. And I'm just getting really upset, because I've been waiting this whole day to get to this point, and he's just being cynical and negative. And then at some point he starts smiling at Jessica, and I'm like, oh, okay, they're just baiting me to figure it out. And so that was my process. And by the evening, Sriram was working at Facebook, and I remember driving down from Mountain View to Facebook, and Sriram took me to the Sweet Stop. Oh yeah. Which is like their, you know, Facebook has this fancy, uh, sweet shop, like an ice cream store. I [00:19:37] Sriram: think they had a lot more perks over the years, but that was very fancy back then. [00:19:40] Aarthi: So I had like two scoops of ice cream in each hand, and, uh, the phone rang, and I was like, oh, hold on to these. And I grabbed it, and, you know, I think it was Michael Seibel, or I don't know who, but somebody called me and said, you're through. So that was kind of my process. So even though there was only a 15-minute interview, mine was actually much longer afterward. And even before that, the application process was much more detailed. So it sounds [00:20:01] Dwarkesh: like the 15 minutes is really there for, like, can they rattle you? Can they, can they [00:20:06] Aarthi: rattle you, and how do you react? Yeah, yeah, yeah. Um, I also think they look for how succinct you can be in explaining what the problem is. They do talk to hundreds of companies; it is a lot. And so I think, can you compress a lot of it and convince them? Because if you can't convince these folks here, then in three or four months' time, how are you going to do Demo Day and convince a whole room full of investors? [00:20:27] Sriram: Yeah. I think it's a bit different for us, uh, on the VC side, uh, because of two things. Number one is, uh, you know, so much of it is having a prepared mind before you go into the meeting. For example, if you're meeting a company very early, are we really investing before having met every single other person who's working in this space, who has ideas in the space? You generally know what's going on, you know what the kinds of technologies are, or the go-to-market approaches are. You've probably done a bunch of homework already. It does happen where you meet somebody totally cold and, uh, you really want to invest, but most often you've probably done some homework, at least on the space, if not the actual company. Um, and so when you're in the meeting, I think you're trying to judge a couple of things. And these are obviously kind of stolen from Chris Dixon and others. Um, one is their ability to walk you through their idea maze.
And so very simply, um, you know, the idea maze (and I think it was Balaji or Chris Dixon who came up with this) is the idea that, hey, um, how you got to the idea for your company really matters, because you went and explored all the dead ends, all the possibilities. You've been maneuvering around for years and years, and you've kind of come to the actual solution. And the way you can tell whether somebody's gone through the idea maze is when you ask 'em questions and they tell you about, like, five different things they've tried that did not work. And it's really hard to fake it. I mean, you can maybe fake it for one or two questions, but if they talk about how they tried X, Y, and Z, and they have an opinion on each of those, if they've really thought about it, you're like, okay, this person really studied the idea maze. And that's very powerful. Uh, the second part of it is, uh, you know, Alex Rampell, uh, one of my partners, has this thing called the Manifestation Framework, which sounds like a self-help book on Amazon, but it's not, uh, you know. What it is, is: so much of being an early-stage startup founder is about the ability to manifest things. Uh, manifest capital, manifest the first hire, uh, manifest, uh, the first BD partnership. And, um, usually, you know, if you don't show a sign of being able to do that, it's really hard to then, after raising money, go and close this amazing hotshot engineer or salesperson, or close this big partnership. And so, in the meeting, right, if you can't convince us, right, and these are people whose day job is to give you money, right? Like, if I spent a year without giving anybody money, I'd probably get fired. If you can't convince us to give you money, you're probably going to have a hard time closing this amazing engineer and getting that person to come over from Facebook, or closing this amazing partnership against a competitor. And so that's kind of a judge of that. So it is never about the actual 60 minutes. A large part of making up our minds is that one or two conversations, but there's so much which goes in before and after that. Yeah, yeah. Speaking of [00:22:57] What is a16z's edge? [00:22:57] Dwarkesh: venture capital, um, I'm curious, so Andreessen Horowitz, and I guess Y Combinator too, um, but I mean, any other person who's investing in startups: they were started at a time when there was much less capital in the space, and today, of course, there's been so much more capital poured into the space. So how do these firms, like, how does a16z continue to have an edge? What is its edge? How can it sustain it, [00:23:20] Sriram: given the fact that so much more capital has entered the space? We show up on podcasts like the Lunar Society. And so if you are watching this and you have a startup idea, uh, come to us, right? Uh, no. Come, come to the Lunar Society. Well, yes. I mean, maybe so. Trust me, you pitch on here, you're gonna find, uh, a check right there. Uh, actually, you think I joke, but there's a bit of truth to it. But no, I've had [00:23:40] Dwarkesh: like, [00:23:40] Aarthi: this suddenly became a very different [00:23:43] Sriram: conversation. I have had people, this is a totally ludicrous [00:23:46] Dwarkesh: idea, but I've had people, like, give me that idea. And it sounds crazy to me, because, like, I don't know what company's gonna be successful, right?
So, but I haven't [00:23:55] Aarthi: become an investor. [00:23:57] Sriram: I honestly don't know. But it is something like what you're talking about: Lunar Society Fund One, coming up, right? You heard it here first. Uh, well, I think, first of all, you know, there's something about the firm, uh, in terms of how it's set up philosophically and how it's set up, uh, kind of organizationally. Philosophically, the firm is an optimist, uh, more than anything else. At the core of it, we are optimists. We are optimists about the future. We are optimists about the impact of founders and their ability to kind of shape that future. Uh, we are optimists at heart, right? Like, I tell people: you can't work at a16z if you're not an optimist. That's at the heart of everything that we do. Um, and very tied to that is the idea that, you know, um, software is eating the world. It's true. It was true 10 years ago when Marc wrote that piece, and it's just as true now, and we just see more and more of it, right? Like, every week. You know, look at the week we are recording this: everyone's been talking about ChatGPT, and all the industries that can get shaped by ChatGPT. So our idea is that software is gonna grow more and more. So, one way to look at this is: yes, a lot more capital has entered the world, but there should be a lot more, right? Because these companies are gonna get bigger. They're gonna have bigger impacts on, uh, human lives and the world at large. So that's, uh, you know, one school of thought. The other school of thought, uh, which I think you were asking about, say, valuations, uh, et cetera, is, uh, you know, um, again, one of my other partners, Jeff Jordan, uh, always likes to tell people: we don't go discount shopping, right? The way we think about it is, when we're investing in a market, we want to really map out the market, right? Uh, so, for example, I work on crypto, uh, and, uh, you know, if you are building something interesting in crypto and we haven't seen you, we haven't talked to you, that's a fail, that's a miss, right? We ideally want to see every single interesting founder, company, idea. And a category can be very loose. Crypto is really big; we usually segment it further. Or if you look at enterprise infrastructure, you can break it into, like, you know, AI or different layers and so on. But once you map out a category, you want to know everything. You wanna know every interesting person, every interesting founder. You wanna be abreast of every technology change, every go-to-market hack, every single thing. You wanna know everything, right? And then, uh, the idea is that, uh, we would love to invest in what hopefully becomes the category-defining market leader, uh, or, you know, somebody who's maybe close to the market leader. And our belief is that these categories will grow and, you know, they will capture huge value. Um, and as a whole, software still continues to be undervalued by, uh, you know, the world. So, um, which is why, again, going back to what Jeff would say: we are not in the business of "oh, we are getting a great deal," right? We want to invest in something where we think the team and the company and their approach are going to win in this space, and we want to help them win. And we think if they do win, there's huge value to be unlocked. Yeah, I see. I see.
Um, [00:26:42] Future of Twitter [00:26:42] Dwarkesh: let's talk about Twitter. [00:26:44] Sriram: Uh, I need a drink. I need a drink. [00:26:48] Dwarkesh: Um, tell me, what is the future of Twitter? What is the app gonna look like in five years? You've, um, I mean, obviously you've been involved with the Musk venture recently, but, um, you've had a senior position there, you were an executive there a few years ago, and you've also been an executive at, uh, you've both been at Meta. So what [00:27:06] Sriram: is the future of Twitter? It's gonna be entertaining. Uh, what is it Elon says? The most entertaining outcome is the most, [00:27:12] Aarthi: uh, like, the most likely outcome is the most entertaining outcome. [00:27:16] Sriram: Exactly right. So I think it's gonna be the most entertaining outcome. Um, I mean, I think a few things. Uh, first of all, uh, I deeply care about Twitter. Yeah. Uh, and all of my involvement, uh, you know, over the years, uh, professionally, has kind of been a lagging indicator to the value I've gotten from the service personally. I have met hundreds of people, uh, through Twitter. Uh, hundreds of people have reached out to me. Thousands. Exactly. Uh, and, you know, I met Marc Andreessen through Twitter. Uh, I met, like, you know, people who are now very good friends of mine; we met through Twitter. We met at Twitter, right? There we go. Right. Uh, just [00:27:50] Aarthi: like incredible outsized impact. Yeah. Um, and I think it's really hard to understate that, because, uh, right now it's kind of easy to get lost in the whole, you know, Elon versus the previous management story, like all of that. Outside of all of that, I think the thing I like to focus on is the product and the product experience. And I think even with the product experience that we have today, which hasn't dramatically changed for years now, um, it's still offering such outsized value. If you can actually innovate and build really good product on top, I think it can just be really, really good for humanity overall. And I don't even mean this in like a cheesy way. I really think Twitter as a tool could be just really, really effective and enormously good for everyone. Oh yeah. [00:28:35] Sriram: Twitter is, I think, sort of memetically upstream of everything that happens in culture, in, uh, so many different ways. Like, um, you know, okay, I'll elide some of the details, uh, but a few years ago I remember there was this, uh, somewhat salacious, controversial story which happened in entertainment, and, uh, I wasn't paying attention to it, except that something caught my eye, which was that, uh, every story had the same two tweets. And these are not tweets from any famous person. It was just somebody who had some followers, but not a lot of followers. And I wondered: why is this being quoted in every single story? Because it's not from the person who was actually in the story themselves. And it turned out that, uh, what had happened was, uh, you know, somebody wrote the tweet, it had gone viral, um, it started trending on Twitter, um, and a bunch of people saw it. They started writing news stories about it, and by that afternoon it had gone from a meme to reality, and a lot of people in entertainment had to kind of go respond to that. And I've seen this again and again and again, right?
Uh, sports, politics, culture, et cetera. So Twitter is memetically upstream of so much of life. Uh, you know, one of my friends had said, like, Twitter is more important than the real world. Uh, which, I don't know about that, but, uh, you know, I do think it has huge, sort of, uh, culture-shaping value. Yeah. The thing I think about Twitter is, so much of the network is very Lindy. So when you ask, you know, five years from now, what does that mean? Well, it's something which has kind of stood the test of time to some extent. And, um, well, the Lindy effect generally means (I don't know if it's usually used in this context) the idea that things which have withstood the test of time tend to also withstand the test of time in the future, right? Like, if you talk to Nassim Taleb, it's like, well, people have been lifting heavy weights and drinking red wine for 2,000 years, so let's continue doing that; it's probably a good thing. Um, but that's Twitter today. What is the future of Twitter? Well, uh, I think, one is, I think that's gonna continue to be true, right? Ten years from now, five years from now, it's still gonna be the memetic battleground. It's still gonna be the place where ideas are shared, et cetera. Um, you know, I'm very unabashedly a big fan of Elon, uh, as a person, as a founder, and of what he's doing at Twitter. And my hope is that, you know, he can kind of continue that. And, you know, I can't actually predict what he's gonna go build; he's kind of talked about it. Maybe that means bringing in other product ideas. Uh, I think he's talked about payments. He's talked about having longer-form video. Uh, who knows, right? But I do know, like, five years from now, it is still gonna be the place of active conversation where people fight, yell, discuss, and maybe sometimes come together. Yeah. Yeah. Uh, the Twitter [00:30:58] Is Big Tech Overstaffed? [00:30:58] Dwarkesh: um, conversation has raised a lot of questions about how over- or understaffed, uh, these big tech companies are, and in particular, um, how many people you can get rid of and the thing basically still functions, or how fragile these code bases are. And having worked at many of these big tech companies, how big is the bus factor, would you guess? Like, what percentage of people could I fire at a random big tech [00:31:22] Sriram: company? Why? I think, uh, [00:31:23] Aarthi: yeah, I think that's one way to look at it. I think the way I see it is, there are a few factors that go into this, right? Like, pre-COVID, post-COVID: through COVID, everybody became remote, remote teams. As you scaled, it was kind of hard to figure out what was really going on in different parts of the organization. And I think a lot of inefficiencies were overcome by just hiring more people. It's like, oh, you know what, that project's lagging, let's just add 10 more people. And that kind of became the norm. Yeah. And I think a lot of these teams just got bigger and bigger and bigger. I think the other part of it was also, um, a lot of how performance ratings and the culture of moving ahead in your career path worked.
In a lot of these companies, that was dependent on how big your team was. And so every six months, or however long your performance review cycle is, instead of looking at what this person shipped, or what impact this person or their team had, it became more of: well, this person's got a hundred-person org or a 200-person org, and next year they're gonna have a 10% increase, and that's gonna be like this much. And, you know, that was the conversation. And so a lot of the success and promo cycles and all of those conversations were tied to the amount of headcount this person had under them, which I think is a terrible way to think about how you're moving up the ladder. Um, you should really, even at a big company, be thinking about the impact that you've had and the customers you've reached and all of that stuff. And I think at some point people kind of lost that, uh, and picked the simpler metric, which is just headcount, and it's easy. Yeah. To just scale that kind of thing. So I think now, with Elon doing this, where he is cutting costs, and I think Elon's doing this for a different set of reasons. You know, Twitter's been losing money, and I think it's about driving efficiency. This is no different from anybody else who comes in, takes over a business, looks at it and says, wait, we are losing money every day; we have to do something about this. It's not about cutting fat for the sake of it or anything. This business is not gonna be viable if we keep going the way it is. Yeah. Just pure economics. And so when he came in and did that, I'm now seeing this, and I'm sure Sriram is too, at a16z and his companies, uh, but even outside. I see this with my angel investment portfolio of companies, um, and just founders I talk to, where people are like, wait, Elon can do that with Twitter? I really need to do that with my company. And it's given them the permission to be more aggressive and to kind of get back to the basics of: why are we building what we are building? These are our customers, this is our revenue; why do we have this many employees? What do they all do? And not from a place of being cynical, but from a place of: I want people to be efficient in doing what they do, and how do we [00:34:06] Sriram: make that happen? Yeah. I stole this, I think somebody said this on Twitter: Elon has shifted the Overton window of, uh, the playbook for running a company. Um, which is, I think if you look at Twitter, uh, you know, and by the way, I would say, you know, the sort of warning that shows up, which is "don't try this at home": so don't try some of these unless you're Elon, and maybe try your own version of these. But, you know, number one is the idea that you can become better not through growth, but by cutting things. You can become better by demanding more out of yourself and the people who work for you. Uh, you can become better by setting a higher bar for the talent that you bring into the company and that you retain in the company. I think at the heart of it, by the way, uh, you know, one of the things I've kind of observed from Elon is
his relentless focus on substance, which is, every conversation is gonna be like, you know, the meme about "what have you gotten done this week," and it kind of extends to everything else, which is: okay, what are we building? What is the thing? Who's the actual person doing the work? As opposed to some manager two levels above aggregating, you know, the reports and then telling you what's being done. There is a relentless focus on substance. And my theory is, by the way, I think maybe some of it comes from Elon's background in, uh, SpaceX and Tesla, where, at the end of the day, you are bound by the physics of the real world, right? If you get something wrong, the rockets won't take off or won't land. There'll be a kaboom, right? What's the phrase that they use, uh, rapid unplanned disassembly, which is, like, better than saying it went kaboom. Uh, but, you know, so the constraints are: if you get something wrong at a social media company, can people even tell? If you get something really wrong at SpaceX or Tesla, people can tell, right? Very dramatically so. And so I think there was a relentless focus on substance, right? Uh, being correct, um, you know, what is actually being done. And I think that extends to Twitter too. And a lot of other founders I've talked to, uh, sometimes in private, look at this and go: oh, there is a different playbook from the one they had instituted or were used to when they were growing up. They've now seen another culture where they can actually do this, because they've seen somebody else do it. And they don't have to do the exact same thing, you know, Elon is doing. Uh, they don't have to. But they can do their variations of demanding more of themselves, demanding more of the people that work for them, um, focusing on substance, focusing on speed. Uh, I think those are all core elements. [00:36:24] Aarthi: I also think, over the last few years (this may be controversial; I don't know why it is, but it somehow is) you can no longer talk about hard work as, like, a recipe for success. And, you know, growing up for us, when people said that, or our parents said that, we'd kind of roll our eyes and be like, yeah, sure, we work hard, we get it. Yeah. But I think over the last couple of years it just became not cool to say that if you work hard, there is a shot at finding success. And I think it's kind of refreshing, almost, uh, to have Elon come in and say: we are gonna work really hard; we are gonna be really hardcore about how we build things. And it's very simple: you have to put in the hours. There is no shortcut to it. And I think it's nice to bring it all back to the basics. And, uh, I like the fact that we are now talking about it again. And it's sad that talking about working really hard, or having beds in your office (we used to do that at Microsoft! Yeah) is now suddenly really controversial. And so, um, I'm all for this. Like, you know, it's not for everyone, but if you are the type of person who really enjoys working hard, really enjoys shipping things and building really good things, then I think you might find a fit in this culture. And I think that's a good thing. Yeah.
[00:37:39] Sriram: I think there's nothing remarkable that has been built without people just working really hard. It maybe doesn't happen for years and years, but I think, for some short-term burst, you need some really passionate, motivated, smart people working really hard, and hard doesn't just mean time; it can mean so many different dimensions. But I don't think anything great gets built without that. So, uh, yeah, it's interesting. We [00:37:59] Aarthi: used to do overnights at Microsoft. Like, we'd just sleep under our desks, um, until the janitor would poke us out of there, like, "I really need to vacuum your cubicle, get out of here." And so we would just find another spot and go crash on some couch. But those were some of our fun days, and we look back at it and think: we built a lot. I think at some point, when I walked over to his cubicle, he was looking at Windows source code, and we're like, we are looking at the Windows source code, this is the best thing ever! I think there's such joy in finding those moments where you work hard and you're feeling really good about it. [00:38:36] Sriram: Yeah, yeah, yeah. Um, so you [00:38:37] Next CEO of Twitter? [00:38:37] Dwarkesh: were talking about working hard and bringing talent into the company. Uh, let's say Elon says tomorrow: you know what, uh, Sriram, I've got these other three companies that I've gotta run, and I need some help running this company. And he says, Sriram, would you be down to be the next, [00:38:51] Sriram: uh, next CEO of Twitter? Absolutely not. Absolutely not. But I am married to someone. No, uh, no. Uh, you know, the answer is absolutely not. And, you know, fun story; I don't think I've said this in public before. When I was in the process of, you know, talking to Andreessen Horowitz, and, you know, it's not like a very linear process, it's kind of a relationship that develops over time: I had met Marc Andreessen, uh, multiple times over the years, and we'd been having this discussion of, hey, do you want to come do venture, and if you wanna do venture, do you wanna come do it with us? And, um, one of the things Marc would always tell me is, uh, something like: we would love to have you, but you have to scratch the itch of being an operator first. Um, because there are a lot of ways operator-turned-VCs fail. Um, and I can get into some of them if you're interested, but one of the common ways they fail is they're like, oh, I really want to go back to, um, building companies. And the thing is, Andreessen, more than most firms, really respects entrepreneurship; it's at the heart of what we do. But he was like, you have to get that out of your system. You have to be like, okay, I'm done with that world, I want to now do this, before, you know, you come over, right? And if so, let's have this conversation; but if not, we will wait for you. Right. And he kept telling me this, and at some point I decided, uh, that, uh, you know, I just love this mode of working. Um, you know, there are many things kind of different about being an operator versus a VC, uh, and you have to actually kind of retrain yourself for what is actually a new profession.
But one of the things is, like, you know, you kind of have to be more of a coach, and more open to working with very different kinds of people without having direct agency. And it's a very different mode of operation, right? You have to be like: well, I'm not the person doing the thing. I'm not the person getting the glory. I'm here to fund, obviously, but really to help, support, coach, be a lending hand, be a supporting shoulder, whatever the metaphor is, for somebody else doing the thing. And so you have to have that shift in your brain. And I think sometimes, when the few operator-turned-VCs don't work out, there are a few reasons. Uh, the number one reason, I would say (and I hate the word "operator," by the way; it just means you have a regular job) is that when you have a regular job, you know, you're an engineer, you're a product manager, you're a marketer, whatever, you get feedback every single day about how you're doing. If you're an engineer, you're checking in code, or, you know, your manager sees you hired a great person, whatever it is. When you're a VC, you're not getting direct feedback, right? You know, maybe what I'm doing now, recording this with you, is the best thing ever, because some amazing founder is gonna watch it and come talk to me. Or maybe it's a total waste of time and I should be talking to somebody else. You have no way of knowing. So you really have to think very differently about patience, about how you spend your time, and you don't get the dopamine of, oh, I'm getting this great reinforcement loop. Um, the second part of it is, because of that lack of a feedback loop, you often don't know how well you're doing. Also, you don't have that fantastic product demo moment, you know, like an engineer: oh, I got this thing working, the build is working, it's 10x faster, or this thing actually works, whatever the thing is. You don't get that feedback loop, uh, because that next great company, you know, the next Larry and Sergey or Brian Armstrong, might walk in through your door or Zoom meeting tomorrow, or maybe two years from now. So you don't really have a way to know. Um, so you have to focus on different ways to figure out how well you're doing. The third part of it is, uh, you know, the feedback loops are so long that you can't really test things. When I was a product manager, you would ship something; if you don't like it, you kill it, you ship something else. At our firm, you invest in somebody, you're working with them for a decade, if not longer; really for life, in some ways. So you are making much more intense but much less frequent decisions, as opposed to a regular job, where you're making very frequent, very common decisions every single day. So there are a lot of differences, and I think, you know, sometimes folks who are, like, a former CEO or former VP of product (I talk to a lot of them) came to VC and then went back, and they either couldn't adapt, or didn't like it, or didn't like the emotions of it. And I had to really convince myself that, okay, hopefully I wouldn't face those problems; I'd probably face maybe some other problems.
And, uh, so yes, that's the long way of saying no. [00:43:13] Why Don't More Venture Capitalists Become Founders? [00:43:13] Dwarkesh: Um, that does partly answer another question I had, which was: you know, there is obviously this pipeline of people who are founders who become venture capitalists. And it's interesting to me; I would think that the converse of that would be just as common, because if you're an angel investor or venture capitalist, you've seen all these companies, you've seen dozens of companies go through all these challenges, and then you'd be like, oh, I understand. [00:43:36] Sriram: Wait, why do you think more VCs don't become founders? You have some strong opinions on this. [00:43:40] Dwarkesh: Should more venture capitalists and investors become founders? I think [00:43:43] Aarthi: they should. I don't think they will. Ouch. I dunno, why not? Um, I think, uh, look, I think the world is better with more founders. More people should start companies; more people should be building things. I fundamentally think that's what needs to happen. Like, our single biggest need is that we just don't have enough founders, and we should all be trying new things, building new projects, all of that. Um, I think for venture capitalists, what happens (and this is just my take; I don't know if Sriram agrees with it) is that they see so much from different companies. And if you're really successful at what you do as a VC, you are probably seeing hundreds of companies operate. You're seeing how the sausage is made in each one of them. In an operating job, you sort of have this linear learning experience; you go from one job to the other. Here, you kind of see it all in parallel: you're probably on, like, 50, 60 boards. Uh, and oftentimes, when something comes to the investor as an issue, it is usually a bad problem. Um, and you see what the challenges are at every company, and, you know, the best companies we know have all had this near-death experience and come out of it. That's how the best founders are made. Um, you see all of that, and I think at some point you kind of have this fear of, I don't know, I just don't think I wanna bet everything on this one startup. Also, I think it's very hard to have focus if you've honed your skillset to be much more breadth-first, looking at a portfolio of companies and being helpful to every one of them. And I see Sriram do this every day, where (I have no idea how he does it) he context-switches every 30 minutes. Yeah. And it's crazy. Like, I would go completely insane: board meeting, founder pitch, oh, fill this operating role for this portfolio company, second board meeting, third board meeting, founder pitch, founder pitch, founder pitch. And that's, you know, all day, every day, nonstop. Um, I don't think you can then turn your mindset into being like: I'm gonna clear up my calendar, and I'm just gonna work on this one thing. Yeah. And it may be successful, it may not be, but I'm gonna give it my best shot. It's a very, very different psychology. I don't know, what do you [00:45:57] Sriram: think? Well, one of my partners likes to say, like, I don't know what VCs do all day; the job is so easy, uh, you know, they should stop complaining. I mean, being a founder is really hard.
Um, and I think, you know, there's a part of it where the VCs are like: oh, wait, I see how hard it is, and I'm happy to support, but I don't know whether I could go through it myself. Because it's just really hard. Which is kind of why we have so much, uh, sort of respect and empathy, uh, for the whole thing. [00:46:20] Aarthi: I do like a lot of VCs; the best VCs I know are people who've been operators in the past, because they have a lot of empathy for what it takes to go operate. Um, and I've generally connected better with them, because you're like: oh, okay, you're a builder, you've built these things, so you know, kind of thing. Yeah. Um, but I do think a lot more VCs should become [00:46:38] Sriram: founders. Yeah. I think there are a couple of other things which happen, which is, like Aarthi said, sometimes, you know, you kind of start to pattern match, and you analyze, and your brain kind of becomes so focused on context switching. And I think when you're a founder, you need to just dedicate, you know, everything to just one idea. And it's not just VC; it's sometimes with academics also, where you are a person who's supporting multiple different kinds of disciplines and context switching between the various PhD students you support. Uh, but it's very different from being in the lab and working on one problem for long, long years, right? So, um, I think it's kind of hard to then context switch back into just focusing on one problem, one mission, day in and day out. So I think that's hard, uh, but, you should be a founder. Yeah, I think more people should try. [00:47:32] Role of Boards [00:47:32] Dwarkesh: Speaking of being on boards: uh, the FTX saga has raised some questions about what the role of a board is, even in a startup-stage company. And you guys are on multiple boards, so I'm curious how you think about it. There's a range between micromanaging everything the CEO does and just rubber-stamping everything the CEO does. What is the responsibility of a board in a startup? [00:47:54] Aarthi: What, this is something I'm really curious about too. [00:47:57] Sriram: Well, I just wanna know, on the FTX saga, whether we are gonna beat the FTX episode in terms of views on your podcast, right? Like, so if you folks are listening, let's get us to number one. So, you know, on YouTube: like and subscribe; they're already listening. [00:48:10] Aarthi: What do you mean, get us [00:48:10] Sriram: to number one? Okay, then spread the word, right? Like, uh, don't [00:48:13] Aarthi: watch other episodes. That's kinda what you [00:48:15] Sriram: should. I mean, if there's [00:48:16] Dwarkesh: like some sort of scandal with a16z, we could definitely beat FTX. [00:48:21] Sriram: Uh, yeah, I think it's gonna be really hard to beat that one, uh, for sure. Oh my goodness. Um, but no, [00:48:29] Aarthi: I'm genuinely curious about [00:48:31] Sriram: this too.
Well, uh, it's a few things. You know, there are multiple schools of thought, I would say. There's one school of thought (which I don't think I totally subscribe to, but which some of the later-stage, especially public-market folks I work with, sometimes subscribe to) which is that the only job of a board is to hire and fire the CEO. I don't think I really subscribe to that, I think because we deal with more, uh, early-stage venture. Um, a lot of the companies I work with are at, you know, seed or Series A or B: they have something working, but they have a long way to go. Um, and hopefully this journey goes on for many, many years. And I think the best way I've thought about it is, people would say you want to be a waveform dampener, which is, uh, you know: for example, if the company's kind of soaring, you want to be the check and balance, like, hey, okay, what do we do to make sure we are covering our bases, dotting the i's, crossing the t's; be very careful about it, because the natural gravitational pull of the company is gonna take it in one direction. On the other hand, uh, if the company's not doing very well and everybody's beating it up about it (your customers, you're not able to close deals, the press is beating you up) you want to be the person who is supportive to the CEO, who's rallying everybody, helping convince management to stay, helping close hires. So, um, there are a lot of other things that go into being a board member. Obviously there's the fiduciary-responsibility part of things, um, because you represent so many stakeholders. But I think, at the heart of it, I think about, uh, you know, how do I help the founder and kind of dampen the waveform. Um, the other interesting part is actually the board meetings themselves. Uh, and I do think, like, about once a year or so, there's almost always a point every 18 months or so in a company's lifetime where you have some very decisive, interesting moment, right? It could be good, it could be bad. And I think those moments can be really, really pivotal. So I think there's huge value in showing up to board meetings really prepared, uh, where you've done your homework, you've had all the conversations maybe beforehand, um, and you're coming in to add real value. Like, nothing annoys me more than somebody just showing up, maybe cheering on the founder once or twice, and then going away; I don't think you can make a big difference that way. But, you know, I think about: okay, how are we dampening the waveform, you know, making sure the company, [00:50:58] Aarthi: But I guess the question then is, like, should startups have better corporate governance compared to where we are today? Would that have avoided, like, say, the FTX [00:51:08] Sriram: saga? No, I mean, I guess there'll be a legal process and you'll find out, right? With the FTX case, nobody really knows, you know, what level of, uh, who knew what when, and what level of deception, you know, unfolded, right? So, uh, yeah.
Maybe. But, you know, it could have been very possible that, you know, somebody just fakes things, uh, lies to you in multiple ways. [00:51:36] Aarthi: To,

CFO Thought Leader
840: Putting a Spin on Your Talent Pinwheel | Bryan Morris, CFO, Demandbase

CFO Thought Leader

Play Episode Listen Later Oct 9, 2022 45:22


Among the recruitment milestones that populate Bryan Morris's CFO resume, few can match the 6-month talent acquisition binge that he launched during the first quarter of 2015. “In terms of key hires, I never hired faster than I did then,” comments Morris, as he begins to lay out the circumstances that led to his need to speedily attract and hire talent. At the time, Morris was the newly appointed CFO of Xamarin, a creator of software tools used for mobile app development. This firm, then led by cofounder and CEO Nat Friedman, had doubled its revenue annually for the previous few years yet had theretofore focused its talent recruitment efforts mainly on nabbing software engineers and intrepid salespeople. “When it came to people, sales, marketing, and R&D were way out ahead of G&A, so I knew that my first few months would be dedicated to recruiting,” recalls Morris, who notes that until his arrival, the developer had outsourced its accounting function while relying on fractional CFO services to patch any management voids. “I made five key hires—head of HR, head of technical recruiting, controller, head of FP&A, and our first corporate counsel—all within the first 6 months,” remarks Morris, who believes that hiring can at times benefit from its own momentum. He explains: “Sometimes, when you're in a great situation and your company is growing, the press is great and the buzz is good—and what happens is that one great hire begets another. So, I kind of had this pinwheel going.” Still, what happened next made Morris's energetic hiring spree all the more consequential. During the second half of 2015, as Xamarin was preparing for another capital raise, Microsoft—one of the developer's strategic partners—acknowledged that not only would it be willing to serve as a reference on behalf of Xamarin for the venture investor community but also that it might be interested in partnering with Xamarin to pursue something more strategic. Subsequently, 12 months into Morris's CFO tenure at Xamarin, company management signed a letter of intent (LOI) to sell the business to Microsoft. Looking back, Morris doesn't hesitate to expose some of the drama that preceded Microsoft's signed LOI. “Here were my team and I—with only some 3 to 6 months of working together—and suddenly we were up against one of the most capable technology buyers in the world,” remembers Morris, who today believes that the timing of Xamarin's key hires and the timing of the deal were not unrelated events. “I couldn't have done it by myself,” observes Morris, who points out that there were a number of 20-hour days during the period leading up to the finalization of the deal. Morris notes that the acquisition provided mostly great outcomes for both investors and Xamarin employees—including CEO Nat Friedman, who until late 2021 served as CEO of GitHub, which Microsoft had acquired in 2018. Looking back on the CEO who hired him and the subsequent “pinwheel effect” that within 6 months transformed Xamarin's lines of functional management, Morris highlights a shared mission: “Luckily, Nat was completely on board—he knew what I was inheriting, so he gave me the green light to go ahead and hire.” –Jack Sweeney

RecTech: the Recruiting Technology Podcast
Several New HR Tech Funding Announcements

RecTech: the Recruiting Technology Podcast

Play Episode Listen Later Sep 22, 2022 5:40


The Muse, a values-based job search and career development platform used by 75 million people annually, has announced $8 million in new investment led by MBM Capital, founded and run by managing partners Lauren Bonner and Arun Mittal. The investment is a result of The Muse's progressive vision to bring together key players in the future-of-work arena. https://hrtechfeed.com/the-muse-gets-more-funding/  Knoetic, the #1 Chief People Officer (CPO) platform and “single source of truth” for people analytics, today announced its $36M Series B. https://hrtechfeed.com/knoetic-raises-36m-series-b-empowering-chief-people-officers-to-make-better-data-driven-decisions/ SilkRoad Technology, a talent acquisition leader, announced today the addition of person-to-person texting and SMS notifications in its RedCarpet Onboarding solution and On-demand Talent Campaigns (OTC) at its annual user conference, Connections. These capabilities will enable organizations to source, attract, convert, engage and retain talent amidst the most dynamic talent economy in recent memory. https://hrtechfeed.com/silkroad-technology-announces-texting-features-for-onboarding/ Polywork, the professional network, just announced $28 million in Series B funding. The round was co-led by Nat Friedman (former GitHub CEO) and Caffeinated Capital with participation from existing and new investors including Andreessen Horowitz, Baron Davis, Bungalow Capital, Daniel Gross, Elad Gil, Fidji Simo, Maverick Carter and the founders of Stripe, Lyft, Clubhouse, Instacart, Lattice, Minted, and Divvy Homes. https://hrtechfeed.com/professional-network-platform-gets-28m-in-funding/ Sourcing platform Findem has raised $30 million in Series B financing, bringing its total funding to $37.3 million. Four Rivers and Quarry Ventures led the round, with participation from existing investor Wing Venture Capital. https://hrtechfeed.com/findem-lands-30m-investment/

The Nonlinear Library
LW - Why pessimism sounds smart by jasoncrawford

The Nonlinear Library

Play Episode Listen Later Apr 26, 2022 2:53


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why pessimism sounds smart, published by jasoncrawford on April 25, 2022 on LessWrong. Pessimists sound smart. Optimists make money.–Nat Friedman (quoted by Patrick) I've realized a new reason why pessimism sounds smart: optimism often requires believing in unknown, unspecified future breakthroughs—which seems fanciful and naive. If you very soberly, wisely, prudently stick to the known and the proven, you will necessarily be pessimistic. No proven resources or technologies can sustain economic growth. The status quo will plateau. To expect growth is to believe in future technologies. To expect very long-term growth is to believe in science fiction. No known solutions can solve our hardest problems—that's why they're the hardest ones. And by the nature of problem-solving, we are aware of many problems before we are aware of their solutions. So there will always be a frontier of problems we don't yet know how to solve. Fears of Peak Oil and other resource shortages follow this pattern. Predictions of shortages are typically based on “proven reserves.” We are saved from shortage by the unproven and even the unknown reserves, and the new technologies that make them profitable to extract. Or, when certain resources really do run out, we are saved economically by new technologies that use different resources: Haber-Bosch saved us from the guano shortage; kerosene saved the sperm whales from extinction; plastic saved the elephants by replacing ivory. In just the same way, it can seem that we're running out of ideas—that all our technologies and industries are plateauing. Technologies do run a natural S-curve, just like oil fields. But when some breakthrough insight creates an entirely new field, it opens an entire new orchard of low-hanging fruit to pick. Focusing only on established sectors and proven fields thus naturally leads to pessimism. To be an optimist, you have to believe that at least some current wild-eyed speculation will come true. Why is this style of pessimism repeatedly wrong? How can this optimism be justified? Not on the basis of specific future technologies—which, again, are unproven—but on the basis of philosophical premises about the nature of humans and of progress. The possibility of sustained progress is a consequence of the view of humans as “universal explainers” (cf. David Deutsch), and of progress as driven fundamentally by human choice and effort—that is, by human agency. The opposite view is that progress is a matter of luck. If the progress of the last few centuries was a random windfall, then pessimism is logical: our luck is bound to run out. How could we get that lucky again? If the next century is an average one, it will see little progress. But if progress is primarily a matter of agency, then whether it continues is up to us. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - AMA Conjecture, A New Alignment Startup by Adam Shimi

The Nonlinear Library

Play Episode Listen Later Apr 9, 2022 1:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AMA Conjecture, A New Alignment Startup, published by Adam Shimi on April 9, 2022 on The AI Alignment Forum. Conjecture is a new alignment startup founded by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale alignment research. We have VC backing from, among others, Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, Andrej Karpathy, and Sam Bankman-Fried. Our founders and early staff are mostly EleutherAI alumni and previously independent researchers like Adam Shimi. We are located in London. As described in our announcement post, we are running an AMA this weekend, from today (Saturday, April 9th) to Sunday, April 10th. We will answer any question asked before the end of Sunday, Anywhere on Earth. We might answer later questions, but no guarantees. If you asked questions on our announcement post, we would prefer that you repost them here if possible. Thanks! Looking forward to your questions! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - We Are Conjecture, A New Alignment Research Startup by Connor Leahy

The Nonlinear Library

Play Episode Listen Later Apr 8, 2022 6:12


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We Are Conjecture, A New Alignment Research Startup, published by Connor Leahy on April 8, 2022 on The AI Alignment Forum. Conjecture is a new alignment startup founded by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale alignment research. We have VC backing from, among others, Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, Andrej Karpathy, and Sam Bankman-Fried. Our founders and early staff are mostly EleutherAI alumni and previously independent researchers like Adam Shimi. We are located in London. Of the options we considered, we believe that being a for-profit company with products on the market is the best one for reaching our goals. This lets us scale investment quickly while maintaining as much freedom as possible to expand alignment research. The more investors we appeal to, the easier it is for us to select ones that support our mission (like our current investors), and the easier it is for us to guarantee security to alignment researchers looking to develop their ideas over the course of years. The founders also retain complete control of the company. We're interested in your feedback, questions, comments, and concerns. We'll be hosting an AMA on the Alignment Forum this weekend, from Saturday 9th to Sunday 10th, and would love to hear from you all there. (We'll also be responding to the comments thread here!)

Our Research Agenda
We aim to conduct both conceptual and applied research that addresses the (prosaic) alignment problem. On the experimental side, this means leveraging our hands-on experience from EleutherAI to train and study state-of-the-art models without pushing the capabilities frontier. On the conceptual side, most of our work will tackle the general ideas and problems of alignment like deception, inner alignment, value learning, and amplification, with a slant towards language models and backchaining to local search. Our research agenda is still actively evolving, but some of the initial directions are:

New frames for reasoning about large language models:
What: Propose and expand on a frame of GPT-like models as simulators of various coherent text-processes called simulacra, as opposed to goal-directed agents (upcoming sequence to be published on the AF, see this blogpost for preliminary thoughts).
Why: Both an alternative perspective on alignment that highlights different questions, and a high-level model to study how large language models will scale and how they will influence AGI development.

Scalable mechanistic interpretability:
What: Mechanistic interpretability research in a similar vein to the work of Chris Olah and David Bau, but with less of a focus on circuits-style interpretability and more focus on research whose insights can scale to models with many billions of parameters and larger. Some example approaches might be:
Locating and editing factual knowledge in a transformer language model.
Using deep learning to automate deep learning interpretability - for example, training a language model to give semantic labels to neurons or other internal circuits.
Studying the high-level algorithms that models use to perform, e.g., in-context learning or prompt programming.
Why: Provide tools to implement alignment proposals on neural nets, and insights that reframe conceptual problems in concrete terms.
History and philosophy of alignment:
What: Map different approaches to alignment, translate between them, explore ideas that were abandoned too fast, and propose new exciting directions (upcoming sequence on pluralism in alignment to be published on the AF).
Why: Help alignment research become even more pluralist while still remaining productive. Understanding historical patterns helps put our current paradigms and assumptions into perspective.
We target the Alignment Forum as our main publication outlet,...

Coder Radio
457: Rich Clownshow Services

Coder Radio

Play Episode Listen Later Mar 16, 2022 57:44


Our take on big tech's return to office, AT&T's RCS boondoggle, and the concerning territory tech is racing towards.

Wharton FinTech Podcast
Dan Westgarth, Deel COO - Accelerating Global Hiring of Anyone, Anywhere

Wharton FinTech Podcast

Play Episode Listen Later Feb 25, 2022 35:58


In today's episode, Ally McCloskey sits down with two-time guest Dan Westgarth, COO at Deel. Deel makes growing remote and international teams effortless by simplifying international hiring, compliance, and payments all on one platform. This means businesses can hire anyone, anywhere as independent contractors or full-time employees compliantly, in minutes, and with the ability to pay them in over 120 currencies and cryptos starting with USDC. The company has raised over $631mm from the likes of a16z, Spark Capital, Y Combinator, Coatue, Elad Gil, Nat Friedman, Alexis Ohanian, Daniel Gross, and others, most recently closing their $425mm Series D this past October. Dan joined Deel in 2020, after spending what he calls his "postgraduate education" at Revolut. For more on that chapter of his life, check out the link below to his 2019 WFT Podcast debut.

In this conversation, Ally and Dan discuss:
• Dan's time at Revolut and his lessons learned leading the early U.S. expansion
• Meeting the founders of Deel and building conviction that they'd be the team to tackle global hiring and payroll
• How Deel replaces long email chains, generic contracts, individual wire transfers, and foreign subsidiaries and instead enables quick and compliant hiring of employees living anywhere in the world
• The effects of the pandemic on demand for foreign talent
• How Deel thinks about company culture when it has 450 employees working in more than 50 countries
• Why Dan values investor diligence as it relates to his own technical operations
• And a lot more

About Dan (twitter.com/Dan_Westgarth)
Fintech enthusiast Dan Westgarth currently serves as Chief Operating Officer at Deel, where he manages Deel's Business, Operations, Legal, and Compliance teams. Before joining Deel, Dan was one of challenger bank Revolut's earliest employees, most recently in the role of General Manager for North America. Outside of his day-to-day at Deel, Westgarth spends his time making early-stage angel investments into interesting technology companies and helping them scale.
Dan's 2019 WFT Podcast debut: https://soundcloud.com/wft/dan-westgarth-north-america-general-manager-for-revolut

About Deel
Deel is a global payroll solution that helps businesses hire anyone, anywhere. Using a tech-enabled self-serve process, companies can now hire independent contractors or full-time employees in over 150 countries, compliantly and in minutes. Today, Deel serves 6,000+ customers from SMBs to publicly traded companies, including Coinbase, Shopify and Dropbox, among others, and just recently announced that customers can now fund their payroll in crypto. For more on its journey, check out What's the deal with Deel?

As always, for more Fintech insights and opportunities to collaborate, please find us below:
WFT Blog: medium.com/wharton-fintech
WFT Twitter: twitter.com/whartonfintech
WFT Home: www.whartonfintech.org
WFT LinkedIn: www.linkedin.com/company/wharton-fintech-club/
Suggest a Podcast Guest: airtable.com/shrdbokQPxAJzgVh7
Ally's Twitter: twitter.com/AllyMcCloskey
Ally's LinkedIn: linkedin.com/in/allyraemccloskey/

The Stack Overflow Podcast
250 words per minute on a chorded keyboard? Only if you can think that fast.

The Stack Overflow Podcast

Play Episode Listen Later Nov 16, 2021 24:40


GitHub's CEO, Nat Friedman, recently stepped down to return to his startup roots. Chief Product Officer Thomas Dohmke will move into the CEO role. The Verge reviewed our no-longer-a-joke April Fool's keyboard. How many keyboard layouts are there anyway? Including non-English layouts, there are lots. Do you have a mind's eye? How about an inner monologue? We explore why some people have a voice in their head when they think and some don't.

BrazilJS
Nat Friedman is no longer CEO of GitHub - Weekly #407

BrazilJS

Play Episode Listen Later Nov 11, 2021 20:36


Edition 407 of the BrazilJS Weekly has arrived! This week Nat Friedman, CEO since Microsoft's purchase of GitHub, announced that he is stepping down. Lots of news in Chrome's devtools, including Record, replay, and user flows. We also have new versions of Angular and TypeScript, plus an article on the new version of Node.js. Get full access to BrazilJS at www.braziljs.org/subscribe

Coder Radio
439: Github NoPilot

Coder Radio

Play Episode Listen Later Nov 10, 2021 59:13


Microsoft has a bunch of new goodies for developers, but Mike is becoming more and more concerned about an insidious new feature.

Linux Action News
Linux Action News 214

Linux Action News

Play Episode Listen Later Nov 8, 2021 17:28


Significant changes at GitHub, Ubuntu starts work on a new desktop tool, why WirePlumber is a big deal, and we bust some Red Hat FUD.

Software Defined Talk
Episode 328: Your MOM is a SaaS

Software Defined Talk

Play Episode Listen Later Nov 5, 2021 62:00


This week we discuss HashiCorp's S1, AWS Earnings and highlights from Microsoft Ignite. Plus, Coté teaches us a new Dutch phrase.

Rundown: Cloud software vendor HashiCorp files for IPO as investors pour money into high-growth tech stocks (https://www.cnbc.com/2021/11/04/cloud-software-vendor-hashicorp-files-for-ipo.html). Coté's highlights (https://twitter.com/cote/status/1456344043433177091). Understanding the 2021 State of Open Source Report (https://tanzu.vmware.com/content/blog/state-of-open-source-report-highlights)

Amazon: Amazon badly misses on earnings and revenue, gives disappointing fourth-quarter guidance (https://www.cnbc.com/2021/10/28/amazon-amzn-earnings-q3-2021.html) Amazon Web Services tops analysts' estimates on profit and revenue (https://www.cnbc.com/2021/10/28/aws-earnings-q3-2021.html) Amazon Is The Flywheel, AWS Is The Cash Register (https://www.nextplatform.com/2021/10/29/amazon-is-the-flywheel-aws-is-the-cash-register/) A fully functional local AWS cloud stack. Develop and test your cloud & Serverless apps offline! (https://github.com/localstack/localstack) Your hybrid, multicloud, and edge strategy just got better with Azure (https://azure.microsoft.com/en-us/blog/your-hybrid-multicloud-and-edge-strategy-just-got-better-with-azure/) Compliance in a DevOps Culture (https://martinfowler.com/articles/devops-compliance.html)

Relevant to your interests: Abacus.ai snags $50M Series C as it expands into computer vision use cases (https://techcrunch.com/2021/10/27/abacus-ai-snags-50m-series-c-as-it-expands-into-computer-vision-use-cases/) NeuVector is excited to announce we are joining SUSE (https://www.suse.com/c/accelerating-security-innovation/) Monitor Your Azure Environment Using Amazon Managed Grafana (https://www.youtube.com/watch?v=e5z4ysfz_gA) Facebook's new name will be Meta (https://www.theverge.com/2021/10/28/22745234/facebook-new-name-meta-metaverse-zuckerberg-rebrand) New product: Raspberry Pi Zero 2 W on sale now at $15 - Raspberry Pi (https://www.raspberrypi.com/news/new-raspberry-pi-zero-2-w-2/) Universal Search & Productivity App | Command E (https://getcommande.com/?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosprorata&stream=top) Software services firm Zendesk to buy SurveyMonkey parent for nearly $4 bln (https://www.reuters.com/technology/software-services-firm-zendesk-buy-surveymonkey-parent-nearly-4-bln-2021-10-28/) Kalshi (https://kalshi.com/markets) Popular gaming platform Roblox back online after multi-day crash (https://www.marketwatch.com/story/popular-gaming-platform-roblox-suffers-multi-day-crash-01635713002) Dell spins off $64 billion VMware as it battles debt hangover (https://arstechnica.com/information-technology/2021/11/dell-spins-off-64-billion-vmware-as-it-battles-debt-hangover/) BMC Unveils New Data Management and Analytics Capabilities (https://thenewstack.io/bmc-helix-and-control-m-data-management-and-analytics/) Squid Game Cryptocurrency Scammers Make Off With $2.1 Million (https://gizmodo.com/squid-game-cryptocurrency-scammers-make-off-with-2-1-m-1847972824) AI programming tool Copilot helps write up to 30% of code on GitHub (https://www.axios.com/copilot-artificial-intelligence-coding-github-9a202f40-9af7-4786-9dcb-b678683b360f.html) Introducing the Free Java License (https://blogs.oracle.com/java/post/free-java-license) Backblaze's IPO a test for smaller tech concerns (https://techcrunch.com/2021/11/02/backblazes-ipo-a-test-for-smaller-tech-concerns/) Happy 1.0, Knative (https://off-by-one.dev/happy-1-0-knative/) A Return to the General Purpose Database (https://redmonk.com/sogrady/2021/10/26/general-purpose-database/) Microsoft Teams enters the metaverse race with 3D avatars and immersive meetings (https://www.theverge.com/e/22523015) Nat Friedman to step down as head of Microsoft's GitHub (https://www.zdnet.com/article/nat-friedman-to-step-down-as-head-of-microsofts-github/) Microsoft launches Google Wave (https://techcrunch.com/2021/11/02/microsoft-launches-google-wave/)

Nonsense: Apple's worst shipping delay is for a $19 polishing cloth — Engadget (https://apple.news/A5hFyYAq3RgG35nJT1AX6bA) Aussie++ (https://aussieplusplus.vercel.app/) Microsoft resurrects Clippy again after brutally killing him off in Microsoft Teams (https://www.theverge.com/2021/11/1/22756973/microsoft-clippy-microsoft-teams-stickers-return) Allbirds shares surge 60% in eco-friendly shoe maker's market debut (https://www.cnbc.com/2021/11/03/allbirds-ipo-bird-to-start-trading-on-the-nasdaq.html)

Sponsors: strongDM — Manage and audit remote access to infrastructure. Start your free 14-day trial today at strongdm.com/SDT (http://strongdm.com/SDT) CBT Nuggets — Training available for IT Pros anytime, anywhere. Start your 7-day Free Trial today at cbtnuggets.com/sdt (https://cbtnuggets.com/sdt)

Conferences: MongoDB.local London 2021 (https://events.mongodb.com/dotlocallondon) - November 9, 2021. Coté speaking at DevOops (https://devoops.ru/en/) (Russia), Nov 11th: “Kubernetes is not for developers…?” (https://devoops.ru/en/talks/kubernetes-is-not-for-developers/) THAT Conference comes to Texas January 17-20, 2022 (https://that.us/activities/call-for-counselors/tx/2022)

Listener Feedback: Mailed stickers to Stephan in Berlin. Brian wants you to work at Red Hat as a Senior Product Manager (https://us-redhat.icims.com/jobs/88701/senior-product-manager) or Principal Product Manager (https://us-redhat.icims.com/jobs/89053/principal-product-manager) in Security.

SDT news & hype: Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), LinkedIn (https://www.linkedin.com/company/software-defined-talk/) and YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=823) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté's book, Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)!

Recommendations: Brandon: Success Equation (http://success-equation.com) — The spiritual sequel to “The Halo Effect”. Podcast interview with the author (Michael Mauboussin Master Class — Moats, Skill, Luck, Decision Making and a Whole Lot More | Acquired Podcast). YouTube talk by the author (https://www.youtube.com/watch?v=1JLfqBsX5Lc) Paradox of Skill (https://research-doc.credit-suisse.com/docView?language=ENG&format=PDF&source_id=em&document_id=805456950&serialid=LsvBuE4wt3XNGE0V%2B3ec251NK9soTQqcMVQ9q2QuF2I%3D) Matt: The Art and Soul of Dune (Companion Book Music) (https://open.spotify.com/album/0FGr97xSOQLD596ZebfU1T?si=9rTrMK_wTiWZOtwiKfvZMA) Dune (the book) (https://amzn.to/3whLKHx) Coté: LaserWriter II (https://www.goodreads.com/en/book/show/56269270-laserwriter-ii). Also, check out my Tiny Tanzu Talk videos playlist (https://www.youtube.com/playlist?list=PLk_5VqpWEtiV6sJUlKx_4dse8U2tLjjn0) - 18 months of video madness. Also, I watch Frozen from three to ten times a day now with Dutch subtitles turned on. So, I'm trying to memorize “als een kip met het gezicht van een aap.” (https://translate.google.com/?sl=auto&tl=nl&text=like%20a%20chicken%20with%20the%20face%20of%20a%20monkey&op=translate&hl=en)

Photo Credits: Show Art (https://unsplash.com/photos/UMfGoM67w48) Hashicorp S1 Screenshot Show Art (https://twitter.com/cote/status/1456349608775491585re) Banner Header (https://unsplash.com/photos/dwBZLRPhHjc)

mixxio — daily technology podcast

Stripe doesn't want amulets or spells / Biological treatment plants for producing hydrogen / IKEA's noise-blocking curtains / Google News returns to Spain / The US blacklists NSO / A Nikon camera with no mechanical shutter / Money laundering on Twitch

Sponsor: Kärcher presents its new collection of cleaning hardware for your home. On its website https://www.kaercher.com/es/ you will find a powerful cordless electric mop https://www.kaercher.com/es/home-garden/fregonas-electricas/fc-7-sin-cable-10557300.html, a steam cleaner https://www.kaercher.com/es/home-garden/limpiadoras-de-vapor/sc-4-easyfix-15124500.html that eliminates 99.999% of bacteria, and multi-purpose vacuums https://www.kaercher.com/es/home-garden/aspiradores-multifuncionales/aspiradores-multiuso/wd-6-p-premium-13482710.html for cleaning garages, basements, and much more. If you buy before November 15th, you get their KB-5 electric broom https://www.kaercher.com/es/home-garden/escoba-electrica/kb-5-12580000.html for free.

The Sourcegraph Podcast
The future of the code economy, with Devon Zuegel, creator of GitHub Sponsors

The Sourcegraph Podcast

Play Episode Listen Later Aug 5, 2021 72:36


Devon Zuegel, the creator of GitHub Sponsors, tells the story of how an email rant to Nat Friedman on the eve of Microsoft's acquisition of GitHub turned into the most popular way to fund open source. She also shares her thoughts on different models of paying for software and where the future of the code economy is headed.
Show notes & transcript: about.sourcegraph.com/podcast/devon-zuegel/
Sourcegraph: about.sourcegraph.com

Hacker Noon Podcast
How Many Devs Does It Take to Fly on Mars?

Hacker Noon Podcast

Play Episode Listen Later Apr 21, 2021 35:39


What's David's hot take on Facebook's pivot to audio? Hands up if you think the Mars Helicopter video is a deep fake! ALSO: are kids growing up during a global panini more likely to get face tattoos?

Björeman // Melin
Episode 244: Like going back in time

Björeman // Melin

Play Episode Listen Later Feb 7, 2021 115:08


Episode 244 - an exclusive offer for anyone tired of shoveling snow. A MAXIMAL recording. Jocke sat idling. Gaming chairs - are they perfect for working from home? Christian: back on his feet! Minaintyg.se - things you learn while on sick leave. Jocke has tried Stensåkra falukorv. Not overly impressed. 2/5BM. Why is Jocke still with Hallon even though he is no longer a customer? Iller: "They are allowed to keep information for seven years because of bookkeeping and such." Jocke's ten-gig trick. A fair amount of snow in Stockholm. A fair amount of snow in Jönköping. macOS 11.2 seems at last to have fixed the problem with Jocke's USB-C-to-HDMI-connected display, which didn't always work before. Fredrik is currently running his wireless trackpad wirelessly. An old iMac in a shed. Hush - browse without cookie popups on Mac and iOS. What a relief. Microsoft Edge now supports M1 Macs. iOS 14.5 beta - unlock your iPhone with your Apple Watch if you're wearing a mask. Printers for the home: what should you buy? The Amiga-RPi-HDMI project takes a step forward: a Raspberry Pi Zero has been purchased. Christian has grown tired of Facebook and Twitter but watches more YouTube instead. Spotify raises its prices - Jocke abandons ship. Bezos steps down as CEO of Amazon and becomes chairman of the board instead. Film and TV: Snowpiercer S2 starts well. WandaVision - now it's not just a sitcom. Links: Anders reads The Lord of the Rings, Mandalorian, Din hjärna, Mina intyg, Crohn's disease, Stensåkra falukorv, Andersson & Tillmans falukorv, Andersson & Tillmans Roslagsgrill, Härryda Karlsson, Vaggeryds falukorv, Erikson chark in Nossebro, The windows of Siracusa county, The barn, Kioskenkorva, ColdMac 10, BOINC, Hush, Hush on GitHub, Nat Friedman, Microsoft Edge now on M1, the South African web browser, the iOS 14 beta with watch unlock, HP LaserJet 4M, AirPrint, the printer Jocke thinks Christian should buy, Amiga-Digital-Video, svenskalag.se, Jacob Guidol, NetNewsWire, Feedly, Sveriges mästerkock, Efter stängning, Markus Aujalay, Markus's YouTube channel, Uppdrag mat, Leif Mannerström on YouTube, Spotify raises the price of its Premium subscription, Bezos steps down as Amazon CEO, Larry Ellison, Snowpiercer, WandaVision. Fredrik Björeman, Joacim Melin, and Christian Åhs. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-244-som-att-backa-i-tiden.html.

Taking You To The Top
Episode 60: Teamway - Co-Founder & CEO - Søren Nørgaard

Taking You To The Top

Play Episode Listen Later Jan 20, 2021 35:40


Top 3 Value Bombs:
1. Allow yourself to act intuitively
2. A 100% remote workforce is the new trend since the pandemic
3. Structure your validation process to ensure your success

The Famous Five:
1. What's your favorite business book? Answer: "Let My People Go Surfing: The Education of a Reluctant Businessman by Yvon Chouinard"
2. Is there a CEO you're following or studying? Answer: "GitHub: Founders - Tom Preston-Werner, Chris Wanstrath, Scott Chacon, P. J. Hyett & CEO - Nat Friedman"
3. What's your favorite online tool for growing your business? Answer: "Confluence"
4. If you could give your 20-year-old self a piece of advice, what would it be? Answer: "Trust your intuition."
5. How many hours of sleep do you get every night? Answer: "6-9 hours."

Episode Information: https://www.takingyoutothetop.io/episodes/episode-60-teamway-co-founder-ceo-søren-nørgaard

Hacker News TLDR
[#16] Jan 6, 2021

Hacker News TLDR

Play Episode Listen Later Jan 6, 2021 16:08


Kenny learns the difference between Bridgerton and Paddington. We also talk DALL·E, Iran, behavioral psychology, and Nat Friedman being pretty cool.

How I Raised It - The podcast where we interview startup founders who raised capital.
Ep. 176 How I Raised It with Brian Vallelunga of Doppler

How I Raised It - The podcast where we interview startup founders who raised capital.

Play Episode Listen Later Dec 29, 2020 52:46


Produced by Foundersuite.com, "How I Raised It" goes behind the scenes with startup founders who have raised capital. This episode is with Brian Vallelunga of Doppler.com, a platform for managing environment variables (e.g. API keys) for developers. In this episode, Brian talks about raising a pre-seed round from Kleiner while he was still working at Uber, applying to Y Combinator for the 6th or 7th time, pitching a long-term massive vision (even in the early days), resisting the temptation of initial offers and the search for "partners not capital," how he made his investor intros go viral, pitching Peter Thiel, the importance of managing dilution, and much more. The Company raised $2.3 million in a seed round led by Sequoia Capital. Abstract Ventures, Greylock Partners, Kleiner Perkins, Soma Capital, Nat Friedman, Aaron Levie, Dylan Field, Ben Porterfield, Jeremy Stoppelman, Kevin Hartz, Jeff Holden, Greg Brockman, Jeffrey Queisser and Peter Thiel also participated in the round. How I Raised It is produced by Foundersuite, makers of software to raise capital and manage investor relations. Foundersuite's customers have raised over $2.5 Billion since 2016. Create a free account at www.foundersuite.com

State of Mind
S2 E06: Rolling the dice more often

State of Mind

Play Episode Listen Later Dec 15, 2020 30:16


Last Week in .NET
A Magic String that takes down your system

Last Week in .NET

Play Episode Listen Later Sep 28, 2020 5:42


Microsoft Ignite was the 22nd - 24th of September and the news is here. Lots of Azure, and lots of releases that large enterprises and governments would love.
Top Ten APIs in .NET 5.0. Good info here, and lots you may not have known about.
How to build a Database application in Blazor, Part 3. Everything old is new again. Angular is the new WebForms, Blazor is the new Angular. Here we are, partying like it's 2009.
Visual Studio for Mac now supports iOS 14 and Xcode 12.
The magic phrase is redacted. Apparently if you make that text in the above tweet your password, you can find out if anyone stores your password in plain text.
Microsoft talks about what's new with the Windows Subsystem for Linux in September 2020. If we ever get to the year of Linux on the desktop, it will be through WSL.
Ginny Caughey talks about Project Reunion. I haven't quite figured out what they want it to do, but as long as it's "simplify the sheer number of ways you can develop for Windows", I'm in.
Microsoft releases Microsoft.Data.Sqlite 5.0 RC1 for Entity Framework Core.
JetBrains ReSharper 2020.3 EAP is out, with support for C# 9 features like top-level statements.
The Desktop Community Standup happened on September 24th, and this hour-long standup dives into WinForms and OSS. In case it isn't apparent, Microsoft uses the word 'standup' loosely.
Microsoft's Channel 9 goes into Microsoft Identity and how to get started with it in this 15-minute video. Fifteen minutes. Around the time a standup should take. We see you, Channel 9.
Matthew Leibowitz writes a deep dive into System.CommandLine. If you want to write a command line app in .NET, check out System.CommandLine. Finally there's a way to deal with parsing command line arguments that doesn't involve reinventing the wheel or using a go-clone library.
Nat Friedman, CEO of GitHub, unfollows everyone in an attempt to hear less about GitHub putting children in cages. On September 24th, Nat tweeted "Github Stories

MultipLX
Can we be enslaved to code?

MultipLX

Play Episode Listen Later Jul 30, 2020 24:57


Can the name of a source code branch have an impact on our society? That is the subject of the discussions and exchanges that have recently animated the developer community. Our developer Hervé Bérenger sees in it a sign of the community's renewed vitality. A philosophical exchange that will show you the "back kitchen" of the digital world, and therefore of our world. If a particular topic is close to your heart, let us know and we will interview one of our experts for you: multiplx@fabernovel.com

TWiT Specials (Video HD)
News 354: Microsoft Build 2020

TWiT Specials (Video HD)

Play Episode Listen Later May 19, 2020 95:38


Microsoft Build 2020 Satya Nadella Vision Keynote, Imagine Cup, and Scott Hanselman Developer Keynote. Microsoft CEO Satya Nadella gave his annual Vision Keynote at Build 2020. Nadella talked briefly about GitHub, Power Platform, Azure, Windows, Teams, and Project Reunion. He interviewed Greg Bowman, Director of Folding@Home, talking about how Folding@Home has been used to fight Alzheimer's Disease and cancer, and is currently being used to fight Covid-19. Then he featured five artists at the San Francisco Conservatory of Music who have been using Teams to connect virtually during the current shelter-at-home order. After Nadella's keynote, the conference moved on to the Imagine Cup, a tech competition for college students. The winning team, Hollow from Hong Kong, is developing a tool to help throat cancer victims recover their voices. Build then moved on to Scott Hanselman, who interviewed several developers. He first interviewed Kayla Clarence about the Windows Subsystem for Linux, the new Windows Terminal, the Winget Windows Package Manager, and using Teams to code in the terminal collaboratively. Then he called Nat Friedman, CEO of GitHub. Friedman talked about GitHub's new free-for-everybody plan, mobile app, and integration with Visual Studio through Codespaces. Then he interviewed Allison Buchholtz-Au, Program Manager of Codespaces at Microsoft, who went in-depth on Codespaces. Scott also talked about .NET Core 3.1 and changes to Visual Studio 2019. Hosts: Leo Laporte and Mikah Sargent. Download or subscribe to this show at https://twit.tv/shows/twit-news. Sponsor: LastPass.com/twit

LINUX Unplugged
349: Arm: A New Hope

LINUX Unplugged

Play Episode Listen Later Apr 14, 2020 52:55


We build the server you never should, a tricked out Arm box, and push it to the limit with a telnet torture test. Plus what we're playing recently, community news, a handy self-hosted music pick, and more. Special Guests: Alan Pope and Brent Gervais.

Björeman // Melin
Episode 178: RSS is fun again

Björeman // Melin

Play Episode Listen Later Sep 7, 2019 68:54


Fredrik reports from a safe location in Spain. News about iCloud will have to wait. This week's unexpected VR coziness - Fredrik browses the web in his Oculus Quest. Jocke's stomach is acting up (the day after recording he was admitted to hospital). Jocke looks at Nginx on CentOS 7, compiles HAProxy on CentOS 7, database clusters with MariaDB and Galera, and more. 2.5 hours of conversation with John Carmack, anyone? Unix turns 50 - an epic (and slightly too short) article on Ars Technica. iPhone event next week. Fredrik has bought a house. Jocke talks about unexpected expenses and misbehaving Macs. Shouldn't computers be able to be a little more exciting? NetNewsWire one week in - RSS is fun again! (Gruber's latest episode with Brent Simmons is great. This list, where the team behind NetNewsWire speculates about the criticism they might get once they released the application, is also fun.) The web we lost, a highly topical article from 2012. Fredrik is thinking about building a very small Mastodon app; Jocke is in favor. Links: Sitges, Full Stack Fest, Brainshare, Nat Friedman, Miguel de Icaza, Midnight Commander, Charlie Christiansen, Arne Anka, Bombad och sänkt, Firefox Reality, Oculus Quest, Instapaper, Jocke's stomach really is out of order, Galera, Puppet, Ansible, Graylog, John Carmack, The Joe Rogan Experience, Joe Rogan talks with John Carmack, Ars article about Unix, Roblox, Fortnite, For All Mankind, The Morning Show, NetNewsWire, Brent Simmons on The Talk Show, The web we lost, Whalebird, TheDesk, two other Mastodon clients for Mac: Hyperspace and Sax, Day of the programmer. Two nerds - one podcast. Fredrik Björeman and Joacim Melin discuss everything that makes life worth living. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-178-rss-ar-roligt-igen.html.

44BITS Podcast - Cloud, Development, Gadgets
stdout_002.log: New Apple products, Amazon Prime Day outage

44BITS Podcast - Cloud, Development, Gadgets

Play Episode Listen Later Nov 4, 2018 67:14


In the second episode we talked about Apple's new product announcements, GitHub's outage postmortem, and AWS Aurora, a database for the cloud era. Participants: @seapy, @raccoonyy, @nacyo_t. [Roundup] Apple's announcements at a glance - what changed in the new MacBook Air, Mac mini, and iPad Pro. GitHub outage report. MySQL High Availability at GitHub. Article claiming that moving Amazon's service DB from Oracle to Aurora was one of the causes of the Prime Day outage - CNBC. Tweet about the CNBC reporter calling early in the morning to complain - Andy Pavlo. AWS Seoul region instance shortage. Amazon EC2 now offers on-demand capacity reservations. Datadog APM. New Relic APM. Inauguration greeting from GitHub's new CEO Nat Friedman. Nat Friedman - Wikipedia. IBM acquires Red Hat. Elastic lists on the New York Stock Exchange. The Skip language.

Bad Voltage
2×41: To Star Repeat 5 Forward 100 Right 144

Bad Voltage

Play Episode Listen Later Nov 2, 2018 67:52


Stuart Langridge, Jono Bacon, and Jeremy Garcia present Bad Voltage, in which a proof through poetry is presented and utterly fails to move the audience, Red Hat maintains a ton of things, and: [00:01:20] IBM acquires Red Hat; let the hot takes begin [00:24:30] Microsoft deal to acquire Github completes, and Nat Friedman is now […]

Parallel Passion
10: Don Goodman-Wilson

Parallel Passion

Play Episode Listen Later Jul 19, 2018 63:37


Show Notes
Microsoft to acquire GitHub (https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion/) I’m Nat Friedman, future CEO of GitHub. AMA. (https://www.reddit.com/r/AMA/comments/8pc8mf/im_nat_friedman_future_ceo_of_github_ama/) ScreenHero joins Slack (http://blog.screenhero.com/post/109337923751/screenhero-joins-slack) Zoom (https://zoom.us/) Magic: The Gathering (https://en.wikipedia.org/wiki/Magic:_The_Gathering) magique - A tool that applies genetic algorithms to building Magic: The Gathering decks. (https://github.com/DEGoodmanWilson/magique) Latent Dirichlet allocation (https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) MarI/O - Machine Learning for Video Games (https://www.youtube.com/watch?v=qv6UVOQ0F44) Swackett (https://sweaterjacketorcoat.com/) Boodler: a programmable soundscape tool (https://github.com/erkyrath/boodler) Correlation does not imply causation (https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation) A Guide to the Good Life: The Ancient Art of Stoic Joy (https://www.amazon.com/o/ASIN/1522632735/parpaspod-20)
Recommendations
Short stories by Jorge Luis Borges (https://en.wikipedia.org/wiki/Category:Short_stories_by_Jorge_Luis_Borges) The City & The City (https://www.amazon.com/o/ASIN/034549752X/parpaspod-20) The Good Place (https://www.netflix.com/title/80113701)
Don Goodman-Wilson
Twitter (https://twitter.com/DEGoodmanWilson) GitHub (https://github.com/DEGoodmanWilson) Personal Page (https://don.goodman-wilson.com/)
Parallel Passion
Patreon (https://www.patreon.com/parpaspod) Twitter (https://www.twitter.com/parpaspod) Instagram (https://www.instagram.com/parpaspod) Facebook (https://www.facebook.com/parpaspod)
Credits
Tina Tavčar (https://twitter.com/tinatavcar) for the logo Jan Jenko (https://twitter.com/JanJenko) for the music

Lambda3 Podcast
Lambda3 Podcast 102 – GitHub and the acquisition by Microsoft

Lambda3 Podcast

Play Episode Listen Later Jul 6, 2018 84:11


In this episode we tackle the controversial acquisition of GitHub by Microsoft. Is it good, is it bad? What changes? Come talk with us! Podcast feed: www.lambda3.com.br/feed/podcast Feed with technical episodes only: www.lambda3.com.br/feed/podcast-tecnico Feed with non-technical episodes only: www.lambda3.com.br/feed/podcast-nao-tecnico Orlando, Emmanuel, Vinicius and Giovanni Agenda: The facts The history of GitHub and Git GitHub's financial situation and the market's reaction The community's reaction The move to GitLab Approach: did Microsoft get the way the announcement was made wrong? Memes What industry references said about it. FUDs What changes? (today, in the future) What we expect Links mentioned: Post on Microsoft's blog about the acquisition Post on GitHub's blog about the acquisition Post by Nat Friedman (future CEO of GitHub) about the acquisition Reaction from the Linux Foundation Participants: Emmanuel Brandão - @egomesbrandao Giovanni Bassi - @giovannibassi Orlando Gomes - #orlandomaiuri Vinicius Quaiato - @vquaiato Editing: Luppi Arts Credits for the music used in this show: Music by Kevin MacLeod (incompetech.com) licensed under Creative Commons: By Attribution 3.0 - creativecommons.org/licenses/by/3.0

Software Defined Talk
Episode 138: 8 Duffle bags, some permitted food enhancer and, GitHub goes to Redmond

Software Defined Talk

Play Episode Listen Later Jun 7, 2018 52:49


GitHub got bought, so we break down what it all means for devs and open source. Matt Ray offers expert tips on relocating your family abroad as Coté prepares to move to Amsterdam. Finally, we announced the first live in-person Software Defined Talk meetup in July somewhere in Austin. Don’t miss it. GitHub got bought Price: $7.5bn in stock. Getting Microsoft stock now is probably good, should see good growth over the next three to five years (the golden handcuff period, etc.). See the overview deck from Microsoft (https://view.officeapps.live.com/op/view.aspx?src=https://c.s-microsoft.com/en-us/CMSFiles/calldeck.pptx?version=f3eef72b-35d3-95b2-4fda-73a47f805c7f). Very qualitative, not much (or any) business case numbers stuff. Will “operate independently,” Microsoft’s Nat Friedman to be CEO of GitHub, reporting up to Scott Guthrie. Does this imply Microsoft will be moving OSS stuff to the GitHub business? How does this fit with TFS, whatever “Sourcesafe” is now? Coté has no idea about the current state of that business, esp. w/r/t git. Slides say there’ll be the usual seamless integration in VisualStudio, also some marketplace thing additions (presumably, pull in GitHub repos). Don’t forget the private cloud version of GitHub, plus using it for cloud native/DevOps config storage and release management stuff. Microsoft’s blog post (https://blogs.microsoft.com/blog/2018/06/04/microsoft-github-empowering-developers/), GitHub’s blog post (https://blog.github.com/2018-06-04-github-microsoft/). Google Was Bidding Against Microsoft For GitHub, Report Says | CRN Mobile (https://m.crn.com/news/cloud/300104596/google-was-bidding-against-microsoft-for-github-report-says.htm) Kubernetes Korner Four years after release of Kubernetes 1.0, it has come a long way (https://techcrunch.com/2018/06/06/four-years-after-release-of-kubernetes-1-0-it-has-come-long-way/) Enterprise hits and misses - AI hits the curve, SAP subverts the two-speed enterprise (https://diginomica.com/2018/06/06/enterprise-hits-and-misses-ai-hits-the-curve-sap-subverts-the-two-speed-enterprise/) This episode brought to you by: Datadog! This episode is sponsored by Datadog, a monitoring platform for cloud-scale infrastructure and applications. Sign up for a free trial (https://www.datadoghq.com/ts/tshirt-landingpage/?utm_source=Advertisement&utm_medium=Advertisement&utm_campaign=SoftwareDefinedTalkRead-Tshirt) at www.datadog.com/sdt (http://www.datadog.com/sdt) This week Datadog also wants you to know about their upcoming conference Dash, in NYC on July 11th-12th (https://www.dashcon.io/?utm_source=Advertisement&utm_medium=GoogleAds&utm_campaign=GoogleAds-Dash&utm_content=Dash&utm_keyword=%2Bdatadog%20%2Bconference&utm_matchtype=b&gclid=CjwKCAjw8r_XBRBkEiwAjWGLlH3LXgGYu4iPzwOh8gkrY5NAQ1B9dWqB2OukaISujKyVCU4_5sUUchoCfT8QAvD_BwE). You can register to attend at https://www.dashcon.io/sdt; use the discount code DASHSDT and save 20%. DevOpsDays MINNEAPOLIS - JULY 12-13, 2018 Get a 20% discount for one of the best DevOpsDays on the planet, DevOpsDays Minneapolis (https://www.devopsdays.org/events/2018-minneapolis/welcome/). It's July 12th to 13th, and you can bet it'll be worth your time. If you're new to DevOps you'll get an idea of what it is, how it's practiced, and how to get started. If you're an old pro, you'll dive down into topics and catch up with all the other old hands. Code: SDT2018 (https://www.devopsdays.org/events/2018-minneapolis/registration/). Conferences, et al.
June 28th and 29th, 2018 - Coté at DevOpsDays Amsterdam (https://www.devopsdays.org/events/2018-amsterdam/welcome/) - come get a sticker! Also at some World Cup things in Cologne and Munich; email if you’re a VP type and interested. Sep 24th to 27th - SpringOne Platform (https://springoneplatform.io/), in DC/Maryland (crabs!); get $200 off registration with the code S1P200_Cote. Also, check out the Spring One Tour - coming to a city near you (https://springonetour.io/)! The Software Defined Talk Meetup Would you come to a three-hour SDT event in July in Austin, TX? Email brandon@softwaredefinedtalk.com and he will add you to the invite. SDT news & hype Check out Software Defined Interviews (http://www.softwaredefinedinterviews.com/), our new podcast. Pretty self-descriptive, plus the #exegesis podcast we’ve been doing, all in one, for free. Join us in Slack (http://www.softwaredefinedtalk.com/slack). Buy some t-shirts (https://fsgprints.myshopify.com/collections/software-defined-talk)! DISCOUNT CODE: SDTFSG (20% off) Send your name and address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. If you run into Matt he’ll give you one too! Recommendations Brandon: New York Times Crossword App (https://www.nytimes.com/subscription/games/lp897H9.html?campaignId=47LFJ&gclid=Cj0KCQjwjN7YBRCOARIsAFCb934Mi7pJWCigB6YfV4e-84hr9DVOcSkDi4jDU2N5rCB_ZMZU-6fdR8YaAtr3EALw_wcB&dclid=CNnNuMLwv9sCFVaraQodG6MJeg) Matt: Killing Eve (https://www.imdb.com/title/tt7016936/) Coté: that Singaporean soup mix (Song Fa bak kut teh (https://www.songfa.com.sg/)). What are “permitted food enhancer (E621),” though? photo credit (https://www.flickr.com/photos/unfoldedorigami/2736239796/in/photolist-5aMWLd-duFvra-W8Gn3s-afnmXg-dZXZXS-fGqUk2-fGHtyG-fGqTW8-7PPsrg-bZNbZ5-VuvUXX-aVzLAB-VuvVqk-fkbLc-7hmmSo-KaJ1Ut-ddSJg-VuvWjV-7PPsRn-dPuC87-Nr18H-7PSLgo-dPuCmQ-iDRBf-96BLzJ-d6KvqC-VuvVfk-WEqbz5-8m5g3h-VrN1cm-W8HypJ-dPoZsP-pQtEWF-W8HiMN-VuvLy2-VrN1xm-ch9Fyf-dRRnBN-VrMZTL-WEpY7u-rfTDR9-d6KvdC-5QFJ9Z-fb82pi-bViWDu-7PPs4v-f5zwWr-a3T15k-9jZ8QT-dPoZwi)

Kubernetes Podcast from Google
Skaffold, with Matt Rickard

Kubernetes Podcast from Google

Play Episode Listen Later Jun 5, 2018 18:34


On this week's Kubernetes Podcast, Adam and Craig talk to Matt Rickard about Skaffold. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod News of the week Microsoft to acquire GitHub for $7.5bn New CEO is Nat Friedman, previously of Ximian and Xamarin Huge uptick in GitLab migrations - over 100,000 repositories migrated Istio 0.8 released New traffic management model Multiple clusters in the same Istio mesh Envoy v2 APIs VPC native clusters in Google Kubernetes Engine Kustomize: Launch blog post Kustomize on GitHub How to get your talk accepted at Kubecon Shanghai CFP Seattle CFP Links from the interview Skaffold GitHub page Announcement blog Matt Rickard on Twitter

Devchat.tv Master Feed
189 iPS iPhreaks Take Manhattan - Nat Friedman

Devchat.tv Master Feed

Play Episode Listen Later Feb 8, 2017 33:27


On episode 189 of iPhreaks, Andrew and Jaim talk to Xamarin Founder Nat Friedman in New York City during Microsoft Connect(). Nat talks about his new role and about creating Visual Studio Mobile Center. Tune in to iPhreaks Take Manhattan - Nat Friedman.

The iPhreaks Show
189 iPS iPhreaks Take Manhattan - Nat Friedman

The iPhreaks Show

Play Episode Listen Later Feb 8, 2017 33:27


On episode 189 of iPhreaks, Andrew and Jaim talk to Xamarin Founder Nat Friedman in New York City during Microsoft Connect(). Nat talks about his new role and about creating Visual Studio Mobile Center. Tune in to iPhreaks Take Manhattan - Nat Friedman.

.NET Rocks!
Xamarin Joins Microsoft!

.NET Rocks!

Play Episode Listen Later Mar 31, 2016 50:48


Microsoft buys Xamarin! While at Build, Carl and Richard chatted with Nat Friedman and Miguel de Icaza about what the acquisition of Xamarin means. The big news is that the Xamarin tools for making iOS and Android apps are now part of Visual Studio - all versions, right down to the Community Edition. And there's more (of course), so have a listen. Miguel digs into what this means for the average .NET developer going forward: .NET now runs everywhere you could possibly want to run code, and maybe a few spots you've never thought of. It's true, .NET really does rock! Support this podcast at — https://redcircle.com/net-rocks/donations

Hacker Medley
Pilot Show: The 26c3 and GSM security

Hacker Medley

Play Episode Listen Later Jan 4, 2010


Welcome to Hacker Medley! We decided to try podcasting. In our pilot show, Nat Friedman shares what he learned about mobile phone security at the 26th annual Chaos Communication Congress in Berlin. It’s our first effort, so it’s a little rough. But please let us know what you think so we can decide whether or not [...]

Subjects – Novell Open Audio
Live at BrainShare: Nat Friedman and Tomboy maintainer Boyd Timothy

Subjects – Novell Open Audio

Play Episode Listen Later Mar 28, 2007 0:01


Caitlin, Erin and Ted chat with Boyd Timothy about the Tomboy note-taking application. Pull out the earbuds and adjust the volume accordingly. Time: 50:06 MP3 Size: 17.3 MB Segment Times Tomboy with Boyd Timothy: 1:28 – 18:07 Nat Friedman: 18:28 – 49:01 Links for this Episode: Tomboy project page Boyd Timothy’s blog Post: Road map […]

Subjects – Novell Open Audio
Nat Friedman at LinuxWorld Expo

Subjects – Novell Open Audio

Play Episode Listen Later Aug 15, 2006 0:01


We speak with Nat Friedman at LinuxWorld Expo in front of a live audience.

Subjects – Novell Open Audio
BrainShare 06 BackStage: Wednesday

Subjects – Novell Open Audio

Play Episode Listen Later Mar 22, 2006 0:01


Nat Friedman and Guy Lunardi talk about their sensational demonstration of SUSE Linux Enterprise Desktop. Chris Neal and Kevin Smith discuss migrating from Microsoft Exchange to GroupWise on Linux, as well as using GroupWise on mobiles. Martin Buckley tells us about securing mini-storage devices.

.NET Rocks!
Nat Friedman and Miguel de Icaza Start Xamarin

.NET Rocks!

Play Episode Listen Later Jan 1, 1970 57:07


Carl and Richard talk to Nat Friedman and Miguel de Icaza, the CEO and CTO (respectively) of Xamarin. Xamarin is the company that Nat and Miguel set up to house the Mono Project and the rest of the Mono related products including MonoTouch and Mono for Android after Attachmate acquired Novell. The conversation starts out on mobile development and moves to tablets. Click the link below for a 20% discount on Xamarin tools! Support this podcast at — https://redcircle.com/net-rocks/donations