ThePrimeagen (aka Michael Paulson) is a programmer who has educated, entertained, and inspired millions of people to build software and have fun doing it. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep461-sc See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
ThePrimeagen's X: https://twitter.com/ThePrimeagen
ThePrimeagen's YouTube: https://youtube.com/ThePrimeTimeagen
ThePrimeagen's Twitch: https://twitch.tv/ThePrimeagen
ThePrimeagen's GitHub: https://github.com/theprimeagen
ThePrimeagen's TikTok: https://tiktok.com/@theprimeagen
ThePrimeagen's Coffee: https://www.terminal.shop/

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Invideo AI: AI video generator. Go to https://invideo.io/i/lexpod
Shopify: Sell stuff online. Go to https://shopify.com/lex
NetSuite: Business management software. Go to http://netsuite.com/lex
BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex
AG1: All-in-one daily nutrition drinks. Go to https://drinkag1.com/lex

OUTLINE:
(00:00) - Introduction
(10:27) - Love for programming
(20:00) - Hardest part of programming
(22:16) - Types of programming
(29:54) - Life story
(39:58) - Hardship
(41:29) - High school
(47:15) - Porn addiction
(57:01) - God
(1:12:44) - Perseverance
(1:22:40) - Netflix
(1:35:08) - Groovy
(1:40:13) - Printf() debugging
(1:46:35) - Falcor
(1:56:05) - Breaking production
(1:58:49) - Pieter Levels
(2:03:19) - Netflix, Twitch, and YouTube infrastructure
(2:15:22) - ThePrimeagen origin story
(2:30:37) - Learning programming languages
(2:39:40) - Best programming languages in 2025
(2:44:35) - Python
(2:45:15) - HTML & CSS
(2:46:05) - Bash
(2:46:45) - FFmpeg
(2:53:28) - Performance
(2:56:00) - Rust
(3:00:48) - Epic projects
(3:14:12) - Asserts
(3:23:26) - ADHD
(3:31:34) - Productivity
(3:35:58) - Programming setup
(4:11:28) - Coffee
(4:18:32) - Programming with AI
(5:01:16) - Advice for young programmers
(5:12:48) - Reddit questions
(5:20:20) - God

PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
This week, we discuss essential Linux applications, explore the exciting new features of Linux Kernel 6.13, and discover how to control FFmpeg with natural language commands. Plus, Nvidia's plans for a consumer-friendly ARM chip.
Guest: Kailash Nadh
Panelist: Richard Littauer

Show Notes
In this episode of Sustain, host Richard Littauer sits down with Kailash Nadh, CTO of Zerodha, to delve into the dynamics of funding and sustaining open-source projects. They explore the establishment of Zerodha's FLOSS/Fund, which allocates a million dollars annually to support pivotal open source projects, and discuss the development of the funding.json format to streamline grant applications. The conversation also covers the challenges of creating such funds, including regulatory hurdles, and the aim of making financial assistance globally accessible. From detailing efforts to revive India's open-source communities through the FOSS United Foundation to highlighting the obstacles and innovative models in funding open-source software, the episode provides a comprehensive look at both global and Indian perspectives. Hit download now!

[00:01:14] Richard brings up the FLOSS/Fund, a $1 million annual commitment to open source projects. Kailash confirms that the fund is still active and explains how it recently became more structured, with a small team formed to manage its logistics.
[00:02:48] The FLOSS/Fund was created to publicly commit to supporting open source in a structured way. Kailash points out that while other companies give donations to open source, there are few structured initiatives from large organizations.
[00:04:33] Kailash expresses frustration that few billion-dollar companies have set up similar initiatives to support open source projects.
[00:06:24] Kailash explains that the FLOSS/Fund is open to the global open source community and targets systemically important projects like libraries and widely used software tools.
[00:08:14] Richard inquires about the application process, and Kailash explains that instead of traditional grant forms, projects must create and publish a "funding.json" file.
[00:10:35] Kailash shares that the structured application method is designed to avoid the usual awkwardness of fundraising conversations and streamline the process.
[00:13:31] The two discuss the difficulty maintainers face when articulating the importance of their projects, particularly maintainers who may not have strong written communication skills. Kailash emphasizes that the funding.json method does not replace narrative descriptions but simplifies signaling.
[00:16:17] The conversation switches to global scope and prioritization. Kailash tells us Zerodha's open source contributions are not limited to projects they directly use: the fund is open to all global projects, and Zerodha hopes to support projects that are crucial for open source infrastructure.
[00:17:09] Kailash discusses the complexity of sending money internationally from India.
[00:18:59] We learn the goal is not to make funding.json go viral through financial incentives, but to organically grow adoption if the tool proves valuable.
[00:20:49] Richard and Kailash explore the broader challenges of sustaining open source projects beyond funding, such as building healthy communities and incentivizing the proper use and citation of open source infrastructure.
[00:25:32] Kailash discusses the Indian open source ecosystem.
[00:30:29] Kailash explains how Zerodha's initiatives aim to push the Indian industry to give back more to the open source community. He hopes their efforts will inspire other companies to set up similar initiatives.
[00:32:12] Find out where you can donate to FLOSS/Fund and follow Kailash online.
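For the curious, here is a minimal sketch of what publishing such a manifest could look like, written in Python. The field names follow the rough shape of the funding.json manifest spec (linked below), but treat them as illustrative assumptions; the spec at https://floss.fund/funding-manifest/ is authoritative.

```python
import json

# An illustrative funding.json manifest. Field names approximate the
# floss.fund funding-manifest spec; consult the spec for the real schema.
manifest = {
    "version": "v1",
    "entity": {
        "type": "organisation",
        "role": "owner",
        "name": "Example Project Maintainers",  # placeholder
        "email": "maintainers@example.org",     # placeholder
        "description": "We maintain a widely used open source library.",
    },
    "projects": [
        {
            "name": "example-lib",              # placeholder project
            "description": "A systemically important parsing library.",
            "repositoryUrl": "https://github.com/example/example-lib",
            "licenses": ["spdx:MIT"],
        }
    ],
    "funding": {
        "plans": [
            {
                "name": "Maintainer stipend",
                "amount": 50000,
                "currency": "USD",
                "frequency": "yearly",
            }
        ]
    },
}

# Write the file; the project then publishes it at a URL a funder can crawl.
with open("funding.json", "w") as f:
    json.dump(manifest, f, indent=2)
```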
Spotlight
[00:32:56] Richard's spotlight is his first grade teacher, Mrs. Barril.
[00:33:25] Kailash's spotlight is Jim Martsolf, who introduced him to "webmastering."

Links
SustainOSS (https://sustainoss.org/)
podcast@sustainoss.org (mailto:podcast@sustainoss.org)
richard@sustainoss.org (mailto:richard@sustainoss.org)
SustainOSS Discourse (https://discourse.sustainoss.org/)
SustainOSS Mastodon (https://mastodon.social/tags/sustainoss)
Open Collective-SustainOSS (Contribute) (https://opencollective.com/sustainoss)
Richard Littauer Socials (https://www.burntfen.com/2023-05-30/socials)
Kailash Nadh LinkedIn (https://www.linkedin.com/in/kailashnadh/)
Kailash Nadh Website (https://nadh.in/)
Zerodha (https://zerodha.com/)
funding.json (https://floss.fund/funding-manifest/)
Sustain Podcast-Episode 153: Kailash Nadh and the FOSS United Foundation (https://podcast.sustainoss.org/153)
FLOSS/Fund (https://floss.fund/)
FFmpeg (https://ffmpeg.org/)
Zig (https://ziglang.org/)
Sustain Podcast-Episode 247: Chad Whitacre on the Open Source Pledge (https://podcast.sustainoss.org/247)
Open Source Pledge (https://opensourcepledge.com/)
Announcing FLOSS/fund: $1M per year for free and open source projects, a post by Kailash Nadh (https://floss.fund/blog/announcing-floss-fund/)

Credits
Produced by Richard Littauer (https://www.burntfen.com/)
Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/)
Show notes by DeAnn Bahr, Peachtree Sound (https://www.peachtreesound.com/)
Special Guest: Kailash Nadh.
Choose a VPS with the little monkey — https://fotbo.com.ua/?utm_source=bloggers&utm_medium=yt&utm_campaign=dou_2024
We are recording our next big recap episode and taking questions! Submit questions and messages on Speakpipe here for a chance to appear on the show! Also subscribe to our calendar for our Singapore, NeurIPS, and all upcoming meetups!

In our first ever episode with Logan Kilpatrick we called out the two hottest LLM frameworks at the time: LangChain and Dust. We've had Harrison from LangChain on twice (as a guest and as a co-host), and we've now finally come full circle as Stanislas from Dust joined us in the studio.

After stints at Oracle and Stripe, Stan joined OpenAI to work on mathematical reasoning capabilities. He describes his time at OpenAI as "the PhD I always wanted to do" while acknowledging the challenges of research work: "You're digging into a field all day long for weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13 seconds, you're like, 'oh, yeah, that was obvious.' And you go back to digging." This experience, combined with early access to GPT-4's capabilities, shaped his decision to start Dust: "If we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down."

The History of Dust

Dust's journey can be broken down into three phases:

* Developer Framework (2022): Initially positioned as a competitor to LangChain, Dust started as a developer tooling platform. While both were open source, their approaches differed – LangChain focused on broad community adoption and integration as a pure developer experience, while Dust emphasized UI-driven development and observability that went beyond `print` statements.

* Browser Extension (Early 2023): The company pivoted to building XP1, a browser extension that could interact with web content. This experiment helped validate user interaction patterns with AI, even while using less capable models than GPT-4.

* Enterprise Platform (Current): Today, Dust has evolved into an infrastructure platform for deploying AI agents within companies, with impressive metrics like 88% daily active users in some deployments.

The Case for Being Horizontal

The big discussion for early stage companies today is whether to be horizontal or vertical. Since models are so good at general tasks, a lot of companies are building vertical products that take care of a workflow end-to-end in order to offer more value and become more of "Services as Software". Dust, on the other hand, is a platform for users to build their own experiences, which has had a few advantages:

* Maximum Penetration: Dust reports 60-70% weekly active users across entire companies, demonstrating the potential reach of horizontal solutions rather than selling into a single team.

* Emergent Use Cases: By allowing non-technical users to create agents, Dust enables use cases to emerge organically from actual business needs rather than prescribed solutions.

* Infrastructure Value: The platform approach creates lasting value through maintained integrations and connections, similar to how Stripe's value lies in maintaining payment infrastructure. Rather than relying on third-party integration providers, Dust maintains its own connections to ensure proper handling of different data types and structures.

The Vertical Challenge

However, this approach comes with trade-offs:

* Harder Go-to-Market: As Stan put it: "We spike at penetration... but it makes our go-to-market much harder.
Vertical solutions have a go-to-market that is much easier because they're like, 'oh, I'm going to solve the lawyer stuff.'"

* Complex Infrastructure: Building a horizontal platform requires maintaining numerous integrations and handling diverse data types appropriately – from structured Salesforce data to unstructured Notion pages. As you scale integrations, the cost of maintaining them also scales.

* Product Surface Complexity: Creating an interface that's both powerful and accessible to non-technical users requires careful design decisions, down to avoiding technical terms like "system prompt" in favor of "instructions."

The Future of AI Platforms

Stan initially predicted we'd see the first billion-dollar single-person company in 2023 (a prediction later echoed by Sam Altman), but he's now more focused on a different milestone: billion-dollar companies with engineering teams of just 20 people, enabled by AI assistance.

This vision aligns with Dust's horizontal platform approach – building the infrastructure that allows small teams to achieve outsized impact through AI augmentation. Rather than replacing entire job functions (the vertical approach), they're betting on augmenting existing workflows across organizations.

Full YouTube Episode

Chapters

* 00:00:00 Introductions
* 00:04:33 Joining OpenAI from Paris
* 00:09:54 Research evolution and compute allocation at OpenAI
* 00:13:12 Working with Ilya Sutskever and OpenAI's vision
* 00:15:51 Leaving OpenAI to start Dust
* 00:18:15 Early focus on browser extension and WebGPT-like functionality
* 00:20:20 Dust as the infrastructure for agents
* 00:24:03 Challenges of building with early AI models
* 00:28:17 LLMs and Workflow Automation
* 00:35:28 Building dependency graphs of agents
* 00:37:34 Simulating API endpoints
* 00:40:41 State of AI models
* 00:43:19 Running evals
* 00:46:36 Challenges in building AI agents infra
* 00:49:21 Buy vs. build decisions for infrastructure components
* 00:51:02 Future of SaaS and AI's Impact on Software
* 00:53:07 The single employee $1B company race
* 00:56:32 Horizontal vs. vertical approaches to AI agents

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.

Swyx [00:00:11]: Hey, and today we're in a studio with Stanislas, welcome.

Stan [00:00:14]: Thank you very much for having me.

Swyx [00:00:16]: Visiting from Paris.

Stan [00:00:17]: Paris.

Swyx [00:00:18]: And you have had a very distinguished career. It's very hard to summarize, but you went to college at both École Polytechnique and Stanford, and then you worked in a number of places, Oracle, Totems, Stripe, and then OpenAI pre-ChatGPT. We'll spend a little bit of time on that. About two years ago, you left OpenAI to start Dust. I think you were one of the first OpenAI alum founders.

Stan [00:00:40]: Yeah, I think it was about at the same time as the Adept guys, so that first wave.

Swyx [00:00:46]: Yeah, and people really loved our David episode. We love a few sort of OpenAI stories, you know, from back in the day, like we were talking about pre-recording. Probably the statute of limitations on some of those stories has expired, so you can talk a little bit more freely without them coming after you. But maybe we'll just talk about, like, what was your journey into AI? You know, you were at Stripe for almost five years; there are a lot of Stripe alums going into OpenAI.
I think the Stripe culture has come into OpenAI quite a bit.

Stan [00:01:11]: Yeah, so I think the buses of Stripe people really started flowing in, I guess, after ChatGPT. But, yeah, my journey into AI is a... I mean, Greg Brockman. Yeah, yeah. From Greg, of course. And Daniela, actually, back in the days, Daniela Amodei.

Swyx [00:01:27]: Yes, she was COO, I mean, she is COO, yeah. She had a pretty high job at OpenAI at the time, yeah, for sure.

Stan [00:01:34]: My journey started like anybody else's: you're fascinated with computer science and you want to make computers think. It's awesome, but it doesn't work. I mean, it was a long time ago; I was maybe 16, so it was 25 years ago. Then the first big exposure to AI would be at Stanford, and I'm going to, like, disclose how old I am, because at the time it was a class taught by Andrew Ng, and there was no deep learning. It was hand-crafted features for vision and the A* algorithm. So it was fun. But those were the early days of deep learning. A few years after, I think, there was that first project at Google, you know, the cat face or the human face trained from many images. I hesitated doing a PhD, more in systems, and eventually decided to get a job. Went to Oracle, started a company, did a gazillion mistakes, got acquired by Stripe, worked with Greg Brockman there. And at the end of Stripe, I started interesting myself in AI again; it felt like it was the time. You had the Atari games, you had the self-driving craziness at the time. And I started exploring projects. The Atari games were incredible, but they were still games, and I was looking into exploring projects that would have an impact on the world. And so I decided to explore three things: self-driving cars, cybersecurity and AI, and math and AI. I'm listing them by decreasing order of impact on the world, I guess.

Swyx [00:03:01]: Discovering new math would be very foundational.

Stan [00:03:03]: It is extremely foundational, but it's not as direct as driving people around.

Swyx [00:03:07]: Sorry, you were doing this at Stripe, you were like thinking about your next move.

Stan [00:03:09]: No, it was at Stripe, kind of a bit of time where I started exploring. I did a bunch of work with friends on trying to get RC cars to drive autonomously. Almost started a company in France or Europe about self-driving trucks. We decided to not go for it because it was probably very operational, and I think the idea of the company, of the team, wasn't there. And also I realized that if I wake up one day and, because of a bug I wrote, I killed a family, it would be a bad experience. And so I just decided like, no, that's just too crazy. And then I explored cybersecurity with a friend. We were trying to apply transformers to fuzzing. So with fuzzing, you have an algorithm that goes really fast and tries to mutate the inputs of a library to find bugs. And we tried to apply a transformer to that and do reinforcement learning with the signal of how much you propagate within the binary. Didn't work at all, because the transformers are so slow compared to evolutionary algorithms that it just didn't work. Then I got interested in math and AI and started working on SAT solving with AI. And at the same time, OpenAI was starting the reasoning team that was tackling that project as well. I was in touch with Greg and eventually got in touch with Ilya and finally found my way to OpenAI. I don't know how much you want to dig into that.
The way to find your way to OpenAI when you're in Paris was kind of an interesting adventure as well.

Swyx [00:04:33]: Please. And I want to note, this was a two-month journey. You did all this in two months.

Stan [00:04:38]: The search.

Swyx [00:04:40]: Your search for your next thing, because you left in July 2019 and then you joined OpenAI in September.

Stan [00:04:45]: I'm going to be ashamed to say that.

Swyx [00:04:47]: You were searching before.

Stan [00:04:49]: I was searching before. I mean, it's normal. No, the truth is that I moved back to Paris through Stripe and I just felt the hardship of being remote from your team, nine hours away. And so it kind of freed a bit of time for me to start the exploration before. Sorry, Patrick. Sorry, John.

Swyx [00:05:05]: Hopefully they're listening. So you joined OpenAI from Paris, and obviously you had worked with Greg, but not anyone else.

Stan [00:05:13]: No. Yeah. So I had worked with Greg, but not Ilya. But I had started chatting with Ilya, and Ilya was kind of excited because he knew that I was a good engineer, through Greg, I presume, but I was not a trained researcher: didn't do a PhD, never did research. And I started chatting and he was excited all the way to the point where he was like, hey, come pass interviews, it's going to be fun. I think he didn't care where I was; he just wanted to try working together. So I go to SF, go through the interview process, get an offer. And so I get Bob McGrew on the phone for the first time, and he's like, hey, Stan, it's awesome, you've got an offer. When are you coming to SF? I'm like, hey, it's awesome, I'm not coming to SF. I'm based in Paris and we just moved. He was like, hey, it's awesome. Well, you don't have an offer anymore. Oh, my God. No, it wasn't as hard as that, but that's basically the idea. And it took a bit more time of chatting, and they eventually decided to try a contractor setup. And that's how I started working at OpenAI, officially as a contractor, but in practice it really felt like being an employee.

Swyx [00:06:14]: What did you work on?

Stan [00:06:15]: It was solely focused on math and AI, and in particular on the applied side: the study of large language models' mathematical reasoning capabilities, in particular in the context of formal mathematics. The motivation was simple: transformers are very creative, but they make mistakes. Formal math systems have the ability to verify a proof, but the tactics they can use to solve problems are very mechanical, so you miss the creativity. And so the idea was to try to explore both together. You would get the creativity of the LLMs and the verification capabilities of the formal system. A formal system, just to give a little bit of context, is a system in which a proof is a program and the formal system is a type system, a type system that is so evolved that you can verify the program. If the type checks, it means that the program is correct.
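To make that proof-as-program idea concrete, here is a tiny illustrative sketch in Lean 4 (the theorem names are arbitrary): the proposition is a type, a proof is a term of that type, and verifying the proof is just type checking the term.

```lean
-- A proof is a program: the proposition `a + b = b + a` is a type, and
-- the term `Nat.add_comm a b` is a proof. Checking it is type checking.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Tactics (like `simp`) may run search or computation to *construct* the
-- proof term, but verifying the resulting term stays cheap.
theorem add_zero_example (n : Nat) : n + 0 = n := by
  simp
```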
Swyx [00:07:06]: Is the verification much faster than actually executing the program?

Stan [00:07:12]: Verification is instantaneous, basically. The truth is that what you code involves tactics that may involve computation to search for solutions. So it's not instantaneous: you do have to do the computation to expand the tactics into the actual proof. But the verification of the proof, at the very low level, is instantaneous.

Swyx [00:07:32]: How quickly do you run into, you know, halting-problem, P-vs-NP-type things, impossibilities where you're just stuck?

Stan [00:07:39]: I mean, you don't run into it. At the time, it was really trying to solve very easy problems. So I think the... Can you give an example of easy? Yeah, so that's the MATH benchmark that everybody knows today. The Dan Hendrycks one. The Dan Hendrycks one, yeah. And I think it was the low-end part of the MATH benchmark at the time, because that benchmark includes AMC problems, AMC 8, AMC 10, AMC 12, so these are the easy ones. Then AIME problems, somewhat harder, and some IMO problems, which are crazy hard.

Swyx [00:08:07]: For our listeners, we covered this in our Benchmarks 101 episode. AMC is literally the grades of high school: grade 8, grade 10, grade 12. So you can solve this. Just briefly to mention this, because I don't think we'll touch on it again: there's been a bit of work with Lean, and then, you know, more recently with DeepMind scoring silver on the IMO. Any commentary on how math has evolved from your early work to today?

Stan [00:08:34]: I mean, that result is mind-blowing. From my perspective, I spent three years on that. At the same time, Guillaume Lample and I were both in Paris, actually; he was at FAIR, working on some problems. We were pushing the boundaries, and the goal was the IMO. And we cracked a few problems here and there, but the idea of getting a medal at an IMO was just remote. So this is an impressive result. And I think the DeepMind team just did a good job of scaling. I think there's nothing too magical in their approach, even if it hasn't been published. There's a David Silver talk from seven days ago that goes a little bit more into the details. It feels like there's nothing magical there: it's really applying reinforcement learning and scaling up the amount of data that can be generated through autoformalization. We can dig into what autoformalization means if you want.

Alessio [00:09:26]: Let's talk about the tail end, maybe, of OpenAI. So you joined, and you're like, I'm going to work on math and do all of these things. I saw in one of your blog posts, you mentioned you fine-tuned over 10,000 models at OpenAI using 10 million A100 hours. How did the research evolve from GPT-2 to getting closer to davinci-003? And then you left just before ChatGPT was released, but tell people a bit more about the research path that took you there.

Stan [00:09:54]: I can give you my perspective on it. I think at OpenAI, there has always been a large chunk of the compute that was reserved to train the GPTs, which makes sense. This was pre-Anthropic split. Most of the compute was going to a product called Nest, which was basically GPT-3. And then you had a bunch of, let's say, remote, non-core research teams that were trying to explore more specific problems, or maybe the algorithmic part of it. The interesting part, and I don't know if it was where your question was going, is that in those labs, you're managing researchers. So by definition, you shouldn't be managing them. But in that space, there's a managing tool that is great, which is compute allocation. Basically, by managing the compute allocation, you can signal to the teams where you think the priority should go. And so it was really a question of: you were free as a researcher to work on whatever you wanted.
But if it was not aligned with the OpenAI mission, and that's fair, you wouldn't get the compute allocation. As it happens, solving math was very much aligned with the direction of OpenAI, and so I was lucky to generally get the compute I needed to make good progress.

Swyx [00:11:06]: What do you need to show as incremental results to get funded for further results?

Stan [00:11:12]: It's an imperfect process, because there's a bit of a... If you're working on math and AI, obviously there's kind of a prior that it's going to be aligned with the company, so it's much easier than going into something much riskier. You have to show incremental progress, I guess. You ask for a certain amount of compute, you deliver a few weeks after, and you demonstrate progress. Progress might be a positive result. Progress might be a strong negative result, and a strong negative result is actually often much harder to get, and much more interesting, than a positive result. And then, as in any organization, you would have people finding your project, or any other project, cool and fancy. And so you would have that kind of phase of growing compute allocation, all the way to a point where maybe you reach an apex, and then maybe you go back mostly to zero and restart the process, because you're going in a different direction or something else. That's how I felt. Explore, exploit. Yeah, exactly. Exactly. It's a reinforcement learning approach.

Swyx [00:12:14]: Classic PhD student search process.

Alessio [00:12:17]: And you were reporting to Ilya? Like, the results you were kind of bringing back to him, or what was the structure? It's almost like, when you're doing such cutting-edge research, you need to report to somebody who is actually smart enough to understand that the direction is right.

Stan [00:12:29]: So we had a reasoning team, which was working on reasoning, obviously, and so math in general. And that team had a manager, but Ilya was extremely involved in the team as an advisor, I guess. Since he brought me into OpenAI, I was lucky, mostly during the first years, to have kind of direct access to him. He would really coach me as a trainee researcher, I guess, with good engineering skills. And Ilya, I think at OpenAI, he was the one showing the North Star, right? That was his job, and I think he really enjoyed it and did it super well: going through the teams and saying, this is where we should be going, and trying to, you know, flock the different teams together towards an objective.

Swyx [00:13:12]: I would say the public perception of him is that he was the strongest believer in scaling. Oh, yeah. Obviously, he has always pursued the compression thesis. You have worked with him personally. What does the public not know about how he works?

Stan [00:13:26]: I think he's really focused on building the vision and communicating the vision within the company, which was extremely useful. I was personally surprised that he spent so much time, you know, working on communicating that vision and getting the teams to work together versus...

Swyx [00:13:40]: To be specific, vision is AGI? Oh, yeah.

Stan [00:13:42]: Vision is, like, yeah, the belief in compression and scaling compute. I remember when I started working on the reasoning team, the excitement was really about scaling the compute around reasoning, and that was really the belief we wanted to ingrain in the team.
And that's what has been useful to the team; the DeepMind results and the success of GPT-4 show that it was the right approach.

Swyx [00:14:06]: Was it according to the neural scaling laws, the Kaplan paper that was published?

Stan [00:14:12]: I think it was before that, because those came with GPT-3, basically at the time of GPT-3 being released, or being ready internally. But before that, there really was a strong belief in scale. I think it was just the belief that the transformer was a generic enough architecture that you could learn anything, and that it was just a question of scaling.

Alessio [00:14:33]: Any other fun stories you want to tell? Sam Altman, Greg, you know, anything.

Stan [00:14:37]: Weirdly, I didn't work that much with Greg when I was at OpenAI. He had always been mostly focused on training the GPTs, and rightfully so. One thing about Sam Altman: he really impressed me, because when I joined, he had joined not that long before, and it felt like he was kind of a very high-level CEO. And I was mind-blown by how deep he was able to go into the subjects within a year or something, all the way to a situation where, by year two, when I was having lunch with him at OpenAI, he would just know quite deeply what I was doing. With no ML background. Yeah, with no ML background, but I didn't have any either, so I guess that explains why. But I think it's a question of: you don't necessarily need to understand the very technicalities of how things are done, but you need to understand what the goal is, what's being done, what the recent results are, and all of that. And we could have kind of a very productive discussion. And that really impressed me, given the size of OpenAI at the time, which was not negligible.

Swyx [00:15:44]: Yeah. I mean, you were a founder before, you're a founder now, and you've seen Sam as a founder. How has he affected you as a founder?

Stan [00:15:51]: I think having that capability of changing the scale of your attention in the company, because most of the time you operate at a very high level, but being able to go deep down and be in the know of what's happening on the ground, is something that I feel is really enlightening. That's not a place in which I ever was as a founder, because at my first company, we went all the way to 10 people. At the current company, there's 25 of us. So the high level, the sky and the ground, are pretty much at the same place. No, you're being too humble.

Swyx [00:16:21]: I mean, Stripe was also like a huge rocket ship.

Stan [00:16:23]: At Stripe, I wasn't a founder. So, like at OpenAI, I was really happy being on the ground, pushing the machine, making it work. Yeah.

Swyx [00:16:31]: Last OpenAI question. The Anthropic split you mentioned, you were around for that. Very dramatic. David also left around that time; you left. This year, we've also had a similar management shakeup, let's just call it. Can you compare what it was like going through that split at the time? And does that have any similarities now? Are we going to see a new Anthropic emerge from these folks that just left?

Stan [00:16:54]: That I really, really don't know. At the time, the split was pretty surprising, because they had been training GPT-3 and it was a success. And to be completely transparent, I wasn't in the weeds of the split. What I understood of it is that there was a disagreement over the commercialization of that technology.
I think the focal point of that disagreement was the fact that we started working on the API and wanted to make those models available through an API. Is that really the core disagreement? I don't know.

Swyx [00:17:25]: Was it safety?

Stan [00:17:26]: Was it commercialization?

Swyx [00:17:27]: Or did they just want to start a company?

Stan [00:17:28]: Exactly. Exactly. That I don't know. But I think what I was surprised by is how quickly OpenAI recovered at the time. And I think it's just because we were mostly a research org, and the mission was so clear that even with some divergence in some teams, some people leaving, the mission is still there. We have the compute. We have a site. So it just keeps going.

Swyx [00:17:50]: Very deep bench. Like, just a lot of talent. Yeah.

Alessio [00:17:53]: So that was the OpenAI part of the history. Exactly. So then you leave OpenAI in September 2022. And I would say in Silicon Valley, the two hottest companies at the time were you and LangChain. What was that start like, and why did you decide to start with a more developer-focused, AI-engineer kind of tool rather than going back into research or something else?

Stan [00:18:15]: Yeah. First, I'm not a trained researcher, so going through OpenAI was really kind of the PhD I always wanted to do. But research is hard. You're digging into a field all day long for weeks and weeks and weeks, and you find something, you get super excited for 12 seconds. And at the 13th second, you're like, oh, yeah, that was obvious. And you go back to digging. I'm not a formally trained researcher, and it wasn't necessarily an ambition of mine to have a research career. And I felt the hardness of it. I enjoyed it a ton, but at the time, I decided that I wanted to go back to something more productive. And the other fun motivation was, I mean, if we believe in AGI and if we believe the timelines might not be too long, it's actually the last train leaving the station to start a company. After that, it's going to be computers all the way down. And so that was kind of the true motivation for trying to go there. So that's the core personal motivation. And the motivation for starting a company was pretty simple. I had seen GPT-4 internally. At the time, it was September 2022, so it was pre-ChatGPT, but GPT-4 had been ready for a few months internally. I was like, okay, it's obvious the capabilities are there to create an insane amount of value for the world, and yet the deployment is not there yet. The revenue of OpenAI at the time was ridiculously small compared to what it is today. So the thesis was: there's probably a lot to be done at the product level to unlock the usage.

Alessio [00:19:49]: Yeah. Let's talk a bit more about the form factor, maybe. I think one of the first successes you had was kind of the WebGPT-like thing: using the models to traverse the web and summarize things. And the browser was really the interface. Why did you start with the browser? Why was it important? And then you built XP1, which was kind of the browser extension.

Stan [00:20:09]: So the starting point at the time was, if you wanted to talk about LLMs, it was still a rather small community, a community of mostly researchers and, to some extent, very early adopters, very early engineers.
It was almost inconceivable to just build a product and go sell it to the enterprise, though at the time there were a few companies doing that. The one on marketing, I don't remember its name... Jasper. But so the natural first intention, the first, first, first intention, was to go to the developers and try to create tooling for them to create products on top of those models. And so that's what Dust was originally. It was quite different from LangChain, and LangChain just beat the s**t out of us, which is great. It's a choice.

Swyx [00:20:53]: You were closed source. They were open source.

Stan [00:20:56]: Yeah. So technically we were open source and we still are open source, but I think that doesn't really matter. I had the strong belief, from my research time, that you cannot create an LLM-based workflow on just one example. Basically, if you just have one example, you overfit. So as you develop your interaction, your orchestration around the LLM, you need a dozen examples. Obviously, if you're running a dozen examples on a multi-step workflow, you start parallelizing stuff. And if you do that in the console, you just have a messy stream of tokens going out, and it's very hard to observe what's going on there. And so the idea was to go with a UI so that you could easily introspect the output of each interaction with the model and dig in there through a UI, which is...

Swyx [00:21:42]: Was that open source? I actually didn't come across it.

Stan [00:21:44]: Oh yeah, it was. I mean, Dust is entirely open source, even today. We're not going for an open source...

Swyx [00:21:48]: If it matters, I didn't know that.

Stan [00:21:49]: No, no, no, no, no. The reason why is that we're not open source because of an open source strategy. It's not an open source go-to-market at all. We're open source because we can, and it's fun.

Swyx [00:21:59]: Open source is marketing. You have all the downsides of open source, which is, like, people can clone you.

Stan [00:22:03]: But I think that downside is a big fallacy. Okay, yes, anybody can clone Dust today, but the value of Dust is not the current state. The value of Dust is the number of eyeballs and hands of developers that are creating with it in the future. And so yes, anybody can clone it today, but that wouldn't change anything. There is some value in being open source. In a discussion with the security team, you can be extremely transparent and just show the code. When you have a discussion with users and there's a bug or a feature missing, you can just point to the issue, show the pull request... exactly, oh, PR welcome. That doesn't happen that much, but you can show the progress, and if the person you're chatting with is a little bit technical, they really enjoy seeing the pull request advancing and seeing it all the way to deploy. And then the downsides are mostly around security. You never want to do security by obfuscation. The truth is that your vector of attack is facilitated by you being open source. But at the same time, it's a good thing, because if you're doing anything like a bug bounty or stuff like that, you just give much more tools to the bug bounty hunters, so their output is much better. So there's many, many, many trade-offs. I don't believe in the value of the code base per se. I think it's really the people that are on the code base that have the value, and the go-to-market and the product and all of those things that are around the code base. Obviously, that's not true for every code base.
If you're working on a very secret kernel to accelerate the inference of LLMs, I would buy that you don't want to be open source. But for product stuff, I really think there's very little risk. Yeah.

Alessio [00:23:39]: I signed up for XP1; I was looking, January 2023. I think at the time you were on davinci-003. Given that you had seen GPT-4, how did you feel having to push out a product that was using this model that was so inferior? And you're like, please, just use it today, I promise it's going to get better. Just overall, as a founder, how do you build something that maybe doesn't quite work with the model today, but where you're just expecting the new model to be better?

Stan [00:24:03]: Yeah, so actually, XP1 was even on a smaller one; it was the post-ChatGPT-release small version, so it was... Ada, Babbage... No, no, no, not that far away. But it was the small version of GPT, basically. I don't remember its name. Yes, you have a frustration there. But at the same time, XP1 was an experiment, but it was designed to be useful at the current capability of the model. If you just want to extract data from a LinkedIn page, that model was just fine. If you want to summarize an article from a newspaper, that model was just fine. And so it was really a question of trying to find a product that works with the current capability, knowing that you will always have tailwinds as models get better and faster and cheaper. So that was kind of a... There's a bit of frustration, because you know what's out there and you know that you don't have access to it yet. But it's also interesting to try to find a product that works with the current capability.

Alessio [00:24:55]: And we highlighted XP1 in our Anatomy of Autonomy post in April of last year, which was, you know, where are all the agents, right? So now we spent 30 minutes getting to what you're building now. You basically had a developer framework, then you had a browser extension, then you had all these things, and then you kind of got to where Dust is today. So maybe just give people an overview of what Dust is today and the core thesis behind it. Yeah, of course.

Stan [00:25:20]: So with Dust, we really want to build the infrastructure so that companies can deploy agents within their teams. We are horizontal by nature, because we strongly believe in the emergence of use cases from the people who have access to creating agents, and they don't need to be developers. They have to be thinkers. They have to be curious. But anybody can create an agent that will solve an operational thing that they're doing in their day-to-day job. And to make those agents useful, there are two focuses, which is interesting. The first one is an infrastructure focus. You have to build the pipes so that the agent has access to the data. You have to build the pipes such that the agents can take action, can access the web, et cetera. So that's really an infrastructure play. Maintaining connections to Notion, Slack, GitHub, all of them, is a lot of work. It is boring work, boring infrastructure work, but that's something that we know is extremely valuable, in the same way that Stripe is extremely valuable because it maintains the pipes. And we have that dual focus because we're also building the product for people to use it. And there it's fascinating, because everything started from the conversational interface, obviously, which is a great starting point. But we're only scratching the surface, right? I think we are at the Pong level of LLM productization.
And we haven't invented the C3. We haven't invented Counter-Strike. We haven't invented Cyberpunk 2077. So this is really our mission: to create the product that lets people equip themselves to hand off all the work that can be automated or assisted by LLMs.

Alessio [00:26:57]: And can you just comment on different takes that people had? So maybe the most open is auto-GPT: it's just kind of trying to do anything, it's all magic, and there's no way for you to do anything. Then you had Adept (you know, we had David on the podcast), who are super hands-on with each individual customer to build something super tailored. How do you decide where to draw the line between "this is magic" and "this is exposed to you"? Especially in a market where most people don't know how to build with AI at all, so if you expect them to do the thing, they're probably not going to do it. Yeah, exactly.

Stan [00:27:29]: So the auto-GPT approach obviously is extremely exciting, but we know that the agentic capabilities of models are not quite there yet. It just gets lost. So we're starting where it works. Same with XP1. And where it works is pretty simple: simple workflows that involve a couple of tools, where you don't even need to have the model decide which tools to use, in the sense that you just want people to put it in the instructions. It's like: take that page, do that search, pick up that document, do the work that I want in the format I want, and give me the results. There's no smartness there, right, in terms of orchestrating the tools. It's mostly using English for people to program a workflow, where you don't have the constraint of having compatible APIs between the two.

Swyx [00:28:17]: That kind of personal automation, would you say it's kind of like an LLM Zapier type of thing? Like if this, then that, and then, you know, do this, then this. You're programming with English?

Stan [00:28:28]: So you're programming with English. You're just saying, oh, do this and then that. You can even create some form of API: you say, when I give you the command X, do this; when I give you the command Y, do that. And you describe the workflow. But you don't have to create boxes and build the workflow explicitly. You just need to describe what the tasks are supposed to be and make the tools available to the agent. The tool can be a semantic search. The tool can be querying a structured database. The tool can be searching the web. And obviously, the interesting tools that we're only starting to scratch are actually creating external actions, like reimbursing something on Stripe, sending an email, clicking on a button in an admin, or something like that.

Swyx [00:29:11]: Do you maintain all these integrations?

Stan [00:29:13]: Today, we maintain most of the integrations. We do always have an escape hatch for people to custom-integrate. But the reality of the market today is that people just want it to work, right? And so it's mostly us maintaining the integrations. As an example, a very good source of information that is tricky to productize is Salesforce, because Salesforce is basically a database and a UI, and they do the f**k they want with it. Every company has different models and stuff like that. So right now, we don't support it natively. And the type of support, or real native support, will be slightly more complex than just OAuth-ing into it, like is the case with Slack as an example.
Because it's probably going to be: oh, you want to connect your Salesforce to us? Give us the SOQL, that's Salesforce's query language. Give us the queries you want us to run on it and inject into the context of Dust. So that's interesting: not all integrations are alike, and some of them require a bit of work from the user. And for some of them that are really valuable to our users but that we don't support yet, they can just build them internally and push the data to us.

Swyx [00:30:18]: I think I understand the Salesforce thing. But let me just clarify: are you using browser automation because there's no API for something?

Stan [00:30:24]: No, no, no, no. We do have browser automation for all the use cases that apply to the public web. But for most of the integrations with the internal systems of the company, it really runs through APIs.

Swyx [00:30:35]: Haven't you felt the pull to RPA, browser automation, that kind of stuff?

Stan [00:30:39]: I mean, what I've been saying for a long time, maybe I'm wrong, is that if the future is that you're going to stand in front of a computer looking at an agent clicking on stuff, then I'll hate my computer. And my computer is a big Lenovo. It's black. Doesn't sound good at all compared to a Mac. And if the APIs are there, we should use them. There is going to be a long tail of stuff that doesn't have APIs, but as the world moves forward, that's disappearing. So the core RPA value in the past has really been: oh, this old 90s product doesn't have an API, so I need to use the UI to automate. I think for most of the companies that fit our ICP, the scale-ups that are between 500 and 5,000 people, tech companies, most of the SaaS they use have APIs. Now there's an interesting question for the open web, because there is stuff that you want to do that involves websites that don't necessarily have APIs. And the current state of web integration, which is us and OpenAI and Anthropic (I don't even know if they have web navigation, but I don't think so), the current state of affairs is really, really broken. Because you have what? You have basically search and headless browsing. But with headless browsing, I think everybody's doing basically body.innerText and filling that into the model, right?

Swyx [00:31:56]: There's parsers into Markdown and stuff.

Stan [00:31:58]: I'm super excited by the companies that are exploring the capability of rendering a web page in a way that is compatible with a model: being able to maintain the selector (so that's basically the place where to click in the page), through that process exposing the actions to the model, having the model select an action in a way that is compatible with the model (which is not a big page of full DOM that is very noisy), and then being able to decompress that back to the original page and take the action. That's something that is really exciting and that will kind of change the level of things that agents can do on the web. That, I feel, is exciting. But I also feel that the bulk of the useful stuff that you can do within the company can be done through APIs. The data can be retrieved by API. The actions can be taken through API.

Swyx [00:32:44]: For listeners, I'll note that you're basically completely disagreeing with David Luan. Exactly, exactly. We've seen it since this summer: Adept is where it is, and Dust is where it is. So Dust is still standing.
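To sketch the Salesforce pattern Stan describes above (the user supplies SOQL queries; the platform runs them and injects the rows into the agent's context), something like the following Python could work. The credentials, the query, and the choice of the `simple_salesforce` client are illustrative assumptions on our part, not Dust's actual implementation.

```python
from simple_salesforce import Salesforce  # pip install simple-salesforce

# Hypothetical credentials; in practice these would come from a secrets
# store, not be hard-coded.
sf = Salesforce(
    username="user@example.com",
    password="********",
    security_token="********",
)

# A user-supplied SOQL query (illustrative, not from the episode).
soql = "SELECT Name, StageName, Amount FROM Opportunity WHERE IsClosed = false"
result = sf.query(soql)

# Flatten the rows into plain text and inject them into the agent's context.
rows = "\n".join(
    f"{r['Name']} | {r['StageName']} | {r['Amount']}"
    for r in result["records"]
)
agent_context = f"Open opportunities from Salesforce:\n{rows}"
print(agent_context)
```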
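And the "body.innerText" style of headless browsing Stan mentions amounts to roughly this sketch: fetch the page, strip it to visible text, and stuff that into the prompt. The URL and the truncation budget are placeholders; real products use proper headless browsers and Markdown parsers.

```python
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Fetch a page and reduce it to its visible text, a rough server-side
# equivalent of document.body.innerText in a browser.
url = "https://example.com"  # placeholder URL
html = requests.get(url, timeout=10).text
text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

# Truncate and hand the text to the model as context; 8000 characters is
# an arbitrary budget standing in for a real token limit.
prompt = f"Summarize the following page:\n\n{text[:8000]}"
print(prompt)
```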
Alessio [00:32:55]: Can we just quickly comment on function calling? You mentioned you don't need the models to be that smart to actually pick the tools. Have you seen the models not be good enough? Or is it just that you don't want to put the complexity in there? Is there any room for improvement left in function calling? Or do you feel you consistently get the right response, the right parameters, and all of that?

Stan [00:33:15]: So that's a tricky product question. Because if the instructions are good and precise, then you don't have any issue: it's scripted for you, and the model will just look at the script and follow it and say, oh, he's probably talking about that action, and I'm going to use it. And the parameters are kind of inferred from the state of the conversation, and it just goes with it. If you provide a very high-level, kind of auto-GPT-esque level of instructions and provide 16 different tools to your model, yes, we're seeing the models in that state making mistakes. And there is obviously some progress to be made on the capabilities. But the interesting part is that there is already so much work that can be assisted, augmented, accelerated by just going with pretty simple, scripted agents. What I'm excited about in pushing our users to create rather simple agents is that once you have those working really well, you can create meta-agents that use the agents as actions. And all of a sudden, you can have a hierarchy of responsibility that will probably get you almost to the point of the auto-GPT value. It requires the construction of intermediary artifacts, but you're probably going to be able to achieve something great. I'll give you an example. Our incidents are shared in Slack in a specific channel, and what we ship is shared in Slack as well. We have a weekly meeting where we have a table about incidents and shipped stuff. We're not writing that weekly meeting table anymore. We have an assistant that just goes and finds the right data on Slack and creates the table for us. And that assistant works perfectly. It's trivially simple, right? Take one week of data from that channel and just create the table. And then in that weekly meeting we obviously have some graphs and reporting about our financials, our progress, our ARR, and we've created assistants to generate those graphs directly. And those assistants work great. By creating those assistants that cover those small parts of that weekly meeting, slowly we're getting to a world where we'll have a weekly meeting assistant. We'll just call it; you don't need to prompt it, you don't need to say anything. It's going to run those different assistants and get that Notion page ready. And if you get there, and that's an objective for us, using Dust ourselves, to get there, you're saving an hour of company time every time you run it. Yeah.
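The "scripted" pattern Stan describes here, precise instructions plus a small set of narrowly scoped tools the model merely fills parameters for, is plain function calling. A minimal sketch against the OpenAI chat completions API follows; the `search_slack` tool and the incident-table instructions are hypothetical, echoing the example above, not Dust's actual agent definition.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One narrowly scoped tool; the model only has to fill in its parameters.
tools = [{
    "type": "function",
    "function": {
        "name": "search_slack",  # hypothetical tool name
        "description": "Fetch messages from a Slack channel over the last N days.",
        "parameters": {
            "type": "object",
            "properties": {
                "channel": {"type": "string"},
                "days": {"type": "integer"},
            },
            "required": ["channel", "days"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # Precise, scripted instructions: which tool, which channel, what output.
        {"role": "system", "content": "When asked for the weekly incident table, "
         "call search_slack on #incidents for the last 7 days, then format the "
         "results as a two-column table: date, incident summary."},
        {"role": "user", "content": "Build this week's incident table."},
    ],
    tools=tools,
)

# The model should answer with a tool call whose arguments we execute ourselves.
print(response.choices[0].message.tool_calls)
```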
Alessio [00:35:28]: That's my pet topic of NPM for agents. How do you build dependency graphs of agents? And how do you share them? Because why do I have to rebuild some of the smaller levels of what you built already?

Swyx [00:35:40]: I have a quick follow-up question on agents managing other agents. It's a topic of a lot of research, both from Microsoft and from startups. What have you discovered as best practice for, let's say, a manager agent controlling a bunch of small agents? Is it two-way communication? I don't know if there should be a protocol format.

Stan [00:35:59]: To be completely honest, the state we are at right now is creating the simple agents. So we haven't even explored the meta-agents yet. We know it's there. We know it's going to be valuable. We know it's going to be awesome. But we're starting there because it's the simplest place to start, and it's also what the market understands. If you go to a company, a random B2B SaaS company not necessarily specialized in AI, and you take an operational team and you tell them, build some tooling for yourself, they'll understand the small agents. If you tell them, build AutoGPT, they'll be like, auto what?

Swyx [00:36:31]: And I noticed that in your language, you're very much focused on non-technical users. You don't really mention API here. You mention "instructions" instead of "system prompt", right? That's very conscious.

Stan [00:36:41]: Yeah, it's very conscious. It's a mark of our designer, Ed, who pushed us to create a friendly product. I was knee-deep in AI when I started, obviously. And my co-founder, Gabriel, was at Stripe as well; we started a company together that got acquired by Stripe 15 years ago. After that, he was at Alan, a healthcare company in Paris, so he was a little bit less knee-deep in AI, but really focused on product. And I didn't realize how important it is to make that technology not scary to end users. It didn't feel scary to me, but it was really seen by Ed, our designer, that it was feeling scary to the users. And so we were very proactive and very deliberate about creating a brand that feels not too scary, and creating a wording and a language, as you say, that really tries to communicate the fact that it's going to be fine. It's going to be easy. You're going to make it.

Alessio [00:37:34]: And another big point that David had about Adept is that we need to build an environment for the agents to act in. And then if you have the environment, you can simulate what they do. How is that different when you're interacting with APIs and touching systems that you cannot really simulate? If you call the Salesforce API, you're just calling it.

Stan [00:37:52]: So I think that goes back to the DNA of the companies, which are very different. Adept, I think, was a product company with a very strong research DNA, and they were still doing research. One of their goals was building a model; that's why they raised a large amount of money, et cetera. We are 100% deliberately a product company. We don't do research. We don't train models. We don't even run GPUs. We're using the models that exist, and we try to push the product boundary as far as possible with the existing models. So that creates an issue. Indeed, to answer your question: when you're interacting with the real world, well, you cannot simulate, so you cannot improve the models. Even improving your instructions is complicated for a builder. The hope is that you can use models to evaluate the conversations, so that you can at least get feedback and some signal about the performance of the assistants. But if you take actual traces of interactions of humans with those agents, it is, even for us humans, extremely hard to decide whether it was a productive interaction or a really bad interaction. You don't know why the person left. You don't know if they left happy or not. So, being extremely, extremely, extremely pragmatic here, it becomes a product issue.
We have to build a product that invites the end users to provide feedback, so that, as a first step, the person that is building the agent can iterate on it. As a second step, maybe later, when we start training models and doing post-training, et cetera, we can optimize around that for each of those companies.
Alessio [00:39:17]: Do you see products in the future offering a kind of simulation environment, the same way all SaaS now offers APIs to build programmatically? In cybersecurity, there are a lot of companies working on building simulated environments so that you can then use agents to red team, but I haven't really seen that here.
Stan [00:39:34]: Yeah, no, me neither. That's a super interesting question. I think it's really going to depend, because you need to simulate to generate data, and you need data to train models. And the question at the end is: are we going to be training models, or are we just going to be using frontier models as they are? On that question, I don't have a strong opinion. It might be the case that we'll be training models, because in all of those AI-first products, the model is so close to the product surface that, as you get big and you want to really own your product, you're going to have to own the model as well. Owning the model doesn't mean doing the pre-training; that would be crazy. But at least having an internal post-training realignment loop makes a lot of sense. And if we see many companies going towards that over time, then there might be incentives for the SaaS's of the world to provide assistance in getting there. But at the same time, there's a tension, because those SaaS don't want to be interacted with by agents; they want the human to click on the button. Yeah, they've got to sell seats. Exactly.
Swyx [00:40:41]: Just a quick question on models. I'm sure you've used many, probably not just OpenAI. Would you characterize some models as better than others? Do you use any open source models? What have been the trends in models over the last two years?
Stan [00:40:53]: We've seen over the past two years a bit of a race between models. At times it's the OpenAI model that is the best; at times it's the Anthropic models. Our take is that we are agnostic and we let our users pick their model. Oh, they choose? Yeah, so when you create an assistant or an agent, you can just say, oh, I'm going to run it on GPT-4, GPT-4 Turbo, or...
Swyx [00:41:16]: Don't you think for the non-technical user, that is actually an abstraction that you should take away from them?
Stan [00:41:20]: We have a sane default. We move the default to the latest model that comes out, and it's actually not very visible. In our flow to create an agent, you would have to go into the advanced settings and pick your model. So this is something that the technical person will care about, but it's a bit too complicated for the...
Swyx [00:41:40]: And do you care most about function calling or instruction following or something else?
Stan [00:41:44]: I think we care most about function calling, because there's nothing worse than a function call including incorrect parameters, or being a bit off, because it just drives the whole interaction off.
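For readers who haven't touched function calling: the model is handed a typed schema for each tool and must fill in the parameters itself, which is exactly where it can be "a bit off". Below is a rough illustration using the OpenAI Python client; the tool name and its fields are invented for the example, and an API key in the environment is assumed.

```python
# Illustrative only: a precise tool schema the model must fill in correctly.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_incident_table",  # hypothetical tool, not a real API
        "description": "Fetch the incidents shared in Slack for a given week.",
        "parameters": {
            "type": "object",
            "properties": {
                "week_start": {
                    "type": "string",
                    "description": "ISO date of the Monday, e.g. 2024-06-10",
                },
            },
            "required": ["week_start"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Build last week's incident table."}],
    tools=tools,
)
# The model picks the tool and fills week_start from the conversation; a wrong
# or missing parameter here is what derails the whole interaction.
print(resp.choices[0].message.tool_calls)
```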
Swyx [00:41:56]: Yeah, so you've got the Berkeley function-calling leaderboard.
Stan [00:42:00]: These days, it's funny how the comparison between GPT-4o and GPT-4 Turbo is still up in the air on function calling. I personally don't have proof, but I know many people, and I'm probably one of them, who think that GPT-4 Turbo is still better than GPT-4o on function calling. Wow. We'll see what comes out of the o1 class if it ever gets function calling. And Claude 3.5 Sonnet is great as well. They innovated in an interesting way, which was never quite publicized: they have a kind of chain-of-thought step whenever you use a Claude model, or a Sonnet model, with function calling. That chain-of-thought step doesn't exist when you just interact with it to answer questions. But when you use function calling, you get that step, and it really helps getting better function calling.
Swyx [00:42:43]: Yeah, we actually just recorded a podcast with the Berkeley team that runs that leaderboard this week. So they just released V3.
Stan [00:42:49]: Yeah.
Swyx [00:42:49]: It was V1 like two months ago, and then V2, V3. Turbo is on top.
Stan [00:42:53]: Turbo is on top. Turbo is over 4o.
Swyx [00:42:54]: And then in third place is xLAM from Salesforce, which is a large action model they've been trying to popularize.
Stan [00:43:01]: Yep.
Swyx [00:43:01]: o1-mini is actually on here, I think. o1-mini is number 11.
Stan [00:43:05]: But arguably, o1-mini hasn't been aligned for that. Yeah.
Alessio [00:43:09]: Do you use leaderboards? Do you have your own evals? I mean, this is kind of intuitive, right? Like using the older model is better. I think most people just upgrade. Yeah. What's the eval process like?
Stan [00:43:19]: It's funny because I've been doing research for three years, and we have bigger stuff to cook. When you're deploying in a company, one thing where we really spike is that when we manage to activate the company, we have crazy penetration. The highest penetration we have is 88% daily active users within the entire employee base of the company. The average penetration and activation we have in our current enterprise customers is more like 60% to 70% weekly active. So we basically have the entire company interacting with us. And when you're there, there is so much stuff that matters more than evals, than getting the best model. Because there are so many places where you can create products or do stuff that will give you the 80%, whereas deciding if it's GPT-4 or GPT-4 Turbo, et cetera, will just give you a 5% improvement. The reality is that you want to focus on the places where you can really change the direction or change the interaction more drastically. But that's something that we'll have to do eventually, because we still want to be serious people.
Swyx [00:44:24]: It's funny because in some ways, the model labs are competing for you, right? You don't have to do any effort. You just switch models and then it'll grow. What are you really limited by? Is it additional sources?
Stan [00:44:36]: It's not models, right?
Swyx [00:44:37]: You're not really limited by quality of model.
Stan [00:44:40]: Right now, we are limited by the infrastructure part, which is the ability for users to easily connect to all the data they need to do the job they want to do.
Swyx [00:44:51]: Because you maintain all your own stuff. You know, there are companies out there that are starting to provide integrations as a service, right? I used to work in an integrations company.
Stan [00:44:59]: Yeah, I know. It's just that there are some intricacies about how you chunk stuff and how you process information from one platform to the other. If you look at the end of the spectrum, you could say, oh, I'm going to support Airbyte, and Airbyte has...
Swyx [00:45:12]: I used to work at Airbyte.
Stan [00:45:13]: Oh, really? That makes sense.
Swyx [00:45:14]: They're French founders as well.
Stan [00:45:15]: I know Jean very well. I'm seeing him today. And the reality is that if you look at Notion, Airbyte does the job of taking Notion and putting it in a structured way. But the way it is, it's not really usable to actually make it available to models in a useful way. Because you get all the blocks, details, et cetera, which is useful for many use cases.
Swyx [00:45:35]: It's also for data scientists and not for AI.
Stan [00:45:38]: The reality of Notion is that when you have a page, there's a lot of structure in it, and you want to capture that structure and chunk the information in a way that respects it. In Notion, you have databases. Sometimes those databases are real tabular data. Sometimes those databases are full of text. You want to get the distinction and understand that this database should be considered like text information, whereas this other one is actually quantitative information. And to really get a very high quality interaction with that piece of information, I haven't found a solution that will work without us owning the connection end-to-end.
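A hedged sketch of that tabular-versus-text distinction in Python; the heuristic, the threshold, and the property names are all invented for illustration and have nothing to do with Dust's actual connector logic.

```python
# Invented heuristic: call a database "tabular" if at least half of its
# columns carry typed, quantitative-ish properties; otherwise treat it as
# prose to be chunked like a document.
TYPED = {"number", "date", "select", "checkbox"}

def classify_database(columns: dict[str, str]) -> str:
    """columns maps column name to its (Notion-style) property type."""
    typed = sum(1 for t in columns.values() if t in TYPED)
    return "tabular" if typed >= len(columns) / 2 else "text"

def chunk(rows: list[str], kind: str) -> list[str]:
    if kind == "tabular":
        # Keep rows whole so the model sees quantitative records intact.
        return rows
    # Otherwise flatten the rich text and chunk it like prose.
    return " ".join(rows).split(". ")

cols = {"Deal": "rich_text", "ARR": "number", "Close date": "date"}
print(classify_database(cols))  # -> "tabular"
```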
Swyx [00:46:15]: That's why I don't invest in these. There's Composio, there's All Hands from Graham Neubig, there are all these other companies that are like, we will do the integrations for you, we have the open source community, we'll do it off the shelf. But then you are so specific in your needs that you want to own it.
Stan [00:46:28]: Yeah, exactly.
Swyx [00:46:29]: You can talk to Michel about that. You know, he wants to put the AI in there.
Stan [00:46:30]: Yeah, I will. I will.
Swyx [00:46:35]: Cool. What are we missing?
Alessio [00:46:36]: What are the things that are sneakily hard that you're tackling, that maybe people don't even realize are really hard?
Stan [00:46:43]: The real part, as we've touched on throughout the conversation, is really building the infra that works for those agents, because it's tedious work. It's an evergreen piece of work, because you always have an extra integration that will be useful to a non-negligible set of your users. What I'm super excited about is that there are so many interactions that shouldn't be conversational interactions and that could be very useful. Basically, we have the firehose of information of those companies, and there are not going to be that many companies that capture that firehose. When you have the firehose of information, you can do a ton of stuff with models that is not just accelerating people but giving them superhuman capability, even with current model capability, because you can just sift through much more information. An example is documentation repair. If I have the firehose of Slack messages and new Notion pages, and somebody says, I own that page, I want to be updated when there is a piece of information that should update that page, this is now possible. You get an email saying, oh, look at that Slack message. It says the opposite of what you have in that paragraph. Maybe you want to update it, or just ping that person. I think there is a lot to be explored on the product layer in terms of what it means to interact productively with those models. And that's a problem that's extremely hard and extremely exciting.
Swyx [00:48:00]: One thing you keep mentioning about infra work: obviously, Dust is building that infra and serving it in a very consumer-friendly way. You always talk about infra being additional sources, additional connectors. That is very important. But I'm also interested in the vertical infra. There is an orchestrator underlying all these things where you're doing asynchronous work. The simplest example is a cron job: you just schedule things. But also, for if-this-then-that, you have to wait for something to be executed before proceeding to the next task. I used to work on an orchestrator as well, Temporal.
Stan [00:48:31]: We used Temporal.
Swyx [00:48:34]: Oh, you used Temporal? How was the experience? I need the NPS.
Stan [00:48:36]: We're doing a customer discovery call now.
Swyx [00:48:39]: But you can also complain to me, because I don't work there anymore.
Stan [00:48:42]: No, we love Temporal. There are some edges that are a bit rough, surprisingly rough, where you would say, why is it so complicated?
Swyx [00:48:49]: It's always versioning.
Stan [00:48:50]: Yeah, stuff like that. But we really love it. And we use it for exactly what you said: managing the entire set of stuff that needs to happen so that, in semi-real time, we get all the updates from Slack or Notion or GitHub into the system. And whenever we see a piece of information go through, we maybe trigger workflows to run agents, because they need to provide alerts to users and stuff like that. And Temporal is great. Love it.
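As a rough illustration of that sync pattern, here is a toy workflow using Temporal's Python SDK (temporalio). The activity is stubbed and all the names are invented; a real deployment would also need a worker, a client, and actual Slack/Notion/GitHub connectors.

```python
# Toy version of "pull updates from a source, then kick off downstream work".
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def pull_updates(source: str) -> list[str]:
    # Stub: a real activity would page through the source's API since the
    # last sync cursor and return the new messages or pages.
    return [f"new update from {source}"]

@workflow.defn
class SyncWorkflow:
    @workflow.run
    async def run(self, source: str) -> int:
        updates = await workflow.execute_activity(
            pull_updates,
            source,
            start_to_close_timeout=timedelta(minutes=5),
        )
        # Here you might start child workflows that run agents and alert
        # users whenever a relevant piece of information flows through.
        return len(updates)
```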
Swyx [00:49:17]: You haven't evaluated others. You don't want to build your own. You're happy with...
Stan [00:49:21]: Oh, no, we're not in the business of replacing Temporal. And Temporal, or any other competitive product, they're very general. There's an interesting theory about buy versus build. I think in that case, when you're a high-growth company, your buy-build trade-off is very much on the side of buy: if you have the capability, you're just going to be saving time, and you can focus on your core competency, et cetera. And it's funny, because we're starting to see the post-high-growth company, the post-SKF company, going back on that trade-off, interestingly. That's the Klarna news about removing Zendesk and Salesforce. Do you believe that, by the way?
Alessio [00:49:56]: Yeah, I did a podcast with them.
Stan [00:49:58]: Oh, yeah?
Alessio [00:49:58]: It's true.
Swyx [00:49:59]: No, no, I know. Of course they say it's true, but also, how well is it going to go?
Stan [00:50:02]: So I'm not talking about deflecting customer traffic. I'm talking about building AI on top of Salesforce and Zendesk, basically, if I understand correctly. And all of a sudden, your product surface becomes much smaller, because you're interacting with an AI system that will take some actions. And so all of a sudden, you don't need the product layer anymore, and you realize that, oh, those things are just databases that I pay a hundred times the price for, right? Because you're a post-SKF company, you have tech capabilities, you are incentivized to reduce your costs, and you have the capability to do so. And then it makes sense to just scratch the SaaS away. So it's interesting that we might see kind of a bad time for SaaS in post-hyper-growth tech companies. It's still a big market, but it's not that big, because if you're not a tech company, you don't have the capabilities to reduce that cost, and if you're a high-growth company, you're always going to be buying, because you go faster that way. But that's an interesting new category of companies that might remove some SaaS.
Swyx [00:51:02]: Yeah, Alessio's firm has an interesting thesis on the future of SaaS in AI.
Alessio [00:51:05]: Service as software, we call it. The most extreme version is: why is there any software at all? Ideally, it's all a labor interface where you're asking somebody to do something for you, whether that's a person, an AI agent, or whatnot.
Stan [00:51:17]: Yeah, yeah, that's interesting.
Swyx [00:51:19]: I have to ask: are you paying for Temporal Cloud, or are you self-hosting?
Stan [00:51:22]: Oh, no, we're paying, we're paying.
Swyx [00:51:24]: Oh, okay, interesting.
Stan [00:51:26]: We're paying way too much. It's crazy expensive, but it makes us...
Swyx [00:51:28]: That's why, as a shareholder, I like to hear that.
Stan [00:51:31]: It makes us go faster, so we're happy to pay.
Swyx [00:51:33]: Other things in the infra stack? I just want a list for other founders to think about. Ops, API gateway, evals. Anything interesting there that you build or buy?
Stan [00:51:41]: There's always an interesting question there. We've been building a lot around the interface to models, because Dust, the original version, was an orchestration platform, and we basically provide a unified interface to every model provider.
Swyx [00:51:56]: That's what I call a gateway.
Stan [00:51:57]: We have that because Dust was that, and so we continued building upon it, and we own it. But that's an interesting question: do you want to build that or buy it?
Swyx [00:52:06]: Yeah, I always say LiteLLM is the current open source consensus.
Stan [00:52:09]: Exactly, yeah. There's an interesting question there.
Swyx [00:52:12]: Ops, Datadog, just tracking.
Stan [00:52:14]: Oh yeah, Datadog is an obvious one. What are the mistakes that I regret? I started with pure JavaScript, not TypeScript, and if you're wondering, oh, I want to go fast, I'll do a little bit of JavaScript: no, don't. Just start with TypeScript.
Swyx [00:52:30]: I see, okay. So interesting: you are a research engineer that came out of OpenAI that bet on TypeScript.
Stan [00:52:36]: Well, the reality is that if you're building a product, you're going to be doing a lot of JavaScript, right? And Next, we're using Next.js, as an example. It's
This week we're still waiting for Ubuntu Core, but the wait is over for AMD's new 9800X3D processor! We get better kernel PWM support, Russia appears to be forking the kernel, the Mozilla Foundation makes cuts, and Framework is teaming up with Mint. For tips we have pw-cat for sniffing on PipeWire, nvtop for sniffing on your GPU, SSH jump servers, and the Zen browser. Find the show notes at https://bit.ly/4fmWf22 and enjoy! Host: Jonathan Bennett Co-Hosts: David Ruggles, Ken McDonald, and Jeff Massie Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Malicious NPM packages are sneaking into codebases while FFmpeg devs prove old-school assembly skills can still smoke the competition. Plus, a rare bee species takes on Zuck's AI dreams.
From Nextcloud Breakup to Blissful Reunion: Chris's journey back to a smarter setup. Plus, Jellyfin's game-changing features, and a beloved self-hosted app gets the upgrade we've all been waiting for.
This week we cover news from Pine on another piece of hardware that runs Doom, NetworkManager and Godot have a shared struggle, and FFmpeg drops 7.1. Then we chat about Audacious, look at XFCE's Wayland work, talk about Linux 6.12, and reminisce about BitTorrent. For tips, there's EtchDroid for writing boot disks from Android, install for command line installations, Neovim for editing needs, and truncate for trimming bytes off the end of files. The show notes are available at https://tinyurl.com/23zpxeyf with links to each topic! Host: Jonathan Bennett Co-Hosts: Rob Campbell, Jeff Massie, and Ken McDonald Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
This week we dig into the Verizon outage. We'll talk about why and how you can host your own communication infrastructure, so an outage is less detrimental to your life! -- During The Show -- 01:35 Notes with sync? - Jou Jou How Steve takes notes Standard Notes (https://app.standardnotes.com/) Joplin (https://joplinapp.org/) Simple Mobile Tools (https://simplemobiletools.com/) Fossify (https://www.fossify.org/apps/) Nextcloud Notes app Collabora 09:12 Listener Reacts To Distros - Aaron Issues with Ubuntu Go for Fedora Endless OS Write in with your Ubuntu issues Mind Drip One Offline Windows Setup (https://docs.minddripone.com/Windows/windows-11-offline-setup/) 15:27 Audio Issues Mixer died Loaner from company Got the board fixed Sample rate changed Sample rate manually fixed and digital recorder back in production 17:48 News Wire Firefox 131 - mozilla.org (https://www.mozilla.org/en-US/firefox/131.0/releasenotes/) FFmpeg 7.1 - ffmpeg.org (https://ffmpeg.org) GnuCash 5.9 - gnucash.org (https://www.gnucash.org/news.phtml) Postgres 17.1 - postgresql (https://www.postgresql.org/about/news/postgresql-17-released-2936/) Winamp Source Code - bleepingcomputer.com (https://www.bleepingcomputer.com/news/software/winamp-releases-source-code-asks-for-help-modernizing-the-player/) Liya 2.1 - notebookcheck.net (https://www.notebookcheck.net/Arch-Linux-based-Liya-2-1-rolls-out-with-the-6-11-0-1-kernel.893752.0.html) Arch and Valve Collaboration - lists.archlinux.org (https://lists.archlinux.org/archives/list/arch-dev-public@lists.archlinux.org/thread/RIZSKIBDSLY4S5J2E2STNP5DH4XZGJMR/) Tails and Tor Join Forces - blog.torproject.org (https://blog.torproject.org/tor-tails-join-forces/) Statial-B - hackaday.com (https://hackaday.com/2024/09/25/the-statial-b-open-source-adjustable-mouse/) Molmo Open Source AI Model - venturebeat.com (https://venturebeat.com/ai/ai2s-new-molmo-open-source-ai-models-beat-gpt-4o-claude-on-some-benchmarks/) 18:55 Arch Flatpak Issue Installed Arch, ran the ansible, restored backups Flatpak installs failed Github Issue (https://github.com/flatpak/flatpak/issues/5111) Arch Flatpak Fix:
```
flatpak remote-delete --force flathub
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```
Let's do a better job 26:10 Verizon Outage No clear communication Speculation abounds No clear ETA for restoration of service Never seen people behave this badly Self Hosted services unaffected Calls and Texts via 3CX and JMP.chat Internal communications over Matrix Companies should self host their own services Communication is changing Master Switch (https://www.penguinrandomhouse.com/books/194417/the-master-switch-by-tim-wu/) Steve didn't even notice the outage Other carriers reported issues contacting Verizon customers Don't be a slave to the glowing brick 40:33 Thunderbird on Android Mozilla took over K9 Mail Unified Inbox Still in Beta Asking for feedback Thunderbird Matrix Chat (https://chat.mozilla.org/#/room/#tb-android:mozilla.org) Thunderbird Blog Post (https://blog.thunderbird.net/2024/09/help-us-test-the-thunderbird-for-android-beta/) 46:00 Reproducibility Combinations of packages can be problematic Automation can fail Never will be completely safe Automation can get you a long way NixOS ZFS Snapshots -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! 
This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/409) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
Scott walks Wes through the new Syntax Production Assistant Desktop App, designed to streamline and automate their complex publishing process. From tech stack choices like Svelte 5 and Rust to AI-driven features, they dive into how this tool keeps everything consistent. Show Notes 00:00 Welcome to Syntax! 00:44 Brought to you by Sentry.io. 01:37 What was the idea? 05:42 The tech. Svelte 5, Tauri, Rust, FFmpeg. 08:32 Markdown editor. ink-mde, Dillinger. 09:32 Epoch timestamps. Epoch.vercel. 10:01 Updating front-matter. 10:10 Dexie.js function. 11:25 Backing up data. 11:58 Rust functions. 12:58 Why a desktop app and not a website? 14:38 Some small AI features. 16:26 Challenges with OAuth. 20:03 Publishing challenges. 23:29 Could this work on Windows? 23:54 Debugging. 26:23 Deciphering Apple logs. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
News includes the upcoming signed installers for Livebook and Elixir on Windows, the release of Telemetry v1.3 with improved documentation, LiveView Native 0.3.0's announcement ahead of ElixirConf, Google Research introducing an alternative SQL syntax with a pipe, a Livebook leveraging LLMs and FFmpeg for media conversion, legal updates on the US non-compete agreements ban, potential antitrust actions against Google, and more! Show Notes online - http://podcast.thinkingelixir.com/218 (http://podcast.thinkingelixir.com/218) Elixir Community News - https://x.com/josevalim/status/1825954736094457943 (https://x.com/josevalim/status/1825954736094457943?utm_source=thinkingelixir&utm_medium=shownotes) – The next versions of Livebook and Elixir will have signed installers on Windows, thanks to the Erlang Ecosystem Foundation and Wojtek Mach. - https://x.com/wojtekmach/status/1826521109476344035 (https://x.com/wojtekmach/status/1826521109476344035?utm_source=thinkingelixir&utm_medium=shownotes) – Wojtek Mach discusses the challenges of packaging Livebook into a .msix for the Windows Store and asks for contributions from those familiar with the process. - https://hexdocs.pm/telemetry/1.3.0/readme.html (https://hexdocs.pm/telemetry/1.3.0/readme.html?utm_source=thinkingelixir&utm_medium=shownotes) – Telemetry v1.3 is out with improved documentation, rewritten to ExDoc from Erlang edoc, thanks to contributions from Wojtek Mach and Andrea Leopardi. OTP 27 is required. - https://x.com/bcardarella/status/1826266402631889091 (https://x.com/bcardarella/status/1826266402631889091?utm_source=thinkingelixir&utm_medium=shownotes) – LiveView Native 0.3.0 is now released with the official announcement at ElixirConf. Blog posts, tutorials to follow. - https://x.com/bcardarella/status/1826279303623082421 (https://x.com/bcardarella/status/1826279303623082421?utm_source=thinkingelixir&utm_medium=shownotes) – Additional details about the LiveView Native 0.3.0 release. - https://twitter.com/simonw/status/1827482890680332386 (https://twitter.com/simonw/status/1827482890680332386?utm_source=thinkingelixir&utm_medium=shownotes) – Google Research released a paper on an alternative SQL syntax with a pipe, similar to Ecto querying syntax. - https://simonwillison.net/2024/Aug/24/pipe-syntax-in-sql/ (https://simonwillison.net/2024/Aug/24/pipe-syntax-in-sql/?utm_source=thinkingelixir&utm_medium=shownotes) – More details on the new SQL syntax introduced by Google for ZetaSQL. - https://twitter.com/ac_alejos/status/1794105872680972458 (https://twitter.com/ac_alejos/status/1794105872680972458?utm_source=thinkingelixir&utm_medium=shownotes) – A Livebook that uses LLMs and FFmpeg to simplify the process of converting videos or audio by suggesting the right flags and switches. - https://github.com/acalejos/CinEx (https://github.com/acalejos/CinEx?utm_source=thinkingelixir&utm_medium=shownotes) – Detailed information on using LLMs within Livebook for conversion tasks. - https://www.reuters.com/legal/us-judge-strikes-down-biden-administration-ban-worker-noncompete-agreements-2024-08-20/ (https://www.reuters.com/legal/us-judge-strikes-down-biden-administration-ban-worker-noncompete-agreements-2024-08-20/?utm_source=thinkingelixir&utm_medium=shownotes) – A US Judge struck down the FTC's ban on non-compete agreements, stating the FTC lacks legal authority and the ban is too wide-reaching. 
- https://www.nytimes.com/2024/08/13/technology/google-monopoly-antitrust-justice-department.html (https://www.nytimes.com/2024/08/13/technology/google-monopoly-antitrust-justice-department.html?utm_source=thinkingelixir&utm_medium=shownotes) – The US government is considering ordering Google to be broken up following antitrust allegations. - https://www.macrumors.com/2024/08/22/apple-eu-default-app-update/ (https://www.macrumors.com/2024/08/22/apple-eu-default-app-update/?utm_source=thinkingelixir&utm_medium=shownotes) – Apple might allow EU residents to delete apps currently blocked from removal, addressing app store issues in the EU. - Living in a time when industry rules are being challenged creates opportunities for new businesses and markets, as highlighted by ongoing legal issues with major tech companies like Google and Apple. Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Message the show on Fediverse - @ThinkingElixir@genserver.social (https://genserver.social/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - Mark Ericksen on Fediverse - @brainlid@genserver.social (https://genserver.social/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - David Bernheisel on Fediverse - @dbern@genserver.social (https://genserver.social/dbern)
Rust meets Linux in a clash of coding cultures. Why some developers are resisting, and where things go from here.
Sponsored By:
Core Contributor Membership: Take $1 a month off your membership for a lifetime!
Tailscale: Tailscale is programmable networking software that is private and secure by default. Get it free on up to 100 devices!
1Password Extended Access Management: 1Password Extended Access Management is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Support LINUX Unplugged
Links:
Today's guest, Nicholas Carlini, a research scientist at DeepMind, argues that we should be focusing more on what AI can do for us individually, rather than trying to have an answer for everyone.
"How I Use AI" - A Pragmatic Approach
Carlini's blog post "How I Use AI" went viral for good reason. Instead of giving a personal opinion about AI's potential, he simply laid out how he, as a security researcher, uses AI tools in his daily work. He divided it into 12 sections:
* To make applications
* As a tutor
* To get started
* To simplify code
* For boring tasks
* To automate tasks
* As an API reference
* As a search engine
* To solve one-offs
* To teach me
* Solving solved problems
* To fix errors
Each of the sections has specific examples, so we recommend going through it. It also includes all the prompts used; in the "make applications" case, it's 30,000 words total!
My personal takeaway is that the majority of the work AI can do successfully is what humans dislike doing: writing boilerplate code, looking up docs, taking repetitive actions, etc. These are usually boring tasks with little creativity but a lot of structure. This is the strongest argument for why LLMs, especially for code, are more beneficial to senior employees: if you can get the boring stuff out of the way, there's a lot more value you can generate. This is less and less true as you get to entry-level jobs, which are mostly boring and repetitive tasks. Nicholas argues both sides ~21:34 in the pod.
A New Approach to LLM Benchmarks
We recently did a Benchmarks 201 episode, a follow-up to our original Benchmarks 101, and some of the issues have stayed the same. Notably, there's a big discrepancy between what benchmarks like MMLU test and what the models are actually used for. Carlini created his own domain-specific language for writing personalized LLM benchmarks. The idea is simple but powerful:
* Take tasks you've actually needed AI for in the past.
* Turn them into benchmark tests.
* Use these to evaluate new models based on your specific needs.
It can represent very complex tasks, from a single code generation to drawing a US flag using C:
"Write hello world in python" >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world")
"Write a C program that draws an american flag to stdout." >> LLMRun() >> CRun() >> VisionLLMRun("What flag is shown in this image?") >> (SubstringEvaluator("United States") | SubstringEvaluator("USA"))
This approach solves a few problems:
* It measures what's actually useful to you, not abstract capabilities.
* It's harder for model creators to "game" your specific benchmark, a problem that has plagued standardized tests.
* It gives you a concrete way to decide if a new model is worth switching to, similar to how developers might run benchmarks before adopting a new library or framework.
Carlini argues that if even a small percentage of AI users created personal benchmarks, we'd have a much better picture of model capabilities in practice.
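To give a feel for how a pipeline DSL like that can be built, here is a minimal, self-contained Python sketch of the ">>" chaining. Carlini's real framework actually runs generated code, calls vision models, and supports boolean combinators; this stub only mimics the shape, with the model call and code execution faked.

```python
# Minimal pipeline DSL: each Stage wraps a function, and ">>" composes stages.
class Stage:
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # left >> right: feed the left stage's output into the right stage.
        return Stage(lambda x: other.fn(self.fn(x)))

    def __call__(self, x):
        return self.fn(x)

def LLMRun():
    # Fake model: always "answers" the prompt with a hello-world program.
    return Stage(lambda prompt: "print('hello world')")

def PythonRun():
    # Fake interpreter: pretend we executed the code and captured stdout.
    return Stage(lambda code: "hello world")

def SubstringEvaluator(s):
    return Stage(lambda output: s in output)

test = (Stage(lambda _: "Write hello world in python")
        >> LLMRun() >> PythonRun() >> SubstringEvaluator("hello world"))
print(test(None))  # -> True
```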
AI Security
While much of the AI security discussion focuses on either jailbreaks or existential risks, Carlini's research targets the space in between. Some highlights from his recent work:
* LAION 400M data poisoning: By buying expired domains referenced in the dataset, Carlini's team could inject arbitrary images into models trained on LAION 400M. You can read the paper "Poisoning Web-Scale Training Datasets is Practical" for all the details. This is a great example of expanding the scope beyond the model itself, and looking at the whole system and how it can become vulnerable.
* Stealing model weights: They demonstrated how to extract parts of production language models (like OpenAI's) through careful API queries. This research, "Stealing Part of a Production Language Model", shows that even black-box access can leak sensitive information.
* Extracting training data: In some cases, they found ways to make models regurgitate verbatim snippets from their training data. He and Milad Nasr wrote a paper on this as well: "Scalable Extraction of Training Data from (Production) Language Models". They also think this might be applicable to extracting RAG results from a generation.
These aren't just theoretical attacks. They've led to real changes in how companies like OpenAI design their APIs and handle data. If you really miss logit_bias and logit results by token, you can blame Nicholas :)
We had a ton of fun also chatting about things like Conway's Game of Life, how much data can fit on a single piece of paper, and porting Doom to JavaScript. Enjoy!
Show Notes
* How I Use AI
* My Benchmark for LLMs
* Doom JavaScript port
* Conway's Game of Life
* Tic-Tac-Toe in one printf statement
* International Obfuscated C Code Contest
* Cursor
* LAION 400M poisoning paper
* Man vs Machine at Black Hat
* Model Stealing from OpenAI
* Milad Nasr
* H.D. Moore
* Vijay Bolina
* Cosine.sh
* uuencode
Timestamps
* [00:00:00] Introductions
* [00:01:14] Why Nicholas writes
* [00:02:09] The Game of Life
* [00:05:07] "How I Use AI" blog post origin story
* [00:08:24] Do we need software engineering agents?
* [00:11:03] Using AI to kickstart a project
* [00:14:08] Ephemeral software
* [00:17:37] Using AI to accelerate research
* [00:21:34] Experts vs non-expert users as beneficiaries of AI
* [00:24:02] Research on generating less secure code with LLMs
* [00:27:22] Learning and explaining code with AI
* [00:30:12] AGI speculations?
* [00:32:50] Distributing content without social media
* [00:35:39] How much data do you think you can put on a single piece of paper?
* [00:37:37] Building personal AI benchmarks
* [00:43:04] Evolution of prompt engineering and its relevance
* [00:46:06] Model vs task benchmarking
* [00:52:14] Poisoning LAION 400M through expired domains
* [00:55:38] Stealing OpenAI models from their API
* [01:01:29] Data stealing and recovering training data from models
* [01:03:30] Finding motivation in your work
Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
Swyx [00:00:12]: Hey, and today we're in the in-person studio, which Alessio has gorgeously set up for us, with Nicholas Carlini. Welcome. Thank you. You're a research scientist at DeepMind. You work at the intersection of machine learning and computer security. You got your PhD from Berkeley in 2018, and your BA from Berkeley as well. And mostly we're here to talk about your blogs, because you are so generous in just writing up what you know. Well, actually, why do you write?
Nicholas [00:00:41]: Because I feel like it's fun to share what you've done. I sufficiently didn't like writing that I almost didn't do a PhD, because I knew how much writing was involved in writing papers. I was terrible at writing when I was younger. I did, like, the remedial writing classes when I was in university, because I was really bad at it.
So I still don't actually enjoy the act of writing. But I feel like it is useful to share what you're doing, and I like being able to talk about the things that I'm doing that I think are fun. And so I write because I want to have something to say, not because I enjoy the act of writing.
Swyx [00:01:14]: But yeah. It's a tool for thought, as they often say. Is there any sort of background or thing that people should know about you as a person? Yeah.
Nicholas [00:01:23]: So I tend to focus on, like you said, security work. I try attacking things, and I want to do high-quality security research. That's mostly what I spend my actual time trying to be a productive member of society doing. But then I get distracted, and I just like working on random fun projects. Like a Doom clone in JavaScript.
Swyx [00:01:44]: Yes.
Nicholas [00:01:45]: Like that. Or, you know, I've done a number of things that have absolutely no utility but are fun things to have done. And so it's interesting to say: you should work on fun things that just are interesting, even if they're not useful in any real way. And so that's what I tend to put up there: after I have completed something I think is fun, or if I think it's sufficiently interesting, I write something down there.
Alessio [00:02:09]: Before we go into AI, LLMs and whatnot: why are you obsessed with the Game of Life? You built multiplexing circuits in the Game of Life, which is mind-boggling. So where did that come from? And then how do you go from just clicking boxes on the UI web version to building multiplexing circuits?
Nicholas [00:02:29]: I like Turing completeness. The definition of Turing completeness is a computer that can run anything, essentially. And Conway's Game of Life is a very simple 2D cellular automaton, where you have cells that are either on or off, and a cell becomes on if, in the previous generation, some configuration holds true, and off otherwise. It turns out there's a proof that the Game of Life is Turing complete: you can run any program, in principle, using Conway's Game of Life. And so you can, therefore someone should. And so I wanted to do it. Some other people have done some similar things, but I got obsessed: we already know it's possible in theory; I want to try and actually make something I can run on my computer, like a real computer I can run. And so yeah, I've been going down this rabbit hole of trying to make a CPU that I can run semi-real-time on the Game of Life, and I have been making some reasonable progress there. And yeah, you know, Turing completeness is just a very fun trap you can go down.
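The update rule he is describing fits in a few lines of Python with NumPy; this version wraps around at the grid edges, and the glider is the classic five-cell pattern.

```python
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    # Count the 8 neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next generation iff it has exactly 3 live neighbors,
    # or it is alive now and has exactly 2.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(step(glider))  # the glider starts to move across the grid
```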
A while ago, as part of a research paper, I was able to show that in C, if you call into printf, it's Turing complete. Like printf, you know, which just prints numbers or whatever, right?
Swyx [00:03:39]: Yeah, but there should be no control flow stuff.
Nicholas [00:03:42]: Because printf has a percent-n specifier that lets you write an arbitrary amount of data to an arbitrary location. And the printf format specifier has an index into where it is in the loop that is in memory. So you can overwrite the location of where printf is currently indexing using percent-n. So you can get loops, you can get conditionals, and you can get arbitrary data writes again. So we sort of have another Turing complete language using printf, which again has essentially zero practical utility. But I feel like a lot of people get into programming because they enjoy the art of doing these things, and then they go work on developing some software application and lose all the joy in it. And I want to still have joy in doing these things. And so on occasion, I try to stop doing productive, meaningful things and just ask, what's a fun thing that we can do, and try to make that happen.
Alessio [00:04:39]: Awesome. So you've been kind of a pioneer in the AI security space. You've done a lot of talks starting back in 2018. We'll leave that to the end, because I know the security part has maybe a smaller audience, but it's a very intense audience. So I think that'll be fun. But everybody in our Discord started posting your "How I Use AI" blog post, and we were like, we should get Carlini on the podcast. And then you were so nice to just, yeah, and then I sent you an email and you were like, okay, I'll come.
Swyx [00:05:07]: And I was like, oh, I thought that would be harder.
Alessio [00:05:10]: I think there's, as you said in the blog post, a lot of misunderstanding about what LLMs can actually be used for. What are they useful at? What are they not good at? And whether or not it's even worth arguing about what they're not good at, because they're obviously not. If you cannot count the R's in a word, that's just not what it does. So how painful was it to write such a long post, given that you just said that you don't like to write? And then we can kind of run through the things, but maybe just talk about the motivation, why you thought it was important to do it.
Nicholas [00:05:39]: Yeah. So I wanted to do this because I feel like most people who write about language models being good or bad have some underlying message: they have their camp, and their camp is, AI is bad, or AI is good, or whatever. And they spin whatever they're going to say according to their ideology, and they don't actually just look at what is true in the world. So I've read a lot of things where people say how amazing they are and how all programmers are going to be obsolete by 2024. And I've read a lot of things where people say they can't do anything useful at all, and it's only the people who've come off of blockchain crypto stuff who are here to make another quick buck and move on. And I don't really agree with either of these. And I'm not someone who cares really one way or the other how these things go. And so I wanted to write something that just says: look, let's ground reality in what we can actually do with these things. Because my actual research is in security, and in showing that these models have lots of problems. This is my day-to-day job: saying we probably shouldn't be using these in lots of cases. So I thought I could have a little bit of credibility in saying: it is true, they have lots of problems, we maybe shouldn't be deploying them in lots of situations, and still, they are also useful. And that is the bit that I wanted to get across: to say, I'm not here to try and sell you on anything. I just think that they're useful for the kinds of work that I do. And hopefully, some people would listen. And it turned out that a lot more people liked it than I thought.
But yeah, that was the motivation behind why I wanted to write this.
Alessio [00:07:15]: So you had about a dozen sections of how you actually use AI. Maybe we can just run through them all, and then the ones where you have extra commentary to add, we can...
Nicholas [00:07:27]: Sure. Yeah, yeah. I didn't put as much thought into this as maybe was deserved. I probably spent, I don't know, definitely less than 10 hours putting this together.
Swyx [00:07:38]: Wow.
Alessio [00:07:39]: It took me close to that to do a podcast episode. So that's pretty impressive.
Nicholas [00:07:43]: Yeah. I wrote it in one pass. I've gotten a number of emails like, you got this editing thing wrong, you got this other thing wrong. It's like, I just haven't looked at it. I feel like I still don't like writing. And so because of this, the way I tend to treat this is: I will put it together into the best format that I can at the time, and then put it on the internet, and then never change it. And this is an aspect of the research side of me: once a paper is published, it is done as an artifact that exists in the world. I could forever edit the very first thing I ever put out to make it the most perfect version of what it is, and I would do nothing else. And so I find it useful to be like: this is the artifact, I will spend some certain amount of hours on it, which is what I think it is worth, and then I will just...
Swyx [00:08:22]: Yeah.
Nicholas [00:08:23]: Timeboxing.
Alessio [00:08:24]: Yeah. Stop. Yeah. Okay. We just recorded an episode with the founder of Cosine, which is like an AI software engineer colleague. You said it took you 30,000 words to get GPT-4 to build you the "can GPT-4 solve this" kind of app. Where are we on the spectrum between ChatGPT being all you need to actually build something, and needing a full-on agent that does everything for you?
Nicholas [00:08:46]: Yeah. Okay. So I built a web app last year sometime that was just a fun demo, where you can guess whether or not GPT-4, at the time, could solve a given task. This is, as far as web apps go, very straightforward. You need basic HTML and CSS; you have a little slider that moves, you have a button, you sort of animate the text coming to the screen. The reason people are going here is not because they want to see my wonderful HTML, right? I used to know how to do modern HTML, in 2007, 2008. I was very good at fighting with IE6 and these kinds of things. I knew how to do that. I have not had to build any web app stuff in the meantime, which means that I know how everything works, but I don't know any of the new stuff. Flexbox is new to me. Flexbox is like 10 years old at this point, but it's just amazing being able to go to the model and say, write me this thing, and it will give me all of the boilerplate that I need to get going. Of course it's imperfect. It's not always going to get you the right answer, and it doesn't do anything that's complicated right now, but it gets you to the point where the only remaining work that needs to be done is the interesting hard part for me, the actual novel part. Even the current models, I think, are entirely good enough at doing this kind of thing that they're very useful. It may be the case that if you had something like you were saying, a smarter agent that could debug problems by itself, that might be even more useful. Currently, though, you can make a model into an agent by just copying and pasting error messages, for the most part. That's what I do: you run it and it gives you some code that doesn't work, and either I'll fix the code, or it will give me buggy code and I won't know how to fix it, and I'll just copy and paste the error message and say, it tells me this. What do I do? And it will just tell me how to fix it.
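That copy-paste loop is simple enough to automate. Here is a bare-bones sketch with the model call stubbed out; in practice you would plug in a real LLM client and, as he describes with Docker below, run the generated code in a sandbox rather than directly on your machine.

```python
import subprocess
import sys

def ask_model(messages: list[dict]) -> str:
    # Stub: swap in a real LLM call (OpenAI, Anthropic, etc.) returning code.
    return "print('hello')"

messages = [{"role": "user", "content": "Write a script that prints hello."}]
for _ in range(3):  # a few repair rounds is usually enough
    code = ask_model(messages)
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True)
    if result.returncode == 0:
        print(result.stdout, end="")
        break
    # The manual step, automated: paste the error back and ask what to do.
    messages.append({
        "role": "user",
        "content": f"It tells me this:\n{result.stderr}\nWhat do I do?",
    })
```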
You can't trust these things blindly, but I feel like most people on the internet already understand that things on the internet can't be trusted blindly. And so this is not a big mental shift you have to go through to understand that it is possible to read something and find it useful, even if it is not completely perfect in its output.
Swyx [00:10:54]: It's very human-like in that sense. It's the same ring of trust. I kind of think about it that way, if you had trust levels.
Alessio [00:11:03]: And there's maybe a couple that tie together. So there was "to make applications", and then there's "to get started", which is a similar kind of kickstart, maybe for a project that you know the LLM cannot solve entirely. Is that how you think about it?
Nicholas [00:11:15]: Yeah. So getting started on things is one of the cases where I think it's really great: I sort of use it as a personalized "help me use this technology I've never used before". So for example, I had never used Docker before January. I know what Docker is. Lucky you. Yeah, like I'm a computer security person; I have read lots of papers on all the technology behind how these things work. I know all the exploits on them; I've done some of these things. But I had never actually used Docker. But I wanted to be able to run the outputs of language model stuff in some controlled, contained environment, which I know is the right application. So I just ask it: I want to use Docker to do this thing. Tell me how to run a Python program in a Docker container. And it gives me a thing. I'm like, step back. You said Docker Compose. I do not know what this word Docker Compose is. Is this Docker? Help me. And it'll sort of tell me all of these things. And I'm sure this knowledge is out there on the internet; this is not some groundbreaking thing that I'm doing. But I just wanted it as a small piece of one thing I was working on, and I didn't want to learn Docker from first principles. At some point, if I need to, I can do that; I have the background to make that happen. But what I wanted to do was thing one. And it's very easy to get bogged down in the details of this other thing that helps you accomplish your end goal. I just want to know enough about Docker so I can do this particular thing, and I can check that it's doing the safe thing, because I sort of know enough about that from my other background. And so I can just have the model teach me exactly the one thing I want to know and nothing more. I don't need to worry about other things that the writer of this thinks are important but actually aren't. I can just stop the conversation and say: no, boring to me. Explain this detail. I don't understand. I think that's what it was very useful for me.
It would have taken me, you know, several hours to figure out some things that take 10 minutes if you can just ask exactly the question you want the answer to.
Alessio [00:13:05]: Have you had any issues with newer tools? Have you felt any meaningful kind of knowledge cutoff, where there's just not enough data on the internet? I'm sure the answer to this is yes.
Nicholas [00:13:16]: But I tend to just not use most of these things. I feel like the significant way in which I use machine learning models is probably very different than most people's, in that I'm a researcher and I get to pick what tools I use, and most of the things that I work on are fairly small projects. And so I can entirely see how someone who is in a big giant company, where they have their own proprietary legacy code base of a hundred million lines of code or whatever, just might not be able to use things the same way that I do. I still think there are lots of use cases there that are entirely reasonable that are not the same ones that I've put down. But I wanted to talk about what I have personal experience in being able to say is useful. And I would like it very much if someone who is in one of these environments would describe the ways in which they find current models useful to them, and not philosophize on what someone else might find useful, but actually say: here are real things that I have done that I found useful for me.
Swyx [00:14:08]: Yeah, this is what I often do to encourage people to write more, to share their experiences, because they often fear being attacked on the internet. But you are the ultimate authority on how you use things, and that's objectively true. So it cannot be debated. One thing that people are very excited about is the concept of ephemeral software, or personal software. This use case in particular basically lowers the activation energy for creating software, which I like as a vision. I don't think I have taken as much advantage of it as I could. I feel guilty about that. But also, we're trending towards there.
Nicholas [00:14:47]: Yeah. No, I mean, I do think that this is a direction that is exciting to me. One of the things I wrote was that a lot of the ways I use these models are for one-off things that I just need to happen, that I'm going to throw away in five minutes. And you can.
Swyx [00:15:01]: Yeah, exactly.
Nicholas [00:15:02]: Right. It's the kind of thing where it would not have been worth it for me to have spent 45 minutes writing it, because I don't need the answer that badly. But if it will only take me five minutes, then I'll just figure it out, run the program, and then get it right. And if it turns out that you ask the thing and it doesn't give you the right answer, well, I didn't actually need the answer that badly in the first place. Either I can decide to dedicate the 45 minutes or I cannot, but the cost of trying is fairly low. You see what the model can do. And if it can't, then, okay. When you're using these models, if you're always getting the answer you want, it means you're not asking them hard enough questions.
Swyx [00:15:35]: Say more.
Nicholas [00:15:37]: Lots of people only use them for very small particular use cases, and it always does the thing that they want. Yeah.
Swyx [00:15:43]: Like they use it like a search engine.
Nicholas [00:15:44]: Yeah. Or like one particular case. And if you're finding that when you're using these, it's always giving you the answer that you want, then probably it has more capabilities than you're actually using. And so I oftentimes try, when I have something that I'm curious about, to just feed it into the model and see: well, maybe it's just solved my problem for me. Most of the time it doesn't, but on occasion it's done things that would have taken me a couple hours, and it's been great and just solved everything immediately. And if it doesn't, then it's usually easier to verify whether or not the answer is correct than to have written it in the first place. And so you check, and you're like: well, that's just entirely misguided. Nothing here is right. I'm not going to do this. I'm going to go write it myself, or whatever.
Alessio [00:16:21]: Even for non-tech: I had to fix my irrigation system. I had an old irrigation system, and I didn't know how it worked or how to program it. I took a photo, I sent it to Claude, and it's like, oh yeah, that's the RT 900. I was like, oh wow, you know a lot of stuff.
Swyx [00:16:34]: Was it right?
Alessio [00:16:35]: Yeah, it was right.
Swyx [00:16:36]: It worked. Did you compare with OpenAI?
Alessio [00:16:38]: No, I canceled my OpenAI subscription, so I'm a Claude boy. Do you have a way to think about this one-off software thing? One way I talk to people about it is that LLMs are kind of converging to semantic serverless functions: you can say something, it can run the function, and then that's it. It just kind of dies there. Do you have a mental model for how long it should live, or anything like that?
Nicholas [00:17:02]: I don't think I have anything interesting to say here, no. I will take whatever tools are available in front of me and try to see if I can use them in meaningful ways. And if they're helpful, then great; if they're not, then fine. And, you know, I'm very excited about seeing all these people who are trying to make better applications that use these, all these kinds of things. I think that's amazing. I would like to see more of it, but I do not spend my time thinking about how to make this any better.
Alessio [00:17:27]: What's the most underrated thing in the list? I know there's simplify code, solving boring tasks. Or maybe is there something that you forgot to add that you want to throw in there?
Nicholas [00:17:37]: I mean, in the list I only put things that people could look at and go: I understand how this solved my problem. I didn't want to put things where the model was very useful to me, but it would not be clear to someone else that it was actually useful. So for example, one of the things that I use it a lot for is debugging errors. But the errors that I have are very much not the errors that anyone else in the world will have. And in order to understand whether or not the solution was right, you just have to trust me on it. Because, you know, I got my machine in a state where CUDA was not talking to whatever some other thing, the versions were mismatched, something, something, something, and everything was broken. And I could figure it out through interaction with the model; it told me the steps I needed to take. But at the end of the day, when you look at the conversation, you just have to trust me that it worked.
And I didn't want to write things online that were, like, you have to trust me on what I'm saying. I want everything that I said to have evidence: here's the conversation, you can go and check whether or not this actually solved the task as I said the model does. Because a lot of people, I feel like, say, I used a model to solve this very complicated task, and what they mean is the model did 10% and I did the other 90% or something. I wanted everything to be verifiable. And so one of the biggest use cases for me, I didn't describe at all, because it's not the kind of thing that other people could have verified by themselves. So that maybe is one of the things that I wish I had said a little bit more about, just stating the way this is done, because I feel like that didn't come across quite as well. But yeah, of the things that I talked about, the thing that I think is most underrated is the ability of it to solve the uninteresting parts of problems for me right now. One of the biggest arguments people make, which I don't understand, is: the model can only do things that people have done before, therefore the model is not going to be helpful in doing new research or discovering new things. And as someone whose day job is to do new things: what is research? Research is doing something literally no one else in the world has ever done before. And this is what I do every single day. 90% of it is not doing something new; 90% of it is doing things a million people have done before, and then a little bit of something that is new. There's a reason why we say we stand on the shoulders of giants. It's true. Almost everything that I do is something that's been done many, many times before. And that is the piece that can be automated. Even if the thing that I'm doing as a whole is new, it is almost certainly the case that the small pieces that build up to it are not. And a number of people who use these models, I feel like, expect that they can either solve the entire task or none of the task. But now I find myself very often, even when doing something very new and very hard, having models write the easy parts for me. And the reason I think this is so valuable is something everyone who programs understands: you're currently trying to solve some problem and then you get distracted. Whatever the case may be, someone comes and talks to you, you have to go look up something online, whatever it is. You lose a lot of time to that. And one of the ways we currently don't think about being distracted is this: you're solving some hard problem and you realize you need a helper function that does X, where X is a known algorithm. You say: give me the algorithm. I have a sparse graph, I need to make it dense. You can do this by doing some matrix multiplies. This is a solved problem. I knew how to do this 15 years ago, but it distracts me from the problem I'm thinking about in my mind. I needed this done. And so instead of spending my mental capacity on that problem and then coming back to the problem I was originally trying to solve, you can just ask the model: please solve this problem for me. It gives you the answer. You run it. You can check that it works very, very quickly. And now you go back to solving the problem without having lost all the mental state.
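A minimal sketch of the kind of throwaway helper he's describing here, converting a sparse edge-list graph into a dense adjacency matrix (hypothetical names, not code from the episode, and the point is exactly that the output is trivial to verify):

import numpy as np

def edges_to_dense(n_nodes: int, edges: list[tuple[int, int]]) -> np.ndarray:
    """Convert a sparse edge list into a dense adjacency matrix."""
    adj = np.zeros((n_nodes, n_nodes), dtype=np.uint8)
    for u, v in edges:
        adj[u, v] = 1
        adj[v, u] = 1  # assumes an undirected graph
    return adj

# Quick check, which is the whole point: easy to eyeball.
print(edges_to_dense(3, [(0, 1), (1, 2)]))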
And I feel like this is one of the things that's been very useful for me.
Swyx [00:21:34]: And in terms of this concept of expert users versus non-expert users, floors versus ceilings: you had some strong opinion here that basically it actually is more beneficial for non-experts.
Nicholas [00:21:46]: Yeah, I don't know. I think it could go either way. Let me give you the argument for both of these. So I can only speak on the expert user's behalf, because I've been doing computers for a long time. And so, yeah, the cases where it's useful for me are exactly these cases where I can check the output. I know that anything the model could do, I could have done, and I could have done better. I can check every single thing that the model is doing and make sure it's correct in every way. And so I can only speak for myself and say it has definitely been useful for me. But I also see a world in which this could be very useful for the kinds of people who do not have this knowledge, with caveats, because I'm not one of these people; I don't have this direct experience. One of the big ways I can see this is for things that you can check fairly easily: someone who could never have written a program themselves to do a certain task could just ask for the program that does the thing. And, you know, some of the time it won't get it right. But some of the time it will, and they'll have the thing in front of them that they just couldn't have had before. And we see a lot of people trying to build applications for this, like integrating language models into spreadsheets. Spreadsheets run the world. And there are some people who know how to do all the complicated spreadsheet equations and various things, and other people who don't, who just use the spreadsheet program and manually do all of the things one by one by one by one. And this is a case where you could have a model try and give you a solution. And as long as the person is rigorous in testing that the solution actually does the correct thing, and this is the part that I'm worried about most: you know, I think we risk depending on these systems in ways that we shouldn't. This is what my research says; my research is entirely on this. You probably shouldn't trust these models to do things in adversarial situations. I understand this very deeply. And so I think that it's possible for people who don't have this knowledge to make use of these tools in some ways, but I'm worried that it might end up in a world where people just blindly trust them, deploy them in situations that they probably shouldn't, and then someone like me gets to come along and just break everything, because everything is terrible. And so I am very, very worried about that being the case, but I think if done carefully, it is possible that these could be very useful.
Swyx [00:23:54]: Yeah, there is some research out there that shows that when people use LLMs to generate code, they do generate less secure code.
Nicholas [00:24:02]: Yeah, Dan Boneh has a nice paper on this. There are a bunch of papers that touch on exactly this.
Swyx [00:24:07]: My slight issue is, you know, is there an agenda here?
Nicholas [00:24:10]: I mean, okay, yeah. Dan Boneh, at least the one they have, I fully trust everything that sort of...
Swyx [00:24:15]: Sorry, I don't know who Dan is.
Nicholas [00:24:17]: He's a professor at Stanford. Yeah, he and some students have some things on this. Yeah, there are a number.
I agree that a lot of the stuff feels like people have an agenda behind it. There are some that don't, and I trust them to have done the right thing. I also think, even on this though, we have to be careful, because whenever someone says X is true about language models, you should always append the suffix "for current models." I'll be the first to admit I was one of the people who was very much of the opinion that these language models are fun toys and are going to have absolutely no practical utility. If you had asked me this, let's say, in 2020, I would have said the same thing. After I had seen GPT-2, and I had written a couple of papers studying GPT-2 very carefully, I still would have told you these things are toys. And when I first read the RLHF paper and the instruction tuning paper, I was like, nope, this is this thing that these weird AI people are doing. They're trying to make some analogies to people that make no sense. I didn't even care to read it. I saw what it was about and just didn't even look at it. I was obviously wrong. These things can be useful. And I feel like a lot of people had the same mentality that I did and decided not to change their mind. And this is the thing that I want people to be careful about. I want them to at least know what is true about the world, so that they can see that maybe they should reconsider some of the opinions they had from four or five years ago that may just not be true about today's models.
Swyx [00:25:47]: Specifically because you brought up spreadsheets, I want to share my personal experience, because I think Google has done a really good job that people don't know about, which is: if you use Google Sheets, Gemini is integrated inside of Google Sheets, and it helps you write formulas. Great.
Nicholas [00:26:00]: That's news to me.
Swyx [00:26:01]: Right? Maybe they don't do a good job of telling people. Unless you watched Google I/O, there was no other opportunity to learn that Gemini is now in your Google Sheets. And so I just don't write formulas manually anymore. I just prompt Gemini to do it for me. And it does it.
Nicholas [00:26:15]: One of the problems that these machine learning models have is a discoverability problem. I think this will be figured out. I mean, it's the same problem that you have with any assistant: you're given a blank box and you're like, what do I do with it? I think this is great. It would be good for more of these things to exist. I want them to exist in ways that we can actually make sure are done correctly. I don't want them to just be pushed into more and more things blindly. I feel like there are far too many "X plus AI" products, where X is some arbitrary thing in the world that has nothing to do with it and could not benefit at all. And they're just doing it because they want to use the word. And I don't want that to happen.
Swyx [00:26:58]: You don't want an AI fridge?
Nicholas [00:27:00]: No. I do not want my fridge on the internet.
Swyx [00:27:03]: I do not want... Okay.
Nicholas [00:27:05]: Anyway, let's not go down that rabbit hole. I understand why some of that happens, because people want to sell things or whatever. But I feel like a lot of people see that and then they write off everything as a result of it. And I just want to say: there are allowed to be people who are trying to do things that don't make any sense. Just ignore them.
Do the things that make sense.
Alessio [00:27:22]: Another chunk of use cases was learning: explaining code, being an API reference, all of these different things. Any suggestions on how to go at it? One approach is to generate code and then have it explained to me. Another is to just ask about a technology. Another is: hey, I read this online, help me understand it. Any best practices on getting the most out of it?
Nicholas [00:27:47]: I don't know if I have best practices. I have how I use them. I find it very useful for cases where I understand the underlying ideas, but I have never used them in this way before. I know what I'm looking for, but I just don't know how to get there. And so, yeah, as an API reference is a great example. The tool everyone always picks on is FFmpeg. No one in the world knows the command line arguments to do what they want. Make the thing faster, I want a lower bitrate, is it dash V? Once you tell me what the answer is, I can check it. This is one of those cases where it's great. Or, in other cases, things where I don't really care that the answer is 100% correct. So, for example, I do a lot of security work. Most of security work is reading some code you've never seen before and finding out which pieces of the code are actually important. Because, you know, most of the program doesn't actually have anything to do with security. It has the display piece or some other piece or whatever, and you just want to ignore all of that. So one very fun use of models is to just have it describe all the functions, skim that, and ask: wait, which ones look like approximately the right things to look at? Because otherwise, what are you going to do? You're going to have to read them all manually. And when you're reading them manually, you're going to skim the functions anyway and not figure out what's going on perfectly. You already know that when you read these things, what you're going to try to do is figure out roughly what's going on, and then you'll delve into the details. This is a great way of doing just that, but faster, because it will abstract most of what...
Swyx [00:29:21]: Is right.
Nicholas [00:29:21]: It's going to be wrong some of the time. I don't care.
Swyx [00:29:23]: I would have been wrong too.
Nicholas [00:29:24]: And as long as you treat it that way, I think it's great. And so one of the particular use cases I have in the post is decompiling binaries, where oftentimes people will release a binary, they won't give you the source code, and you want to figure out how to attack it. One thing you could do is try to run some kind of decompiler. It turns out, for the thing that I wanted, none existed. And so I spent too many hours doing it by hand before I finally thought: why am I doing this? I should just check if the model could do it for me. And it turns out that it can. It can turn the compiled code, which is impossible for any human to understand, into Python code that is entirely reasonable to understand. And it doesn't run; it has a bunch of problems. But it's so much nicer that it's immediately a win for me. I can figure out approximately where I should be looking, and then spend all of my time doing that by hand.
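The FFmpeg incantation he means is exactly the kind of answer that's easy to check once a model hands it to you. A typical example (generic, not from the episode): re-encode at a lower video bitrate and confirm the result with ffprobe.

ffmpeg -i input.mp4 -c:v libx264 -b:v 1M -c:a copy output.mp4
ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1 output.mp4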
And again, you get a big win there.
Swyx [00:30:12]: So I fully agree with all those use cases, especially for you as a security researcher having to dive into multiple things; I imagine that's super helpful. I do think we want to move to your other blog post. But you ended your post with a little bit of a teaser about your next post and your speculations. What are you thinking about?
Nicholas [00:30:34]: So I want to write something, and I will do that at some point when I have time, maybe after I'm done writing my current papers for ICLR or something, where I want to talk about some thoughts I have for where language models are going in the near-term future. The reason why I want to talk about this is because, again, I feel like the discussion tends to come from people who are either very much "AGI by 2027," or "always five years away," or who make statements of the form "LLMs are the wrong path, and we should be abandoning this and doing something else instead." And again, I feel like people tend to look at these two polarizing options and go: well, those are obviously both very far extremes; what's a more nuanced take here? And so I have some opinions about this that I want to put down, just saying, you know, I have wide margins of error, and I think you should too. If you say there's a 0% chance that the models will get very, very good in the next five years, you're probably wrong. If you say there's a 100% chance that they will in the next five years, you're probably wrong too. And, to be fair, most people, if you read behind the headlines, actually say something like this. But it's very hard to get clicks on the internet with "some things may be good in the future." Everyone wants either "nothing is going to be good, this is entirely wrong" or "it's going to be amazing"; that's what they want to see. I want people who have negative reactions to these kinds of extreme views to at least be told: there is something real here. It may not solve all of our problems, but it's probably going to get better. I don't know by how much. And that's basically what I want to say. And then at some point, I'll talk about the safety and security implications of this, because the way in which security intersects with these things depends a lot on exactly how people use these tools. You know, if it turns out that these models get to be truly amazing and can solve tasks completely autonomously, that's a very different security world to be living in than if there's always a human in the loop, and the types of security questions I would want to ask would be very different. And so I think, in some very large part, understanding what the future will look like a couple of years ahead of time is helpful for figuring out which problems, as a security person, I want to solve now.
Alessio [00:32:50]: You mentioned getting clicks on the internet, but you don't even have an X account or anything. How do you get people to read your stuff? What's your distribution strategy? Because this post was popping up everywhere. And then people on Twitter were like, Nicholas Carlini wrote this; what's his handle? He doesn't have one. How did you find it? What's the story?
Nicholas [00:33:07]: So I have an RSS feed and an email list. And that's it.
I don't like most social media things. On principle, I feel like they have some harms. As a person, I have a problem when people say things that are wrong on the internet, and I would get nothing done if I had a Twitter account. I would spend all of my time correcting people and getting into fights. And so I feel like it is just useful for me for this not to be an option. I tend to just post things online. Yeah, it's a very good question. I don't know how people find it. I feel like for some things that I write, it resonates with other people, and then they put it on Twitter. And...
Swyx [00:33:43]: Hacker News as well.
Nicholas [00:33:44]: Sure, yeah. Because my day job is doing research, I get no value from having this be picked up. There's no whatever. I don't need to have this other thing in order to give talks. And so I feel like I can just say what I want to say, and if people find it useful, then they'll share it widely. You know, this one went pretty wide. I wrote a thing, whatever, sometime late last year, about how to recover data off an Apple ProFile drive from the 1980s. That probably got, I think, 1000x fewer views than this. But I don't care. That's not why I'm doing this. This is the benefit of having a thing that I actually care about, which is my research; I would care much more if that didn't get seen. This is a thing that I write because I have some thoughts that I just want to put down.
Swyx [00:34:32]: Yeah. I think it's the long-form thoughtfulness and authenticity, sadly lacking sometimes in modern discourse, that makes it attractive. And I think now you have a bit of a brand as an independent thinker, writer, person, that people are tuned in to pay attention to, whatever comes next.
Nicholas [00:34:52]: Yeah, I mean, this kind of worries me a little bit. I don't like it when I have a popular thing and then I write another thing which is entirely unrelated. Like, I don't, I don't... You should actually just throw people off right now.
Swyx [00:35:01]: Exactly.
Nicholas [00:35:02]: I'm trying to figure it out. I need to put something else online. So the last two or three things I've done in a row have been, like, actual things that people should care about.
Swyx [00:35:10]: Yes.
Nicholas [00:35:11]: So I have a couple of things, and I'm trying to figure out which one to put online, just to cull the list of people who have subscribed to my email, and to tell them: no, what you're here for is not informed, well-thought-through takes. What you're here for is whatever I want to talk about. And if you're not up for that, then, you know, go away. This is not what I want out of my personal website.
Swyx [00:35:27]: So, like, here's, like, a top 10 enemies list or something.
Alessio [00:35:30]: What's the next project you're going to work on that is completely unrelated to LLM research? Or what games do you want to port into the browser next?
Nicholas [00:35:39]: Okay, so maybe. Here's a fun question: how much data do you think you can put on a single piece of paper?
Swyx [00:35:47]: I mean, you can think about bits and atoms. Yeah.
Nicholas [00:35:49]: No, like, a normal printer. I gave you an office printer. How much data can you put on a piece of paper?
Alessio [00:35:54]: Can you re-decode it? So, like, you know, base64 or whatever.
Yeah, whatever you want.
Nicholas [00:35:59]: Like, you get a normal off-the-shelf printer, an off-the-shelf scanner. How much data?
Swyx [00:36:03]: I'll just throw it out there: 10 megabytes. That's enormous. I know.
Nicholas [00:36:07]: Yeah, that's a lot.
Swyx [00:36:10]: Really small fonts. That's my question.
Nicholas [00:36:12]: So, I have a thing. It does about a megabyte.
Swyx [00:36:14]: Yeah, okay.
Nicholas [00:36:14]: There you go. I was off by an order of magnitude.
Swyx [00:36:16]: Yeah, okay.
Nicholas [00:36:16]: So, in particular, it's about 1.44 megabytes. A floppy disk.
Swyx [00:36:21]: Yeah, exactly.
Nicholas [00:36:21]: So, this is supposed to be the title at some point: it's a floppy disk.
Swyx [00:36:24]: A paper is a floppy disk. Yeah.
Nicholas [00:36:25]: So, this is a little hard, because, you know, you can do the math: you get 8.5 by 11, and you can print at 300 by 300 DPI, and this gives you about 2 megabytes. And so for every single pixel, you need to be able to recover it at, like, 90-plus percent, 95 percent, 99-point-something percent accuracy, in order to actually decode this off the paper. This is one of the things that I'm considering. I need to get a couple more things working for this, where, you know, again, I'm running into some random problems. But this will probably be one thing that I'm going to talk about. There's this contest called the International Obfuscated C Code Contest, which is amazing. People try to write the most obfuscated C code that they can, which is great. And I have a submission for that whenever they open up the next one, and I'll write about that submission. I have a very fun gate-level emulation of an old CPU that runs fully precisely. It's a fun kind of thing. Yeah.
Swyx [00:37:20]: Interesting. Your comment about the piece of paper reminds me of when I was in college, and you would have one cheat sheet that you could write. So you'd have a formula, a theoretical limit for bits per inch, and, you know, that's how much I would squeeze in, really, really small.
Nicholas [00:37:36]: Yeah, definitely. Okay.
Swyx [00:37:37]: We are also going to talk about your benchmarking, because you released your own benchmark that got some attention, thanks to some friends on the internet. What's the story behind your own benchmark? Do you not trust the open source benchmarks? What's going on there?
Nicholas [00:37:51]: Okay. Benchmarks tell you how well the model solves the task the benchmark is designed to solve. For a long time, models were not useful, and so the benchmark that you tracked was just something someone came up with, because you need to track something. All of deep learning exists because people tried to make models classify digits and classify images into a thousand classes. There is no one in the world who cares specifically about the problem of distinguishing between 300 breeds of dog in an image that's 224 by 224 pixels. And yet this is what drove a lot of progress. And people did this not because they cared about this problem, but because they wanted to measure progress in some way. And a lot of benchmarks are of this flavor: you want to construct a task that is hard, and we will measure progress on this benchmark, not because we care about the problem per se, but because we know that progress on this is in some way correlated with making better models. And this is fine when you don't want to actually use the models that you have.
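The back-of-envelope math from that exchange, as a quick sketch (assuming one recoverable bit per printed dot, which scanning noise and error-correction overhead will eat into):

page_in = (8.5, 11)   # US Letter, inches
dpi = 300             # dots per inch, both axes
dots = page_in[0] * dpi * page_in[1] * dpi   # 8,415,000 printable dots
raw_bytes = dots / 8                         # ~1.05 MB at one bit per dot
print(f"{dots:,.0f} dots -> {raw_bytes / 1e6:.2f} MB raw upper bound")
# Hitting a floppy's 1.44 MB would need more than one bit per dot (e.g.
# gray levels) or a finer dot pitch, before spending anything on redundancy.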
But when you want to actually make use of them, it's important to find benchmarks that track whether or not they're useful to you. And the thing that I was finding is that there would be model after model after model being released that would find some benchmark it could claim state-of-the-art on, and then say: therefore, ours is the best. And that wouldn't help me know whether or not I should switch to it. So the argument that I tried to lay out in this post is that more people should make benchmarks that are tailored to them. And so what I did is I wrote a domain-specific language that anyone can write for, so you can take tasks that you have wanted models to solve for you and put them into your benchmark, the thing that you care about. And then when a new model comes out, you benchmark the model on the things that you care about. And you know that you care about them, because you've actually asked for those answers before. And if the model scores well, then you know that for the kinds of things you have asked models for in the past, it can solve them well for you. This has been useful for me, because when another model comes out, I can run it and see: does this solve the kinds of things that I care about? Sometimes the answer is yes, and sometimes the answer is no, and then I can decide whether or not I want to use that model. I don't want to say that existing benchmarks are not useful. They're very good at measuring the thing that they're designed to measure. But in many cases, what they're designed to measure is not actually the thing that I want to use the model for. And I expect that the way I want to use it is different from the way you want to use it. And I would just like more people to have these things out there in the world. And the final reason for this is: it is very easy, if you want to make a model good at some benchmark, to make it good at that benchmark. You can find the distribution of data that you need and train the model on that distribution of data, and then you have your model that can solve this benchmark well. And by having a benchmark that is not very popular, you can be relatively certain that no one has tried to optimize their model for your benchmark.
Swyx [00:40:40]: And I would like this to be...
Nicholas [00:40:40]: So publishing your benchmark is a little bit...
Swyx [00:40:43]: Okay, sure.
Nicholas [00:40:43]: Contextualized. So my hope in doing this was not that people would use mine as theirs. My hope in doing this was that you should make yours. Yes, you should make your own benchmark. And if, for example, even a very small fraction of people, 0.1% of people, made a benchmark that was useful for them, this would still be hundreds of new benchmarks. I might not want to make one myself, but I might know that the kind of work I do is a little bit like this person's and a little bit like that person's, and I'll go check how the model does on their benchmarks and get a rough sense of what's going on. Because the alternative is people just do this vibes-based evaluation thing, where you interact with the model five times and you see if it worked on your toy questions. But five questions is a very low-bit signal of whether or not it works for your thing. And if you could just automate running a hundred questions for you, it's a much better evaluation.
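The shape of such a personal benchmark is roughly the sketch below. This is a toy illustration with hypothetical names, not his actual DSL: each task is a prompt you've really asked before plus a programmatic check of the answer.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    check: Callable[[str], bool]  # True if the model's answer passes

def run_benchmark(tasks: list[Task], ask_model: Callable[[str], str]) -> float:
    """Score a model on your own past questions; returns the pass rate."""
    passed = sum(task.check(ask_model(task.prompt)) for task in tasks)
    return passed / len(tasks)

# Tasks come from things you have actually wanted models to do.
tasks = [
    Task("Write a Python one-liner that reverses a string s.",
         check=lambda ans: "[::-1]" in ans),
]
# Usage: run_benchmark(tasks, my_api_client) whenever a new model ships.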
So that's why I did this.
Swyx [00:41:37]: Yeah, I like the idea of going through your chat history and actually pulling out real-life examples. I regret to say that I don't think my chat history is used as much these days, because I'm using Cursor, the AI-native IDE. So your examples are all coding-related. And the immediate question is, now that you've written the How I Use AI post, which is a little bit broader: are you able to translate all these things to evals? Are some things unevaluable?
Nicholas [00:42:03]: Right. A number of things that I do are harder to evaluate. This is the problem with a benchmark: you need some way to check whether or not the output was correct. And so all of the things I can put into the benchmark are things that you can check. You can check more things than you might have thought possible if you do a little bit of work on the back end. So, for example, for all of the code that I have the model write, it runs the code and sees whether the answer is the correct answer. Or, in some cases, it runs the code, feeds the output to another language model, and the language model judges whether the output was correct. And again, is using a language model to judge here perfect? No. But what's the alternative? The alternative is to not do it. And what I care about is just: is this thing broadly useful for the kinds of questions that I have? So as long as the accuracy is better than roughly random, I'm okay with this. I've inspected the outputs of these, and they're almost always correct. If you ask the model to judge these things in the right way, they're very good at telling. And so, yeah, I think this is probably a useful thing for people to do.
Alessio [00:43:04]: You complain about prompting and being lazy, and how you do not want to tip your model and you do not want to murder a kitten just to get the right answer. How do you see the evolution of prompt engineering? Even 18 months ago, maybe, it was really hot and people wanted to build companies around it. Today, the models are getting good. Do you think it's going to be less and less relevant going forward? Or what's the minimum viable prompt? Yeah, I don't know.
Nicholas [00:43:29]: I feel like a big part of making an agent is just a fancy prompt that, you know, calls back to the model again. I have no opinion. Maybe it turns out that this is really important; maybe it turns out that it isn't. I guess the only comment I was making here is just to say: oftentimes when I use a model and I find it's not useful, I talk to the people who help make it, and the answer they usually give me is, "you're using it wrong." Which reminds me very much of the "you're holding it wrong" thing from the iPhone, right? Like, I don't care that I'm holding it wrong. I'm holding it that way. If the thing is not working with me, then it's not useful for me. It may be the case that there exists a way to ask the model such that it gives me the correct answer, but that's not the way I'm doing it. If I have to spend so much time thinking about how to frame the question that it would have been faster for me to just get the answer myself, it didn't save me any time. And so oftentimes, what I do is just dump in whatever current thought I have, in whatever ill-formed way it is, and I expect the answer to be correct.
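A minimal sketch of that check-the-output step (ask_model is a hypothetical stand-in for whatever API client you use; his actual harness differs):

import subprocess, sys

def run_generated(code: str, expected: str, ask_model=None) -> bool:
    """Run model-written code; exact-match first, model-as-judge as fallback."""
    out = subprocess.run([sys.executable, "-c", code],
                         capture_output=True, text=True, timeout=30).stdout.strip()
    if out == expected.strip():   # cheap exact check
        return True
    if ask_model is None:
        return False
    verdict = ask_model(f"Does this output answer the task correctly?\n"
                        f"Expected: {expected}\nGot: {out}\nAnswer yes or no.")
    return verdict.strip().lower().startswith("yes")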
And if the answer is not correct, then in some sense, maybe the model was right to give me the wrong answer. I may have asked the wrong question, but I want the right answer anyway. And so I just want this to be a given. And maybe the way to fix this is that you have some default prompt that always goes into all the models, or you do something clever like that. It would be great if someone had a way to package this up and make it a thing; I think that's entirely reasonable. Maybe it turns out that as models get better, you don't need to prompt them as much in this way. I just want to use the things that are in front of me.
Alessio [00:44:55]: Do you think that's a limitation of just how models work? You know, at the end of the day, you're using the prompt to steer it in the latent space. Do you think there's a way to make the prompt not really matter and have the model figure it out? Or what's the...
Nicholas [00:45:10]: I mean, you could fine-tune it into the model, for example. It seems like some models have done this. Some recent models, many recent models: if you ask them a question, like computing an integral of some thing, they'll say, "let's think through this step by step," and then they'll go through the step-by-step answer. I didn't tell it to. Two years ago, I would have had to prompt it: "think step by step on solving the following thing." Now you ask the question, and the model says, here's how I'm going to do it, I'm going to take the following approach, and it sort of self-prompts itself.
Swyx [00:45:34]: Is this the right way?
Nicholas [00:45:35]: Seems reasonable. Maybe you don't have to do it. I don't know. This is for the people whose job is to make these things better. And yeah, I just want to use these things. Yeah.
Swyx [00:45:43]: For listeners, that would be Orca and AgentInstruct; it's the SOTA on this stuff. Great. Yeah.
Alessio [00:45:49]: That's few-shot. It's included in the lazy prompting. Do you do few-shot prompting? Do you collect some examples when you want to put them in? Or...
Nicholas [00:45:57]: I don't, because usually when I want the answer, I just want to get the answer.
Swyx [00:46:03]: Brutal. This is hard mode. Yeah, exactly.
Nicholas [00:46:04]: But this is fine. I want to be clear: there's a difference between testing the ultimate capability level of the model and testing the thing that I'm doing with it. What I'm doing is not exercising its full capability level, because there are almost certainly better ways to ask the questions and really see how good the model is. And if you're evaluating a model for being state of the art, that is ultimately what matters. So I'm entirely fine with people doing fancy prompting to show me what the true capability level could be, because it's really useful to know what the ultimate level of the model could be. But I think it's also important to have available to you how good the model is if you don't do fancy things.
Swyx [00:46:39]: Yeah, I would say that here's a divergence between how models are marketed these days and how people use them: when they test MMLU, they'll do five-shot, 25-shot, 50-shot. And no one's providing 50 examples.
Nicholas [00:46:54]: I completely agree. You know, for these numbers, the problem is everyone wants to get state of the art on the benchmark.
And so you find the way that you can ask the model the questions so that you get state of the art on the benchmark. And it's good. It's legitimately good to know that the model can do this thing if only you try hard enough, because it means that if I have some task I want solved, I know what the capability level is, and I could get there if I was willing to work hard enough. And the question then is: should I work harder and figure out how to ask the model the question, or do I just do the thing myself? For me, I have programmed for many, many years; it's often just faster for me to do the thing than to figure out the incantation to ask the model. But I can imagine someone who has never programmed before might be fine writing five paragraphs in English describing exactly the thing that they want and having the model build it for them, if the alternative is not getting it at all. But again, this goes to all these questions of how they are going to validate it, whether they should be trusting the output, these kinds of things.
Swyx [00:47:49]: One problem with your eval paradigm, and most eval paradigms, and I'm not picking on you, is that we're actually training these things for chat, for interactive back-and-forth. And you obviously reveal much more information that way, in the same way that asking 20 questions reveals more information, in sort of a tree-search, branching sort of way. This is also, by the way, the problem with the LMSYS arena, right? The vast majority of prompts there are single question, single answer, eval, done. But the way that we actually use chat, even in the stuff that you posted in your How I Use AI post, you have maybe 20 turns of back-and-forth. How do you eval that?
Nicholas [00:48:25]: Yeah. Okay. Very good question. This is the thing that I think many people should be doing more of. I would like more multi-turn evals. I might be writing a paper on this at some point, if I get around to it. A couple of the evals in the benchmark thing I have are already multi-turn. You mentioned 20 questions; I have a 20 questions eval there, just for fun. But I have a couple of others that are like: I just tell the model, here's my git thing, figure out how to cherry-pick off this other branch and move it over there. And so what I do is I basically build a tiny little agent-y thing. I just ask the model how to do it. I run the thing on Linux. This is what I want Docker for: I spin up a Docker container, I run whatever command the model told me to run, I feed the output back into the model, and I repeat this for many rounds. And then I check at the very end: does the git commit history show that it is correctly cherry picked in
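A rough sketch of that multi-turn loop (hypothetical helpers; assumes the docker CLI, an ask_model client, and an image with git and the repo baked in; not his actual harness):

import subprocess

def docker(cmd: list[str]) -> str:
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def multi_turn_eval(ask_model, task: str, rounds: int = 10) -> bool:
    # "repo-image" is a placeholder image containing git and the test repo.
    cid = docker(["docker", "run", "-d", "repo-image", "sleep", "infinity"]).strip()
    transcript = task
    try:
        for _ in range(rounds):
            cmd = ask_model(transcript + "\nReply with the next shell command only.")
            out = docker(["docker", "exec", cid, "sh", "-c", cmd])
            transcript += f"\n$ {cmd}\n{out}"
        # Final check: did the expected commit land on the branch?
        log = docker(["docker", "exec", cid, "git", "-C", "/repo", "log", "--oneline"])
        return "expected commit subject" in log   # placeholder check
    finally:
        subprocess.run(["docker", "rm", "-f", cid], capture_output=True)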
This week the guys are chatting about Snap improvements, the new Ryzen 9 9000 chips, and Debian 11 hitting LTS. Then they chat about Tails, Proton VPN, and ClamAV 1.4 all for security. Then Ubuntu prepares for 24.10 with some Easter eggs, and HandBrake fixes some irritating problems. For tips we have Cosmic community projects, Reflector for Arch Mirrors, wl-clipboard, and a one-liner to apply patches from a URL. You can find the show notes at https://bit.ly/3M836zB and see you next week! Host: Jonathan Bennett Co-Hosts: Ken McDonald, Jeff Massie, and Rob Campbell Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Hello everyone! I'm Charly Alonso from Tecnolita. As always, I want to remind you that you can get a monthly subscription to the Tecnolita Artificial Intelligence Academy, a continuing-education academy in artificial intelligence, for just 10 euros a month. Episode Topic: In this episode, I continue sharing experiences about the purchase of the company, or rather, the two companies. Last week we talked about the purchase, and today we'll dig into how we managed the separation of the companies to optimize operations and the tax burden. Managing the Companies: We decided to split the company in two: Techex Ibérica, focused on distributing technology products, and Techex Comunicaciones y Sistemas, which offered end-to-end solutions. At Techex Ibérica we represented manufacturers like Autodesk, offering products such as 3D Studio Max and other animation and architectural visualization solutions. We also worked with digital video encoders for professional streaming, which were essential for television broadcast. Distribution Channel and Relationships: Our distribution channel included companies that initially sold video equipment from the likes of Sony, Panasonic, and JVC, but over time it diversified into the audiovisual world in general. The work at Techex Ibérica was exciting; product demos of 3D Studio Max and Edit were a constant challenge, especially in presentations to important clients, like the ones we did in Galicia. Professional Anecdotes: I especially remember a demo in Galicia where, after a night of drinks with the clients, I had to give a presentation at 9:30 in the morning without having slept at all. Despite everything, the demo went well, thanks to accumulated experience. Consultative Selling at Techex Comunicaciones y Sistemas: At Techex Comunicaciones y Sistemas, we focused on offering technology solutions such as media observatories, which allowed companies and governments to monitor what was being said about them in the media. This required deep technical knowledge and the ability to integrate different technologies to deliver a complete solution to our clients. Developing Custom Solutions: We used technologies like FFmpeg and open source tools to build solutions that automated the capture, storage, and management of multimedia content. These solutions were innovative for the time and showed our ability to adapt to client needs. Consulting and Client Perception: In Spain, the consulting culture in the audiovisual sector was not very developed. Consulting was often bundled into the sales process. However, I have always believed in the importance of offering independent consulting that analyzes the client's needs and recommends the best solution, even if that means recommending a competitor's product. Perception vs. Perspective: One of the keys in our work was understanding the difference between the client's perception and the client's perspective. It is essential to put yourself in the client's shoes and understand their perspective in order to offer a solution that is not only technically viable, but also the best fit for their specific needs. Sign up for the academy. Telegram channel and YouTube channel. Ask via WhatsApp +34 620 240 234. Leave me a voice message.
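The kind of FFmpeg capture automation described here, unattended recording of a broadcast stream into hourly files, is typically built on the segment muxer. A generic sketch (the stream URL and filenames are placeholders, not from the episode):

ffmpeg -i https://example.com/live/stream.m3u8 -c copy -f segment -segment_time 3600 -strftime 1 capture_%Y%m%d_%H%M.ts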
Josh and Kurt talk about a story on the "graying" of open source. There don't seem to be many young people working on open source, but we don't really know why that is. There are many theories, but a better question is: why should anyone get involved in open source anymore? The world has changed quite a lot since open source was created. Show Notes The graying open source community needs fresh blood OSPOs for Good 2024 Day 1 Part 1 Day 1 Part 2 Day 2 Part 1 Day 2 Part 2 FFmpeg bug JSON Editor Online https://rfc3339.com/
This week it's all about the GPUs, with KDE 6.1, Nvidia's 555 drivers, and Mesa 24.1 all coming out, bringing support for explicit sync, the NVK driver for Nvidia hardware ready for prime time, and more. Then there's HandBrake with FFmpeg 7.0 support, Ventoy bringing an update, and a new PipeWire RC with support for Snapcast. Then there's Hans Reiser's last request for ReiserFS making it into kernel 6.10. For tips, we have the Bluefish editor, the last of Spring Cleaning, and how to use Docker to very quickly spin up one-off containers. See the show notes at https://bit.ly/3yxtdwn and come back next time! Host: Jonathan Bennett Co-Hosts: Ken McDonald and Jeff Massie Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
We're talking about Google's layoffs and how they affect Flutter and Dart, then AMD working to push AMF code into FFmpeg, and why it's time for open source to grow up. RHEL has an AI offering, Nvidia is suggesting open source kernel drivers, and Zed is coming to Linux. Then there's Pi Connect pulling a Sherlock, KDE working on color management, and a bit of a history lesson on where we came from. For tips, we have the Radion TUI radio player, || : to ignore errors in a script, the Mixxx DJ app for all those underground raves, and PanWriter for markup editing. You can catch the show notes at https://bit.ly/3UTCJ5C, and we'll see you next time! Host: Jonathan Bennett Co-Hosts: Rob Campbell, David Ruggles, and Jeff Massie Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
This is my conversation with Doug Petkanics and Eric Tang, co-founders of Livepeer.
Timestamps:
- 00:00:00 intro
- 00:01:45 sponsor: Optimism
- 00:03:55 Livepeer origin story
- 00:11:54 FFmpeg and the video infrastructure stack
- 00:17:07 compute capacity and cost in open vs closed systems
- 00:22:59 GPUs as the supply side, working at NVIDIA
- 00:40:27 finding latent demand
- 00:46:10 sponsor: Privy
- 00:47:30 learnings on go-to-market, Livepeer Studio, AI video processing
- 01:00:54 AI subnets in the Livepeer network
- 01:07:51 doing whatever it takes to get it done
- 01:13:19 interacting with the market
- 01:18:51 the inner game
- 01:24:30 outro
Links:
Doug Petkanics - https://twitter.com/petkanics
Eric Tang - https://twitter.com/ericxtang
Livepeer - https://twitter.com/livepeer
Livepeer Studio - https://twitter.com/livepeerstudio
Thank you to our sponsors for making this podcast possible:
Optimism - https://optimism.io
Privy - https://privy.io
Into the Bytecode:
Twitter - https://twitter.com/sinahab
Farcaster - https://warpcast.com/sinahab
Other episodes - https://intothebytecode.com
Disclaimer: this podcast is for informational purposes only. It is not financial advice or a recommendation to buy or sell securities. The host and guests may hold positions in the projects discussed.
https://youtu.be/w-F1Exi4o0U Forum Discussion Thread (https://forum.tuxdigital.com/t/259-ubuntu-24-04-beta-gentoo-becomes-spi-ware-solution-to-nvidia-amp-wayland-amp-more-linux-news/6208) This week's news is exciting with cool new stuff, and pretty bonkers, because we narrowly avoided a security nightmare! A backdoor was discovered hidden in a common Linux utility, and it could have infected millions of devices. We'll break down how this almost happened, and what it means for you. Then, we'll switch gears and talk about some exciting upcoming features in Linux Mint 22. Fedora Linux might be getting a whole new look: we'll discuss a proposal to switch the default desktop environment. Flathub is making some changes to make it easier to identify whether or not a Flatpak is official. Plus there is a new campaign for video game preservation that targets companies effectively breaking their games after an arbitrary amount of time. All of this and more on this episode of This Week in Linux, Your Source for Linux GNews! Download as MP3 (https://aphid.fireside.fm/d/1437767933/2389be04-5c79-485e-b1ca-3a5b2cebb006/8cf32e35-b4fa-4525-b86f-e53dec277c4d.mp3) Sponsored by: LINBIT - thisweekinlinux.com/linbit (https://thisweekinlinux.com/linbit) Want to Support the Show? Become a Patron = https://tuxdigital.com/membership (https://tuxdigital.com/membership) Store = https://tuxdigital.com/store (https://tuxdigital.com/store) Chapters: 00:00 Intro 00:29 Ubuntu 24.04 LTS Beta Released 05:40 Gentoo Linux has become SPI-ware 09:18 Explicit Sync Will Finally Solve the NVIDIA/Wayland Issues 11:56 Sponsored by LINBIT 13:21 Riot Games and their lame anti-cheat 17:25 Lutris 0.5.17 Released 19:12 Kodi 21.0 "Omega" Released 21:25 FFmpeg 7.0 Released 23:27 Outro Links: Ubuntu 24.04 LTS Beta Released https://discourse.ubuntu.com/t/ubuntu-24-04-beta-testing/44040 (https://discourse.ubuntu.com/t/ubuntu-24-04-beta-testing/44040) https://9to5linux.com/ubuntu-24-04-lts-beta-is-now-available... (https://9to5linux.com/ubuntu-24-04-lts-beta-is-now-available-for-download-with-gnome-46-linux-6-8) https://www.omgubuntu.co.uk/2024/04/ubuntu-24-04-beta-released (https://www.omgubuntu.co.uk/2024/04/ubuntu-24-04-beta-released) https://www.phoronix.com/news/Ubuntu-24.04-Beta-Test (https://www.phoronix.com/news/Ubuntu-24.04-Beta-Test) Gentoo Linux has become SPI-ware https://www.gentoo.org/news/2024/04/10/SPI-associated-... (https://www.gentoo.org/news/2024/04/10/SPI-associated-project.html) https://www.spi-inc.org/ (https://www.spi-inc.org/) https://www.phoronix.com/news/Gentoo-Linux-SPI-Project (https://www.phoronix.com/news/Gentoo-Linux-SPI-Project) https://lwn.net/Articles/969373/ (https://lwn.net/Articles/969373/) Explicit Sync Will Finally Solve the NVIDIA/Wayland Issues https://zamundaaa.github.io/wayland/2024/04/05/explicit-sync... (https://zamundaaa.github.io/wayland/2024/04/05/explicit-sync.html) https://www.gamingonlinux.com/2024/04/kdes-xaver-hugl-on-why... (https://www.gamingonlinux.com/2024/04/kdes-xaver-hugl-on-why-wayland-explicit-sync-is-such-a-big-deal/) https://9to5linux.com/developer-explains-why-explicit-sync...
(https://9to5linux.com/developer-explains-why-explicit-sync-will-finally-solve-the-nvidia-wayland-issues) https://www.phoronix.com/news/KDE-KWin-Lands-Explicit-Sync (https://www.phoronix.com/news/KDE-KWin-Lands-Explicit-Sync) https://www.phoronix.com/news/XWayland-24.1-Release-Plan (https://www.phoronix.com/news/XWayland-24.1-Release-Plan) Riot Games and their lame anti-cheat https://www.leagueoflegends.com/en-us/news/dev/dev-vanguard... (https://www.leagueoflegends.com/en-us/news/dev/dev-vanguard-x-lol/) https://www.gamingonlinux.com/2024/04/riot-games-talk-vanguard... (https://www.gamingonlinux.com/2024/04/riot-games-talk-vanguard-anti-cheat-for-league-of-legends-and-why-its-a-no-for-linux/) https://lutris.net/games/league-of-legends/ (https://lutris.net/games/league-of-legends/) Lutris 0.5.17 Released https://lutris.net/ (https://lutris.net/) https://github.com/lutris/lutris/releases/tag/v0.5.17 (https://github.com/lutris/lutris/releases/tag/v0.5.17) https://www.phoronix.com/news/Lutris-0.5.17-Released (https://www.phoronix.com/news/Lutris-0.5.17-Released) https://www.gamingonlinux.com/2024/04/lutris-v0517-brings-... (https://www.gamingonlinux.com/2024/04/lutris-v0517-brings-critical-bug-fixes-new-way-to-run-games-with-proton/) Kodi 21.0 "Omega" Released https://kodi.tv/article/kodi-21-0-omega-release/ (https://kodi.tv/article/kodi-21-0-omega-release/) https://9to5linux.com/kodi-21-0-omega-open-source-media-center... (https://9to5linux.com/kodi-21-0-omega-open-source-media-center-is-here-with-major-changes) FFmpeg 7.0 Released https://ffmpeg.org/ (https://ffmpeg.org/) https://ffmpeg.org//index.html#pr7.0 (https://ffmpeg.org//index.html#pr7.0) https://lwn.net/Articles/968565/ (https://lwn.net/Articles/968565/) https://www.phoronix.com/news/FFmpeg-7.0-Released (https://www.phoronix.com/news/FFmpeg-7.0-Released) https://9to5linux.com/ffmpeg-7-0-dijkstra-released-with-... (https://9to5linux.com/ffmpeg-7-0-dijkstra-released-with-important-aarch64-optimizations-for-hevc)
This week Noah and Steve give you the latest on open source mobile operating systems. Windows is pushing people to Linux by way of charging them $244 per year for updates in the third year. -- During The Show -- 01:20 Hardware Protectli (https://protectli.com/) $170 Chinesium Device (https://www.newegg.com/p/22Z-007C-009Y8?Item=9SIAK3UJNH8968) Lenovo Thunderbolt Dock (https://www.amazon.com/ZoomSpeed-Universal-Thunder-40B00300US-DisplayPort/dp/B0B1T7PPGZ) CalDigit Thunderbolt Dock (https://www.caldigit.com/thunderbolt-station-4/) 09:19 Graphene OS - Craig GrapheneOS Moving away from phones JMP.Chat (https://jmp.chat/) Conversations (https://f-droid.org/en/packages/eu.siacs.conversations/) Gajim (https://gajim.org/) Linphone (https://linphone.org/) JMP.Chat phone service LineageOS (https://lineageos.org/) PostmarketOS (https://postmarketos.org/) SailfishOS (https://sailfishos.org/) Titan M chip Had pretty good luck on ebay NitroKey/NitroPhone (https://shop.nitrokey.com/shop?&search=nitrophone) 25:55 NixOS Thoughts - Jeremy W Where NixOS is useful "Productised" NixOS Set people up for success 32:58 Self Hosted Email - Jeremy H Write in and tell me about your self hosted email experiences 34:54 News Wire German State Moving to Linux - ARSTechnica (https://arstechnica.com/information-technology/2024/04/german-state-gov-ditching-windows-for-linux-30k-workers-migrating/) Kodi 21.0 - Kodi (https://kodi.tv/article/kodi-21-0-omega-release/) Nitrux - nxos.org (https://nxos.org/changelog/release-announcement-nitrux-3-4-0/) Ubuntu 24.04 Delayed - Toms Hardware (https://www.tomshardware.com/software/linux/ubuntu-2404-beta-delayed-due-to-malicious-code-in-xz-utils-other-linux-distros-are-also-affected) EndeavourOS ARM Discontinued - EndeavourOS (https://endeavouros.com/news/goodbye-endeavouros-arm/) Linux 6.7 EOL - Server Host (https://serverhost.com/blog/end-of-life-for-linux-kernel-6-7-urgent-call-for-users-to-upgrade-to-6-8/) QT Creator 13 - QT (https://www.qt.io/blog/qt-creator-13-released) FFmpeg 7.0 - FFmpeg (https://ffmpeg.org//index.html#pr7.0) Dtrace 2.0 - Phoronix (https://www.phoronix.com/news/D-Trace-2.0.0-1.14) AURORA-M - Mark Tech Post (https://www.marktechpost.com/2024/04/07/aurora-m-a-15b-parameter-multilingual-open-source-ai-model-trained-in-english-finnish-hindi-japanese-vietnamese-and-code/) Gretel AI Text-to-SQL - Mark Tech Post (https://www.marktechpost.com/2024/04/04/gretel-ai-releases-largest-open-source-text-to-sql-dataset-to-accelerate-artificial-intelligence-ai-model-training/) Viking Model Family - Mark Tech Post (https://www.marktechpost.com/2024/04/07/silo-ai-releases-new-viking-model-family-pre-release-an-open-source-llm-for-all-nordic-languages-english-and-programming-languages/) Framework Hiring - Phoronix (https://www.phoronix.com/news/Framework-OSS-Firmware-Hiring) 36:26 Bell Canada is deleting DVR/PVR recordings Steve hates Bell Canada Marriage photographer story HDCP (https://en.wikipedia.org/wiki/High-bandwidth_Digital_Content_Protection) HDCP Stripper (https://www.amazon.com/THWT-HDMI-EDID-Emulator-Model/dp/B0CRRWQ7XS) OREI HDMI splitter (https://www.amazon.com/THWT-HDMI-EDID-Emulator-Model/dp/B0CRRWQ7XS) 43:03 Windows Upgrades/Updates Windows 10 EOL October 2025 Extended Security Updates (ESUs) Microsoft will charge for updates ARS Technica (https://arstechnica.com/gadgets/2024/04/post-2025-windows-10-updates-for-businesses-start-at-61-per-pc-go-up-from-there/) 46:06 Germany Switching to Linux Linux solves "Windows high hardware requirements" Schleswig-Holstein 
developing open source directory service LiMux from Munich Steve's take Lxer.com (lxer.com/module/newswire/ext_link.php?rid=339628) ARS Technica (https://arstechnica.com/information-technology/2024/04/german-state-gov-ditching-windows-for-linux-30k-workers-migrating/) 51:10 American Privacy Rights Act Fusion Centers (lxer.com/module/newswire/ext_link.php?rid=339628) IQT (https://www.iqt.org/) Tax dollars funding data collection companies The Register (https://www.theregister.com/2024/04/09/us_federal_privacy_law_apra/) -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/384) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
This week the crew starts by looking at a KDE throw-back distro, then follows that up with a bunch of April Fools news, and a few April first stories that check out. FFMPEG pushes out version 7, LXC mints 6.0 LTS, and EEVDF is about feature complete. Then the XZ SSH backdoor gets an update, and that conversation turns a bit philosophical regarding how nice Open Source should really be. For tips we have the awesome-selfhosted list, vim, xz --version and zstd, and then some xfs tools for resizing a partition. See the show notes at https://bit.ly/4aqmu5a and we hope to see you next time! Host: Jonathan Bennett Co-Hosts: Rob Campbell, David Ruggles, and Ken McDonald Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
Microsoft wins the foot-in-mouth award this week, and Google gets the Rust religion - but Mike is skeptical.
FFmpeg Series Part 2: Joining and Splitting files Joining Files Link: https://trac.ffmpeg.org/wiki/Concatenate For losslessly joining files that share the same codecs: ffmpeg -i "concat:input1.ts|input2.ts|input3.ts" -c copy output.ts For mp4 files, using intermediate files: ffmpeg -i input1.mp4 -c copy intermediate1.ts ffmpeg -i input2.mp4 -c copy intermediate2.ts ffmpeg -i "concat:intermediate1.ts|intermediate2.ts" -c copy output.ts Brief explanation of the options used in the command: -i: input file -c copy: use the same codecs as the input, copying the streams without re-encoding Splitting Files Splitting by Duration: ffmpeg -i input.mp4 -t 00:05:00 output_part1.mp4 ffmpeg -i input.mp4 -ss 00:05:00 -t 00:05:00 output_part2.mp4 Brief explanation of the options used in the command: -i: input file -t: duration of the output file -ss: start position in the input (optional) Splitting by Chapters or Markers: Use additional scripting to get the chapter marker times (see the ffprobe example below), then use the duration commands above. Splitting by Size: FFmpeg has no direct split-by-size option; the segment muxer splits by time, so convert your target size into a duration using the file's overall bitrate (for example, 50 MB at roughly 1 MB/s is about 50 seconds per segment): ffmpeg -i input.mp4 -map 0 -c copy -f segment -segment_time 300 output_%03d.mp4 Brief explanation of the options used in the command: -map 0: select all streams from the first input (video and audio) -c copy: copy the streams without re-encoding (faster) -f segment: use the segment muxer -segment_time: target duration of each segment (when stream-copying it can only cut on keyframes, so sizes are approximate) output_%03d.mp4: printf-style pattern for the output files
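One more joining method from the same wiki page is worth a mention: the concat demuxer, which works for any container format as long as every input has the same streams and codecs. A minimal sketch (the mylist.txt filename is just for illustration): printf "file 'input1.mp4'\nfile 'input2.mp4'\n" > mylist.txt ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4 Brief explanation of the options used in the command: -f concat: use the concat demuxer -safe 0: allow arbitrary paths in the list file -c copy: copy the streams without re-encoding For the chapter-based splitting above, ffprobe (which ships with FFmpeg) can dump the chapter start/end times that the scripting step needs: ffprobe -v error -print_format json -show_chapters input.mp4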
Process is important. This show is dedicated to examples of non-developer tasks that can be improved by coding scripts. Join Scott and Wes for a deep dive into automation magic. Show Notes 00:00 Welcome to Syntax! 02:11 Brought to you by Sentry.io. 03:02 FFmpeg, a tool for video producers. FFmpeg FFprobe 06:35 Markdown validation. Syntax Markdown Validation 09:21 AI timestamps to inform the editing process. Episode 456 Transcript 12:19 Generating clips for social media. 13:31 YouTube find and replace tool. YouTube Find & Replace - work in progress 15:03 What other scripts can you create? 16:17 Packaging a tool for a non-developer to use. 16:54 AppleScript 17:45 Stand-alone website. 19:25 Script Kit: Shortcut to Everything 20:19 Other ways to run scripts. ZX Dax 22:05 Get in touch with your tips. Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
We're writing this one day after the monster release of OpenAI's Sora and Google's Gemini 1.5. We covered this on ThursdAI, so head over there for our takes. IRL: We're ONE WEEK away from Latent Space: Final Frontiers, the second edition and anniversary of our first ever Latent Space event! Also: join us on June 25-27 for the biggest AI Engineer conference of the year! Online: All three Discord clubs are thriving. Join us every Wednesday/Friday! Almost 12 years ago, while working at Spotify, Erik Bernhardsson built one of the first open source vector databases, Annoy, based on ANN search. He also built Luigi, one of the predecessors to Airflow, which helps data teams orchestrate and execute data-intensive and long-running jobs. Surprisingly, he didn't start yet another vector database company, but instead in 2021 founded Modal, the “high-performance cloud for developers”. In 2022 they opened doors to developers after their seed round, and in 2023 announced their GA with a $16m Series A. More importantly, they have won fans among both household names like Ramp, Scale AI, Substack, and Cohere, and newer startups like (upcoming guest!) Suno.ai and individual hackers (Modal was the top tool of choice in the Vercel AI Accelerator): We've covered the nuances of GPU workloads, and how we need new developer tooling and runtimes for them (see our episodes with Chris Lattner of Modular and George Hotz of the tiny corp to start). In this episode, we run through the major limitations of the actual infrastructure behind the clouds that run these models, and how Erik envisions the “postmodern data stack”. In his 2021 blog post “Software infrastructure 2.0: a wishlist”, Erik had “Truly serverless” as one of his points:* The word cluster is an anachronism to an end-user in the cloud! I'm already running things in the cloud where there's elastic resources available at any time. Why do I have to think about the underlying pool of resources? Just maintain it for me.* I don't ever want to provision anything in advance of load.* I don't want to pay for idle resources. Just let me pay for whatever resources I'm actually using.* Serverless doesn't mean it's a burstable VM that saves its instance state to disk during periods of idle. Swyx called this Self-Provisioning Runtimes back in the day. Modal doesn't put you in YAML hell, preferring to colocate infra provisioning right next to the code that utilizes it, so you can just add GPU (and disk, and retries…): After 3 years, we finally have a big market push for this: running inference on generative models is going to be the killer app for serverless, for a few reasons:* AI models are stateless: even in conversational interfaces, each message generation is a fully-contained request to the LLM. There's no knowledge that is stored in the model itself between messages, which means that tear down / spin up of resources doesn't create any headaches with maintaining state.* Token-based pricing is better aligned with serverless infrastructure than fixed monthly costs of traditional software.* GPU scarcity makes it really expensive to have reserved instances that are available to you 24/7. It's much more convenient to build with a serverless-like infrastructure. In the episode we covered a lot more topics like maximizing GPU utilization, why Oracle Cloud rocks, and how Erik has never owned a TV in his life.
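To make the “just add GPU (and disk, and retries…)” point concrete, here's a minimal sketch in the spirit of Modal's Python SDK. The names shown (modal.App, the gpu=, retries= and timeout= decorator parameters, .map() fan-out) follow Modal's public docs as we understand them, but treat the exact API as illustrative rather than authoritative:

import modal

# Recent SDK versions use modal.App; older ones called this modal.Stub.
app = modal.App("embeddings-sketch")

# The environment is declared next to the code that runs in it: no YAML.
image = modal.Image.debian_slim().pip_install("sentence-transformers")

@app.function(image=image, gpu="A100", retries=3, timeout=600)
def embed(batch: list[str]) -> list[list[float]]:
    # Import inside the function so it resolves in the remote container.
    from sentence_transformers import SentenceTransformer
    model = SentenceTransformer("all-MiniLM-L6-v2")
    return model.encode(batch).tolist()

@app.local_entrypoint()
def main():
    batches = [["hello world"], ["truly serverless"]]
    # Fan out one container per batch; you pay only while they run --
    # the "truly serverless" wishlist item above.
    for vectors in embed.map(batches):
        print(len(vectors[0]), "dims")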
Enjoy! Show Notes* Modal* ErikBot* Erik's Blog* Software Infra 2.0 Wishlist* Luigi* Annoy* Hetzner* CoreWeave* Cloudflare FaaS* Poolside AI* Modular Inference Engine Chapters* [00:00:00] Introductions* [00:02:00] Erik's OSS work at Spotify: Annoy and Luigi* [00:06:22] Starting Modal* [00:07:54] Vision for a "postmodern data stack"* [00:10:43] Solving container cold start problems* [00:12:57] Designing Modal's Python SDK* [00:15:18] Self-Provisioning Runtime* [00:19:14] Truly Serverless Infrastructure* [00:20:52] Beyond model inference* [00:22:09] Tricks to maximize GPU utilization* [00:26:27] Differences in AI and data science workloads* [00:28:08] Modal vs Replicate vs Modular and lessons from Heroku's "graduation problem"* [00:34:12] Creating Erik's clone "ErikBot"* [00:37:43] Enabling massive parallelism across thousands of GPUs* [00:39:45] The Modal Sandbox for agents* [00:43:51] Thoughts on the AI Inference War* [00:49:18] Erik's best tweets* [00:51:57] Why buying hardware is a waste of money* [00:54:18] Erik's competitive programming backgrounds* [00:59:02] Why does Sweden have the best Counter Strike players?* [00:59:53] Never owning a car or TV* [01:00:21] Advice for infrastructure startups Transcript Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.Swyx [00:00:14]: Hey, and today we have in the studio Erik Bernhardsson from Modal. Welcome.Erik [00:00:19]: Hi. It's awesome being here.Swyx [00:00:20]: Yeah. Awesome seeing you in person. I've seen you online for a number of years as you were building on Modal and I think you're just making a San Francisco trip just to see people here, right? I've been to like two Modal events in San Francisco here.Erik [00:00:34]: Yeah, that's right. We're based in New York, so I figured sometimes I have to come out to the capital of AI and make a presence.Swyx [00:00:40]: What do you think are the pros and cons of building in New York?Erik [00:00:45]: I mean, I never built anything elsewhere. I lived in New York the last 12 years. I love the city. Obviously, there's a lot more stuff going on here and there's a lot more customers and that's why I'm out here. I do feel like for me, where I am in life, I'm a very boring person. I kind of work hard and then I go home and hang out with my kids. I don't have time to go to events and meetups and stuff anyway. In that sense, New York is kind of nice. I walk to work every morning. It's like five minutes away from my apartment. It's very time efficient in that sense. Yeah.Swyx [00:01:10]: Yeah. It's also a good life. So we'll do a brief bio and then we'll talk about anything else that people should know about you. Actually, I was surprised to find out you're from Sweden. You went to college in KTH and your master's was in implementing a scalable music recommender system. Yeah.Erik [00:01:27]: I had no idea. Yeah. So I actually studied physics, but I grew up coding and I did a lot of programming competitions and then as I was thinking about graduating, I got in touch with an obscure music streaming startup called Spotify, which was then like 30 people. And for some reason, I convinced them, why don't I just come and write a master's thesis with you and I'll do some cool collaborative filtering, despite not knowing anything about collaborative filtering really. But no one knew anything back then.
So I spent six months at Spotify basically building a prototype of a music recommendation system and then turned that into a master's thesis. And then later when I graduated, I joined Spotify full time.Swyx [00:02:00]: So that was the start of your data career. You also wrote a couple of popular open source tools while you were there. Is that correct?Erik [00:02:09]: Yeah, that's right. I mean, I was at Spotify for seven years, so this is a long stint. And Spotify was a wild place early on and I mean, data space is also a wild place. I mean, it was like Hadoop cluster in the like foosball room on the floor. It was a lot of crude, like very basic infrastructure and I didn't know anything about it. And like I was hired to kind of figure out data stuff. And I started hacking on a recommendation system and then, you know, got sidetracked in a bunch of other stuff. I fixed a bunch of reporting things and set up A-B testing and started doing like business analytics and later got back to music recommendation system. And a lot of the infrastructure didn't really exist. Like there was like Hadoop back then, which is kind of bad and I don't miss it. But I spent a lot of time with that. As a part of that, I ended up building a workflow engine called Luigi, which is like briefly like somewhat like widely ended up being used by a bunch of companies. Sort of like, you know, kind of like Airflow, but like before Airflow. I think it did some things better, some things worse. I also built a vector database called Annoy, which is like for a while, it was actually quite widely used. In 2012, so it was like way before like all this like vector database stuff ended up happening. And funny enough, I was actually obsessed with like vectors back then. Like I was like, this is going to be huge. Like just give it like a few years. I didn't know it was going to take like nine years and then there's going to suddenly be like 20 startups doing vector databases in one year. So it did happen. In that sense, I was right. I'm glad I didn't start a startup in the vector database space. I would have started way too early. But yeah, that was, yeah, it was a fun seven years as part of it. It was a great culture, a great company.Swyx [00:03:32]: Yeah. Just to take a quick tangent on this vector database thing, because we probably won't revisit it but like, has anything architecturally changed in the last nine years?Erik [00:03:41]: I'm actually not following it like super closely. I think, you know, some of the best algorithms are still the same as like hierarchical navigable small world.Swyx [00:03:51]: Yeah. HNSW.Erik [00:03:52]: Exactly. I think now there's like product quantization, there's like some other stuff that I haven't really followed super closely. I mean, obviously, like back then it was like, you know, it's always like very simple. It's like a C++ library with Python bindings and you could mmap big files into memory and do lookups. I used like this kind of recursive, like hyperspace splitting strategy, which is not that good, but it sort of was good enough at that time. But I think a lot of like HNSW is still like what people generally use. Now of course, like databases are much better in the sense like to support like inserts and updates and stuff like that. I know I never supported that. Yeah, it's sort of exciting to finally see like vector databases becoming a thing.Swyx [00:04:30]: Yeah. Yeah.
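(A quick aside for readers: the “recursive hyperspace splitting strategy” Erik describes is essentially a random projection tree. Annoy itself is a C++ forest of many such trees over mmap'd files, with candidates unioned across trees; the toy Python below only sketches the splitting idea, not Annoy's actual implementation:)

import numpy as np

def build_tree(points, ids, leaf_size=16, rng=np.random.default_rng(0)):
    # Recursively split the points with random hyperplanes, Annoy-style.
    if len(ids) <= leaf_size:
        return {"leaf": ids}
    # Two random points define a splitting hyperplane equidistant to both.
    a, b = rng.choice(ids, size=2, replace=False)
    normal = points[a] - points[b]
    offset = normal @ ((points[a] + points[b]) / 2)
    side = points[ids] @ normal > offset
    left, right = ids[side], ids[~side]
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
        return {"leaf": ids}
    return {"normal": normal, "offset": offset,
            "left": build_tree(points, left, leaf_size, rng),
            "right": build_tree(points, right, leaf_size, rng)}

def query(node, points, q):
    # Walk down to q's leaf, then brute-force the few candidates there.
    while "leaf" not in node:
        node = node["left"] if q @ node["normal"] > node["offset"] else node["right"]
    cand = node["leaf"]
    return cand[np.argmin(np.linalg.norm(points[cand] - q, axis=1))]

pts = np.random.default_rng(1).normal(size=(1000, 32))
tree = build_tree(pts, np.arange(1000))
print("approximate nearest neighbor of point 0:", query(tree, pts, pts[0]))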
And then maybe one takeaway on the most interesting lesson from Daniel Ek?Erik [00:04:36]: I mean, I think Daniel Ek, you know, he started Spotify very young. Like he was like 25, something like that. And that was like a good lesson. But like he, in a way, like I think he was a very good leader. Like there was never anything like, no scandals or like no, he wasn't very eccentric at all. It was just kind of like very like level headed, like just like ran the company very well, like never made any like obvious mistakes or I think it was like a few bets that maybe like in hindsight were like a little, you know, like took us, you know, too far in one direction or another. But overall, I mean, I think he was a great CEO, like definitely, you know, up there, like generational CEO, at least for like Swedish startups.Swyx [00:05:09]: Yeah, yeah, for sure. Okay, we should probably move to make our way towards Modal. So then you spent six years as CTO of Better. You were an early engineer and then you scaled up to like 300 engineers.Erik [00:05:21]: I joined as a CTO when there was like no tech team. And yeah, that was a wild chapter in my life. Like the company did very well for a while. And then like during the pandemic, yeah, it was kind of a weird story, but yeah, it kind of collapsed.Swyx [00:05:32]: Yeah, laid off people poorly.Erik [00:05:34]: Yeah, yeah. It was like a bunch of stories. Yeah. I mean, the company like grew from like 10 people when I joined to 10,000, now it's back to a thousand. But yeah, they actually went public a few months ago, kind of crazy. They're still around, like, you know, they're still, you know, doing stuff. So yeah, very kind of interesting six years of my life for non-technical reasons, like I managed like three, four hundred, but yeah, like learning a lot of that, like recruiting. I spent all my time recruiting and stuff like that. And so managing at scale, it's like nice, like now in a way, like when I'm building my own startup. It's actually something I like, don't feel nervous about at all. Like I've managed at scale, like I feel like I can do it again. It's like very different things that I'm nervous about as a startup founder. But yeah, I started Modal three years ago after sort of, after leaving Better, I took a little bit of time off during the pandemic and, but yeah, pretty quickly I was like, I got to build something. I just want to, you know. Yeah. And then yeah, Modal took form in my head, took shape.Swyx [00:06:22]: And as far as I understand, and maybe we can sort of trade off questions. So the quick history is started Modal in 2021, got your seed with Sarah from Amplify in 2022. You just announced your Series A with Redpoint. That's right. And that brings us up to mostly today. Yeah. Most people, I think, were expecting you to build for the data space.Erik: But it is the data space.Swyx: When I think of data space, I come from like, you know, Snowflake, BigQuery, you know, Fivetran, Airbyte, that kind of stuff. And what Modal became is more general purpose than that. Yeah.Erik [00:06:53]: Yeah. I don't know. It was like fun. I actually ran into like Edo Liberty, the CEO of Pinecone, like a few weeks ago. And he was like, I was so afraid you were building a vector database. No, I started Modal because, you know, like in a way, like I work with data, like throughout most of my career, like every different part of the stack, right?
Like I've done everything, like business analytics to like deep learning, you know, like building, you know, training neural networks at scale, like everything in between. And so one of the thoughts, like, and one of the observations I had when I started Modal or like why I started was like, I just wanted to make, build better tools for data teams. And like very, like sort of abstract thing, but like, I find that the data stack is, you know, full of like point solutions that don't integrate well. And still, when you look at like data teams today, you know, like every startup ends up building their own internal Kubernetes wrapper or whatever. And you know, all the different data engineers and machine learning engineers end up kind of struggling with the same things. So I started thinking about like, how do I build a new data stack, which is kind of a megalomaniac project, like, because you kind of want to like throw out everything and start over.Swyx [00:07:54]: It's almost a modern data stack.Erik [00:07:55]: Yeah, like a postmodern data stack. And so I started thinking about that. And a lot of it came from like, like more focused on like the human side of like, how do I make data teams more productive? And like, what is the technology tools that they need? And like, you know, drew out a lot of charts of like, how the data stack looks, you know, what are different components. And it's actually very interesting, like workflow scheduling, because it kind of sits in like a nice sort of, you know, it's like a hub in the graph of like data products. But it was kind of hard to like, kind of do that in a vacuum, and also to monetize it to some extent. I got very interested in like the layers below at some point. And like, at the end of the day, like most people have code that has to run somewhere. So I think about like, okay, well, how do you make that nice? Like how do you make that? And in particular, like the thing I always like thought about, like developer productivity is like, I think the best way to measure developer productivity is like in terms of the feedback loops, like how quickly when you iterate, like when you write code, like how quickly can you get feedback. And at the innermost loop, it's like writing code and then running it. And like, as soon as you start working with the cloud, like it's like takes minutes suddenly, because you have to build a Docker container and push it to the cloud and like run it, you know. So that was like the initial focus for me was like, I just want to solve that problem. Like I want to, you know, build something that lets you run things in the cloud and like retain the sort of, you know, the joy of productivity as when you're running things locally. And in particular, I was quite focused on data teams, because I think they had a couple unique needs that weren't well served by the infrastructure at that time, or like still aren't, in particular, like Kubernetes, I feel like it's like kind of worked okay for back end teams, but not so well for data teams. And very quickly, I got sucked into like a very deep like rabbit hole of like...Swyx [00:09:24]: Not well for data teams because of burstiness. Yeah, for sure.Erik [00:09:26]: So like burstiness is like one thing, right? Like, you know, like you often have this like fan out, you want to like apply some function over very large data sets.
Another thing tends to be like hardware requirements, like you need like GPUs and like, I've seen this in many companies, like you go, you know, data scientists go to a platform team and they're like, can we add GPUs to the Kubernetes? And they're like, no, like, that's, you know, complex, and we're not gonna, so like just getting GPU access. And then like, I mean, I also like data code, like frankly, or like machine learning code like tends to be like, super annoying in terms of like environments, like you end up having like a lot of like custom, like containers and like environment conflicts. And like, it's very hard to set up like a unified container that like can serve like a data scientist, because like, there's always like packages that break. And so I think there's a lot of different reasons why the technology wasn't well suited for data teams. And I think the attitude at that time is often like, you know, like you had friction between the data team and the platform team, like, well, it works for the back end stuff, you know, why don't you just like, you know, make it work. But like, I actually felt like data teams, you know, or at this point now, like there's so much, so many people working with data, and like they, to some extent, like deserve their own tools and their own tool chains, and like optimizing for that is not something people have done. So that's, that's sort of like very abstract philosophical reason why I started Modal. And then, and then I got sucked into this like rabbit hole of like container cold start and, you know, like whatever, Linux, page cache, you know, file system optimizations.Swyx [00:10:43]: Yeah, tell people, I think the first time I met you, I think you told me some numbers, but I don't remember, like, what were the main things that made you unhappy with the status quo? And then you built your own container stack?Erik [00:10:52]: Yeah, I mean, like, in particular, it was like, in order to have that loop, right? You want to be able to start, like take code on your laptop, whatever, and like run in the cloud very quickly, and like running in custom containers, and maybe like spin up like 100 containers, 1000, you know, things like that. And so container cold start was the initial like, from like a developer productivity point of view, it was like, really, what I was focusing on is, I want to take code, I want to stick it in container, I want to execute in the cloud, and like, you know, make it feel like fast. And when you look at like, how Docker works, for instance, like Docker, you have this like, fairly convoluted, like very resource inefficient way, they, you know, you build a container, you upload the whole container, and then you download it, and you run it. And Kubernetes is also like, not very fast at like starting containers. So like, I started kind of like, you know, going a layer deeper, like Docker is actually like, you know, there's like a couple of different primitives, but like a lower level primitive is runc, which is like a container runner. And I was like, what if I just take the container runner, like runc, and I point it to like my own root file system, and then I built like my own virtual file system that exposes files over a network instead.
And that was like the sort of very crude version of Modal, it's like now I can actually start containers very quickly, because it turns out like when you start a Docker container, like, first of all, like most Docker images are like several gigabytes, and like 99% of that is never going to be consumed, like there's a bunch of like, you know, like timezone information for like Uzbekistan, like no one's going to read it. And then there's a very high overlap between the files that are going to be read, there's going to be like libtorch or whatever, like it's going to be read. So you can also cache it very well. So that was like the first sort of stuff we started working on was like, let's build this like container file system. And you know, coupled with like, you know, just using runc directly. And that actually enabled us to like, get to this point of like, you write code, and then you can launch it in the cloud within like a second or two, like something like that. And you know, there's been many optimizations since then, but that was sort of the starting point.Alessio [00:12:33]: Can we talk about the developer experience as well, I think one of the magic things about Modal is at the very basic layers, like a Python function decorator, it's just like stub and whatnot. But then you also have a way to define a full container, what were kind of the design decisions that went into it? Where did you start? How easy did you want it to be? And then maybe how much complexity did you then add on to make sure that every use case fit?Erik [00:12:57]: I mean, Modal, I almost feel like it's like almost like two products kind of glued together. Like there's like the low level like container runtime, like file system, all that stuff like in Rust. And then there's like the Python SDK, right? Like how do you express applications? And I think, I mean, Swyx, like I think your blog was like the self-provisioning runtime was like, to me, always like to sort of, for me, like an eye-opening thing. It's like, so I didn't think about like...Swyx [00:13:15]: You wrote your post four months before me. Yeah? The software 2.0, Infra 2.0. Yeah.Erik [00:13:19]: Well, I don't know, like convergence of minds. I guess we were like both thinking. Maybe you put, I think, better words than like, you know, maybe something I was like thinking about for a long time. Yeah.Swyx [00:13:29]: And I can tell you how I was thinking about it on my end, but I want to hear you say it.Erik [00:13:32]: Yeah, yeah, I would love to. So to me, like what I always wanted to build was like, I don't know, like, I don't know if you use like Pulumi. Like Pulumi is like nice, like in the sense, like it's like Pulumi is like you describe infrastructure in code, right? And to me, that was like so nice. Like finally I can like, you know, put a for loop that creates S3 buckets or whatever. And I think like Modal sort of goes one step further in the sense that like, what if you also put the app code inside the infrastructure code and like glue it all together and then like you only have one single place that defines everything and it's all programmable. You don't have any config files. Like Modal has like zero config. There's no config. It's all code. And so that was like the goal that I wanted, like part of that. And then the other part was like, I often find that so much of like my time was spent on like the plumbing between containers.
And so my thing was like, well, if I just build this like Python SDK and make it possible to like bridge like different containers, just like a function call, like, and I can say, oh, this function runs in this container and this other function runs in this container and I can just call it just like a normal function, then, you know, I can build these applications that may span a lot of different environments. Maybe they fan out, start other containers, but it's all just like inside Python. You just like have this beautiful kind of nice like DSL almost for like, you know, how to control infrastructure in the cloud. So that was sort of like how we ended up with the Python SDK as it is, which is still evolving all the time, by the way. We keep changing syntax quite a lot because I think it's still somewhat exploratory, but we're starting to converge on something that feels like reasonably good now.Swyx [00:14:54]: Yeah. And along the way you, with this expressiveness, you enabled the ability to, for example, attach a GPU to a function. Totally.Erik [00:15:02]: Yeah. It's like you just like say, you know, on the function decorator, you're like GPU equals, you know, A100 and then or like GPU equals, you know, A10 or T4 or something like that. And then you get that GPU and like, you know, you just run the code and it runs like you don't have to, you know, go through hoops to, you know, start an EC2 instance or whatever.Swyx [00:15:18]: Yeah. So it's all code. Yeah. So one of the reasons I wrote Self-Provisioning Runtimes was I was working at AWS and we had AWS CDK, which is kind of like, you know, the Amazon basics Pulumi. Yeah, totally. And then, and then like it creates, it compiles to CloudFormation. Yeah. And then on the other side, you have to like get all the config stuff and then put it into your application code and make sure that they line up. So then you're writing code to define your infrastructure, then you're writing code to define your application. And I was just like, this is like obvious that it's going to converge, right? Yeah, totally.Erik [00:15:48]: But isn't there like, it might be wrong, but like, was it like SAM or Chalice or one of those? Like, isn't that like an AWS thing that where actually they kind of did that? I feel like there's like one.Swyx [00:15:57]: SAM. Yeah. Still very clunky. It's not, not as elegant as modal.Erik [00:16:03]: I love AWS for like the stuff it's built, you know, like historically in order for me to like, you know, what it enables me to build, but like AWS has always like struggled with developer experience.Swyx [00:16:11]: I mean, they have to not break things.Erik [00:16:15]: Yeah. Yeah. And totally. And they have to build products for a very wide range of use cases. And I think that's hard.Swyx [00:16:21]: Yeah. Yeah. So it's, it's easier to design for. Yeah. So anyway, I was, I was pretty convinced that this, this would happen. I wrote, wrote that thing. And then, you know, imagine my surprise that you guys had it on your landing page at some point. I think, I think Akshat was just like, just throw that in there.Erik [00:16:34]: Did you trademark it?Swyx [00:16:35]: No, I didn't. But I definitely got sent a few pitch decks with my post on there and it was like really interesting. This is my first time like kind of putting a name to a phenomenon. And I think this is a useful skill for people to just communicate what they're trying to do.Erik [00:16:48]: Yeah. No, I think it's a beautiful concept.Swyx [00:16:50]: Yeah. Yeah. Yeah.
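(Since the decorator API comes up a few times here, a second sketch of the “bridge different containers with a function call” idea Erik describes: two functions, two images, glued together like plain Python. Again, the names follow Modal's documented SDK as we understand it, but read them as illustrative, not authoritative:)

import modal

app = modal.App("pipeline-sketch")

# Each function gets its own environment; the decorator is the plumbing.
scrape_image = modal.Image.debian_slim().pip_install("requests", "beautifulsoup4")
gpu_image = modal.Image.debian_slim().pip_install("torch", "transformers")

@app.function(image=scrape_image)
def scrape(url: str) -> str:
    import requests
    from bs4 import BeautifulSoup
    return BeautifulSoup(requests.get(url).text, "html.parser").get_text()

@app.function(image=gpu_image, gpu="A10G")
def summarize(text: str) -> str:
    from transformers import pipeline
    # Default summarization model; fine for a sketch.
    return pipeline("summarization")(text[:4000])[0]["summary_text"]

@app.local_entrypoint()
def main():
    # Two different containers, one ordinary-looking call chain.
    text = scrape.remote("https://example.com")
    print(summarize.remote(text))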
But I mean, obviously you implemented it. What became more clear in your explanation today is that actually you're not that tied to Python.Erik [00:16:57]: No. I mean, I, I think that all the like lower level stuff is, you know, just running containers and like scheduling things and, you know, serving container data and stuff. So like one of the benefits of data teams is obviously like they're all like using Python, right? And so that made it a lot easier. I think, you know, if we had focused on other workloads, like, you know, for various reasons, we've like been kind of like half thinking about like CI or like things like that. But like, in a way that's like harder because like you also, then you have to be like, you know, multiple SDKs, whereas, you know, focusing on data teams, you can only, you know, Python like covers like 95% of all teams. That made it a lot easier. But like, I mean, like definitely like in the future, we're going to have, you know, support for other languages. JavaScript for sure is the obvious next language. But you know, who knows, like, you know, Rust, Go, R, whatever, PHP, Haskell, I don't know.Swyx [00:17:42]: You know, I think for me, I actually am a person who like kind of liked the idea of programming language advancements being improvements in developer experience. But all I saw out of the academic sort of PLT type people is just type level improvements. And I always think like, for me, like one of the core reasons for self-provisioning runtimes and then why I like Modal is like, this is actually a productivity increase, right? Like, it's a language level thing, you know, you managed to stick it on top of an existing language, but it is your own language, a DSL on top of Python. And so language level increase on the order of like automatic memory management. You know, you could sort of make that analogy that like, maybe you lose some level of control, but most of the time you're okay with whatever Modal gives you. And like, that's fine. Yeah.Erik [00:18:26]: Yeah. Yeah. I mean, that's how I look at it too. Like, you know, you look at developer productivity over the last number of decades, like, you know, it's come in like small increments of like, you know, dynamic typing is like one thing, because suddenly like for a lot of use cases, you don't need to care about type systems or better compiler technology or like, you know, the cloud or like, you know, relational databases. And, you know, I think, you know, you look at like that, you know, history, it's steady, you know, it's like, you know, developers have been getting like probably 10X more productive every decade for the last four decades or something, which is kind of crazy. Like on an exponential scale, we're not talking about 10X, we're talking about like a 10,000X, you know, improvement in developer productivity, because four decades of 10X compounds. What we can build today, you know, is arguably like, you know, a fraction of the cost of what it took to build it in the eighties. Maybe it wasn't even possible in the eighties. So that to me, like, that's like so fascinating. I think it's going to keep going for the next few decades. Yeah.Alessio [00:19:14]: Yeah. Another big thing in the infra 2.0 wishlist was truly serverless infrastructure. On your landing page, you called them native cloud functions, something like that. I think the issue I've seen with serverless has always been people really wanted it to be stateful, even though stateless was much easier to do.
And I think now with AI, most model inference is like stateless, you know, outside of the context. So that's kind of made it a lot easier to just put a model, like an AI model on Modal to run. How do you think about how that changes how people think about infrastructure too? Yeah.Erik [00:19:48]: I mean, I think Modal is definitely going in the direction of like doing more stateful things and working with data and like high IO use cases. I do think one like massive serendipitous thing that happened like halfway, you know, a year and a half into like the, you know, building Modal was like Gen AI started exploding and the IO pattern of Gen AI is like fits the serverless model like so well, because it's like, you know, you send this tiny piece of information, like a prompt, right, or something like that. And then like you have this GPU that does like trillions of flops, and then it sends back like a tiny piece of information, right. And that turns out to be something like, you know, if you can get serverless working with GPU, that just like works really well, right. So I think from that point of view, like serverless always to me felt like a little bit of like a solution looking for a problem. I don't actually like don't think like backend is like the problem that needs to serve it or like not as much. But I look at data and in particular, like things like Gen AI, like model inference, like it's like clearly a good fit. So I think that is, you know, to a large extent explains like why we saw, you know, the initial sort of like killer app for Modal being model inference, which actually wasn't like necessarily what we were focused on. But that's where we've seen like by far the most usage. Yeah.Swyx [00:20:52]: And this was before you started offering like fine tuning of language models, it was mostly Stable Diffusion. Yeah.Erik [00:20:59]: Yeah. I mean, like Modal, like I always built it to be a very general purpose compute platform, like something where you can run everything. And I used to call Modal like a better Kubernetes for data teams for a long time. What we realized was like, yeah, that's like, you know, a year and a half in, like we barely had any users or any revenue. And like we were like, well, maybe we should look at like some use case, trying to think of use cases. And that was around the same time Stable Diffusion came out. And the beauty of Modal is like you can run almost anything on Modal, right? Like model inference turned out to be like the place where we found initially, well, like clearly this has like 10x like better ergonomics than anything else. But we're also like, you know, going back to my original vision, like we're thinking a lot about, you know, now, okay, now we do inference really well. Like what about training? What about fine tuning? What about, you know, end-to-end lifecycle deployment? What about data pre-processing? What about, you know, I don't know, real-time streaming? What about, you know, large data munging? And there's data observability. I think there's so many things, like kind of going back to what I said about like redefining the data stack, like starting with the foundation of compute.
Like one of the exciting things about Modal is like we've sort of, you know, we've been working on that for three years and it's maturing, but like there's so many things you can do like with just like a better compute primitive and also go up the stack and like do all this other stuff on top of it.Alessio [00:22:09]: How do you think about, or rather, like I would love to learn more about the underlying infrastructure and like how you make that happen, because with fine tuning and training, it's static memory. Like you know exactly what you're going to load in memory, and it's kind of like a set amount of compute, versus inference, where the traffic is like very bursty. How do you make batches work with a serverless developer experience? You know, like what are like some fun technical challenges you solved to make sure you get max utilization on these GPUs? What we hear from people is like, we have GPUs, but we can really only get like, you know, 30, 40, 50% maybe utilization. What's some of the fun stuff you're working on to get a higher number there?Erik [00:22:48]: Yeah, I think on the inference side, like that's where we like, you know, like from a cost perspective, like utilization perspective, we've seen, you know, like very good numbers and in particular, like it's our ability to start containers and stop containers very quickly. And that means that we can auto scale extremely fast and scale down very quickly, which means like we can always adjust the sort of capacity, the number of GPUs running to the exact traffic volume. And so in many cases, like that actually leads to a sort of interesting thing where like we obviously run our things on like the public cloud, like AWS, GCP, we run on Oracle, but in many cases, like users who do inference on those platforms or those clouds, even though we charge a slightly higher price per GPU hour, a lot of users like moving their large scale inference use cases to Modal, they end up saving a lot of money because we only charge for like the time the GPU is actually running. And that's a hard problem, right? Like, you know, if you have to constantly adjust the number of machines, if you have to start containers, stop containers, like that's a very hard problem. Starting containers quickly is a very difficult thing. I mentioned we had to build our own file system for this. We also, you know, built our own container scheduler for that. We've implemented recently CPU memory checkpointing so we can take running containers and snapshot the entire CPU, like including registers and everything, and restore it from that point, which means we can restore it from an already-initialized state. We're looking at GPU checkpointing next, it's like a very interesting thing. So I think with inference stuff, that's where serverless really shines because you can drive, you know, you can push the frontier of latency versus utilization quite substantially, you know, which either ends up being a latency advantage or a cost advantage or both, right? On training, it's probably arguably like less of an advantage doing serverless, frankly, because you know, you can just like spin up a bunch of machines and try to satisfy, like, you know, train as much as you can on each machine. For that area, like we've seen, like, you know, arguably like less usage, like for Modal, but there are always like some interesting use case.
Like we do have a couple of customers, like Ramp, for instance, like they do fine tuning with Modal and they basically like one of the patterns they have is like very bursty type fine tuning where they fine tune 100 models in parallel. And that's like a separate thing that Modal does really well, right? Like you can, we can start up 100 containers very quickly, run a fine tuning training job on each one of them that only runs for, I don't know, 10, 20 minutes. And then, you know, you can do hyper parameter tuning in that sense, like just pick the best model and things like that. So there are like interesting training use cases. I think when you get to like training, like very large foundational models, that's a use case we don't support super well, because that's very high IO, you know, you need to have like InfiniBand and all these things. And those are things we haven't supported yet and might take a while to get to that. So that's like probably like an area where like we're relatively weak in. Yeah.Alessio [00:25:12]: Have you cared at all about lower level model optimization? There's other cloud providers that do custom kernels to get better performance, or are you just, given that you're not just an AI compute company? Yeah.Erik [00:25:24]: I mean, I think like we want to support like generic, like general workloads in a sense that like we want users to give us a container essentially, or code. And then we want to run that. So I think, you know, we benefit from those things in the sense that like we can tell our users, you know, to use those things. But I don't know if we want to like poke into users' containers and like do those things automatically. That's sort of, I think a little bit tricky from the outside to do, because we want to be able to take like arbitrary code and execute it. But certainly like, you know, we can tell our users to like use those things. Yeah.Swyx [00:25:53]: I may have betrayed my own biases because I don't really think about Modal as for data teams anymore. I think you started, I think you're much more for AI engineers. My favorite anecdote, which I think, you know, but I don't know if you directly experienced it. I went to the Vercel AI Accelerator, which you supported. And in the Vercel AI Accelerator, a bunch of startups gave like free credits and like signups and talks and all that stuff. The only ones that stuck are the ones that actually appealed to engineers. And the top usage, the top tool used by far was Modal.Erik [00:26:24]: That's awesome.Swyx [00:26:25]: For people building with AI apps. Yeah.Erik [00:26:27]: I mean, it might be also like a terminology question, like the AI versus data, right? Like I've, you know, maybe I'm just like old and jaded, but like, I've seen so many like different titles, like for a while it was like, you know, I was a data scientist and a machine learning engineer and then, you know, there was like analytics engineers and there was like an AI engineer, you know? So like, to me, it's like, I just like in my head, that's to me just like, just data, like, or like engineer, you know, like I don't really, so that's why I've been like, you know, just calling it data teams. But like, of course, like, you know, AI is like, you know, like such a massive fraction of our like workloads.Swyx [00:26:59]: It's a different Venn diagram of things you do, right?
So the stuff that you're talking about where you need like InfiniBand for like highly parallel training, that's not, that's more of the ML engineer, that's more of the research scientist and less of the AI engineer, which is more sort of trying to work at the application layer.Erik [00:27:16]: Yeah. I mean, to be fair to it, like we have a lot of users that are like doing stuff that I don't think fits neatly into like AI. Like we have a lot of people using like Modal for web scraping, like it's kind of nice. You can just like, you know, fire up like a hundred or a thousand containers running Chromium and just like render a bunch of webpages and it takes, you know, whatever. Or like, you know, protein folding is that, I mean, maybe that's, I don't know, like, but like, you know, we have a bunch of users doing that or, or like, you know, in terms of, in the realm of biotech, like sequence alignment, like people using, or like a couple of people using like Modal to run like large, like mixed integer programming problems, like, you know, using Gurobi or like things like that. So video processing is another thing that keeps coming up, like, you know, let's say you have like petabytes of video and you want to just like transcode it, like, or you can fire up a lot of containers and just run FFmpeg or like, so there are those things too. Like, I mean, like that being said, like AI is by far our biggest use case, but you know, like, again, like Modal is kind of general purpose in that sense.Swyx [00:28:08]: Yeah. Well, maybe I'll stick to the Stable Diffusion thing and then we'll move on to the other use cases for AI that you want to highlight. The other big player in my mind is Replicate. Yeah. In this, in this era, they're much more, I guess, custom built for that purpose, whereas you're more general purpose. How do you position yourself with them? Are they just for like different audiences or are you just head-on competing?Erik [00:28:29]: I think there's like a tiny sliver of the Venn diagram where we're competitive. And then like 99% of the area we're not competitive. I mean, I think for people who, if you look at like front-end engineers, I think that's where like really they found good fit is like, you know, people who built some cool web app and they want some sort of AI capability and they just, you know, an off the shelf model is like perfect for them. That's like, just use Replicate. That's great. I think where we shine is like custom models or custom workflows, you know, running things at very large scale. We need to care about utilization, care about costs. You know, we have much lower prices because we spend a lot more time optimizing our infrastructure, you know, and that's where we're competitive, right? Like, you know, and you look at some of the use cases, like Suno is a big user, like they're running like large scale, like AI. Oh, we're talking with Mikey.Swyx [00:29:12]: Oh, that's great. Cool.Erik [00:29:14]: In a month. Yeah. So, I mean, they're, they're using Modal for like production infrastructure. Like they have their own like custom model, like custom code and custom weights, you know, for AI generated music, Suno.AI, you know, that, that, those are the types of use cases that we like, you know, things that are like very custom or like, it's like, you know, and those are the things like it's very hard to run on Replicate, right? And that's fine.
Like I think they, they focus on a very different part of the stack in that sense.Swyx [00:29:35]: And then the other company pattern that I pattern match you to is Modular. I don't know.Erik [00:29:40]: Because of the names?Swyx [00:29:41]: No, no. Wow. No, but yeah, yes, the name is very similar. I think there's something that might be insightful there from a linguistics point of view. Oh no, they have Mojo, the sort of Python SDK. And they have the Modular Inference Engine, which is their sort of their cloud stack, their sort of compute inference stack. I don't know if anyone's made that comparison to you before, but like I see you evolving a little bit in parallel there.Erik [00:30:01]: No, I mean, maybe. Yeah. Like it's not a company I'm like super like familiar, like, I mean, I know the basics, but like, I guess they're similar in the sense like they want to like do a lot of, you know, they have sort of big picture vision.Swyx [00:30:12]: Yes. They also want to build very general purpose. Yeah. So they're marketing themselves as like, if you want to do off the shelf stuff, go out, go somewhere else. If you want to do custom stuff, we're the best place to do it. Yeah. Yeah. There is some overlap there. There's not overlap in the sense that you are a closed source platform. People have to host their code on you. That's true. Whereas for them, they're very insistent on not running their own cloud service. They're a box software. Yeah. They're licensed software.Erik [00:30:37]: I'm sure their VCs at some point going to force them to reconsider. No, no.Swyx [00:30:40]: Chris is very, very insistent and very convincing. So anyway, I would just make that comparison, let people make the links if they want to. But it's an interesting way to see the cloud market develop from my point of view, because I came up in this field thinking cloud is one thing, and I think your vision is like something slightly different, and I see the different takes on it.Erik [00:31:00]: Yeah. And like one thing I've, you know, like I've written a bit about it in my blog too, it's like I think of us as like a second layer of cloud provider in the sense that like I think Snowflake is like kind of a good analogy. Like Snowflake, you know, is infrastructure as a service, right? But they actually run on the like major clouds, right? And I mean, like you can like analyze this very deeply, but like one of the things I always thought about is like, why does Snowflake arbitrarily like win over Redshift? And I think Snowflake, you know, to me, one, because like, I mean, in the end, like AWS makes all the money anyway, like and like Snowflake just had the ability to like focus on like developer experience or like, you know, user experience. And to me, like really proved that you can build a cloud provider, a layer up from, you know, the traditional like public clouds. And in that layer, that's also where I would put Modal, it's like, you know, we're building a cloud provider, like we're, you know, we're like a multi-tenant environment that runs the user code. But we're also building on top of the public cloud. So I think there's a lot of room in that space, I think is very sort of interesting direction.Alessio [00:31:55]: How do you think of that compared to the traditional past history, like, you know, you had AWS, then you had Heroku, then you had Render, Railway.Erik [00:32:04]: Yeah, I mean, I think those are all like great. I think the problem that they all faced was like the graduation problem, right? 
Like, you know, Heroku or like, I mean, like also like Heroku, there's like a counterfactual future of like, what would have happened if Salesforce didn't buy them, right? Like, that's a sort of separate thing. But like, I think what Heroku, I think always struggled with was like, eventually companies would get big enough that you couldn't really justify running in Heroku. So they would just go and like move it to, you know, whatever AWS or, you know, in particular. And you know, that's something that keeps me up at night too, like, what does that graduation risk like look like for Modal? I always think like the only way to build a successful infrastructure company in the long run in the cloud today is you have to appeal to the entire spectrum, right? Or at least like the enterprise, like you have to capture the enterprise market. But the truly good companies capture the whole spectrum, right? Like I think of companies like, I don't know, like Datadog or Mongo or something that were like, they both captured like the hobbyists and acquired them, but also like, you know, have very large enterprise customers. I think that arguably was like where, in my opinion, like Heroku struggled was like, how do you maintain the customers as they get more and more advanced? I don't know what the solution is, but I think there's, you know, that's something I would have thought about deeply if I was at Heroku at that time.Alessio [00:33:14]: What's the AI graduation problem? Is it, I need to fine tune the model, I need better economics, any insights from customer discussions?Erik [00:33:22]: Yeah, I mean, better economics, certainly. But although like, I would say like, even for people who like, you know, need like thousands of GPUs, just because we can drive utilization so much better, like we, there's actually like a cost advantage of staying on Modal. But yeah, I mean, certainly like, you know, and like the fact that VCs like love, you know, throwing money, at least they used to, you know, at AI companies who need it to buy GPUs. I think that didn't help the problem. And in training, I think, you know, there's less software differentiation. So in training, I think there's certainly like better economics of like buying big clusters. But I mean, my hope is it's going to change, right? Like I think, you know, we're still pretty early in the cycle of like building AI infrastructure. And I think a lot of these companies over the long run, like, you know, they're, except maybe the super big ones, like, you know, Facebook and Google, they're always going to build their own ones. But like everyone else, to some extent, you know, I think they're better off like buying platforms. And, you know, someone's going to have to build those platforms.Swyx [00:34:12]: Yeah. Cool. Let's move on to language models and just specifically that workload just to flesh it out a little bit. You already said that Ramp is like fine tuning 100 models at once on Modal. Closer to home, my favorite example is ErikBot. Maybe you want to tell that story.Erik [00:34:30]: Yeah. I mean, it was a prototype thing we built for fun, but it's pretty cool. Like we basically built this thing that hooks up to Slack. It like downloads all the Slack history and, you know, fine-tunes a model based on a person. And then you can chat with that. And so you can like, you know, clone yourself and like talk to yourself on Slack. I mean, it's like a nice demo and it's just like, I think like it's like fully contained in Modal.
Like there's a Modal app that does everything, right? Like it downloads Slack, you know, integrates with the Slack API, like downloads the stuff, the data, like just runs the fine-tuning and then like creates like dynamically an inference endpoint. And it's all like self-contained and like, you know, a few hundred lines of code. So I think it's sort of a good kind of use case for, or like it kind of demonstrates a lot of the capabilities of Modal.Alessio [00:35:08]: Yeah. On a more personal side, how close did you feel ErikBot was to you?Erik [00:35:13]: It definitely captured the like the language. Yeah. I mean, I don't know, like the content, I always feel this way about like AI and it's gotten better. Like when you look at like AI output of text, like, and it's like, when you glance at it, it's like, yeah, this seems really smart, you know, but then you actually like look a little bit deeper. It's like, what does this mean?Swyx [00:35:32]: What does this person say?Erik [00:35:33]: It's like kind of vacuous, right? And that's like kind of what I felt like, you know, talking to like my clone version, like it's like says like things like the grammar is correct. Like some of the sentences make a lot of sense, but like, what are you trying to say? Like there's no content here. I don't know. I mean, it's like, I got that feeling also with ChatGPT in the like early versions. Right now it's like better, but.Alessio [00:35:51]: That's funny. So I built this thing called smol podcaster to automate a lot of our back office work, so to speak. And it's great at transcripts. It's great at doing chapters. And then I was like, okay, how about you come up with a short summary? And it's like, it sounds good, but it's like, it's not even in the same ballpark as, like, what we end up writing. Right. And it's hard to see how it's going to get there.Swyx [00:36:11]: Oh, I have ideas.Erik [00:36:13]: I'm certain it's going to get there, but like, I agree with you. Right. And like, I have the same thing. I don't know if you've read like AI generated books. Like they just like kind of seem funny, right? Like they're off, right? But like you glance at it and it's like, oh, it's kind of cool. Like looks correct, but then it's like very weird when you actually read them.Swyx [00:36:30]: Yeah. Well, so for what it's worth, I think anyone can join the Modal Slack. Is it open to the public? Yeah, totally.Erik [00:36:35]: If you go to modal.com, there's a button in the footer.Swyx [00:36:38]: Yeah. And then you can talk to ErikBot. And then sometimes I really like pinging ErikBot and then you answer afterwards, but then you're like, yeah, mostly correct or whatever. Any other broader lessons, you know, just broadening out from like the single use case of fine tuning, like what are you seeing people do with fine tuning or just language models on Modal in general? Yeah.Erik [00:36:59]: I mean, I think language models is interesting because so many people get started with APIs and that's just, you know, they're just dominating the space, in particular OpenAI, right? And that's not necessarily like a place where we aim to compete. I mean, maybe at some point, but like, it's just not like a core focus for us. And I think sort of separately, it's sort of a question of like, there's economics in that long term. But like, so we tend to focus on more like the areas like around it, right? Like fine tuning, like another use case we have is a bunch of people, Ramp included, is doing batch embeddings on Modal.
So let's say, actually, we're writing a blog post about this: we take all of Wikipedia and parallelize the embeddings, producing vectors for every article in 15 minutes. Those types of use cases, I think, Modal suits really well. I think also a lot of custom inference, like, yeah, I love that.Swyx [00:37:43]: Yeah. I think you should give people an idea of the order of magnitude of parallelism, because I think people don't understand how parallel it is. So your classic hello world with Modal is some kind of Fibonacci function, right? Yeah, we have a bunch of different ones. Some recursive function. Yeah.Erik [00:37:59]: Yeah. I mean, it's pretty easy in Modal to fan out to at least 100 GPUs in a few seconds. And if you give it a couple of minutes, you can fan out to thousands of GPUs. We run at relatively large scale, and we've run many thousands of GPUs at certain points when we needed big backfills or some customers had very large compute needs.Swyx [00:38:21]: Yeah. And I mean, that's super useful for a number of things. So one of my early interactions with Modal was with smol developer, which is my coding agent. The reason I chose Modal was a number of things. One, I just wanted to try it out; I had an excuse to try it, and Akshay offered to onboard me personally. But the most interesting thing was that you could have that local development experience, as it was running on my laptop, but then it would seamlessly translate to a cloud service or a cloud-hosted environment. And then it could fan out with concurrency controls. Because the number of times I hit the GPT-3 API at the time was subject to the rate limit, but I wanted to fan out without worrying about that kind of stuff. With Modal, I can just declare that in my config and that's it. Oh, like a concurrency limit?Erik [00:39:07]: Yeah. Yeah.Swyx [00:39:09]: Yeah. There's a lot of control. And that's why this is a pretty good use case for writing this kind of LLM application code inside an environment that just understands fan-out and rate limiting natively. You don't actually have an exposed queue system, but you have it under the hood, that kind of stuff. Totally.Erik [00:39:28]: It's a self-provisioning cloud.Swyx [00:39:30]: So the last part of Modal I wanted to touch on, and obviously feel free, I know you're working on new features, was the sandbox that was introduced last year. This is something that I think was inspired by Code Interpreter. You can tell me the longer history behind that.Erik [00:39:45]: Yeah. We originally built it for a use case where a bunch of customers who were looking into code generation applications came to us and asked, is there a safe way to execute code? And yeah, we spent a lot of time on container security. We use gVisor, for instance, which is a Google project that provides pretty strong isolation of code. So we built a product where you can basically run arbitrary code inside a container and monitor its output, or get it back, in a safe way.
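As a concrete sketch of that primitive, something like the following runs an untrusted snippet in an isolated container via Modal's Sandbox API. The exact keyword arguments, in particular `block_network`, are assumptions on my part, so verify against the current SDK docs:

```python
import modal

# Look up (or create) an app to attach the sandbox to.
app = modal.App.lookup("sandbox-demo", create_if_missing=True)

untrusted_code = "print(sum(i * i for i in range(10)))"

# Launch the snippet in an isolated (gVisor-backed) container.
# block_network is an assumption; check the Sandbox API for the real flag.
sb = modal.Sandbox.create(
    "python", "-c", untrusted_code,
    app=app,
    timeout=60,
    block_network=True,
)
sb.wait()
print(sb.stdout.read())  # "285" -- output captured from the sandboxed process
```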
I mean, over time it's evolved into more of, I think the long-term direction is actually more interesting, which is that I think the core container infrastructure we offer could actually be unbundled from the client SDK and offered to others. We're talking to a couple of other companies that want to, through their own packages, run and execute jobs on Modal programmatically. So that's actually the direction Sandboxes are going. It's turning into more of a platform for platforms, is kind of how I've been thinking about it.Swyx [00:40:45]: Oh boy. Platform for platforms. That's the old Kubernetes line.Erik [00:40:48]: Yeah. Yeah. But, you know, having that ability to programmatically create containers and execute them, I think, is really cool. And I think it opens up a lot of interesting capabilities that are sort of separate from the core Python SDK in Modal. So I'm really excited about it. It's one of those features that we released and then, you know, we look at what users actually build with it, and people are starting to build kind of crazy things. And then we double down on some of those things when we see potential new product features. And so Sandboxes, I think, are kind of in that direction; we found a lot of interesting use cases in the direction of a platformized container runner.Swyx [00:41:27]: Can you be more specific about what you're doubling down on after seeing users in action?Erik [00:41:32]: I mean, we're working with some companies that, without getting into specifics, need the ability to take their users' code and then launch containers on Modal. And it's not about security necessarily; they just want to use Modal as a back end, right? They may already provide Kubernetes as a back end, Lambda as a back end, and now they want to add Modal as a back end. And so they need a way to programmatically define jobs on behalf of their users and execute them. I don't know, that's kind of abstract, but does that make sense? I totally get it.Swyx [00:42:03]: It's sort of one level of recursion, to be the Modal for their customers.Erik [00:42:09]: Exactly.Swyx [00:42:10]: Yeah, exactly. And Cloudflare has done this. You know, Kenton Varda from Cloudflare, who's the tech lead on this thing, called it sort of functions as a service as a service.Erik [00:42:17]: Yeah, that's exactly right. FaaSaaS.Swyx [00:42:21]: FaaSaaS. Yeah, I mean, I think any base-layer, second-layer cloud provider, compute provider like yourself, should provide that. It's a mark of maturity and success that people just trust you to do that. They'd rather build on top of you than compete with you. The more interesting thing for me is: what does it mean to serve a computer, like an LLM developer, rather than a human developer, right? That's what a sandbox is to me: you have to redefine Modal to serve a different, non-human audience.Erik [00:42:51]: Yeah. Yeah, and I think there are some really interesting people building very cool things.Swyx [00:42:55]: Yeah. So I don't have an answer, but, you know, I imagine things like, hey, the way you give feedback is different.
Maybe you have to stream errors, log errors differently. I don't really know. Obviously, there are safety considerations. Maybe you have an API to restrict access to the web. I don't think anyone would use it, but it's there if you want it.Erik [00:43:17]: Yeah.Swyx [00:43:18]: Yeah. Any other sort of design considerations? I have no idea.Erik [00:43:21]: With sandboxes?Swyx [00:43:22]: Yeah.Erik [00:43:24]: Open-ended question here. Yeah. I mean, I think the network restrictions make a lot of sense. And I think, long-term, there are a lot of interesting use cases where the LLM itself can decide, I want to install these packages and run this thing. And obviously, for a lot of those use cases, you want to have some sort of control so that it doesn't install malicious stuff and steal your secrets and things like that. But that's what's exciting about the sandbox primitive: it lets you do that in a relatively safe way.Alessio [00:43:51]: Do you have any thoughts on the inference wars? A lot of providers are just racing to the bottom to get the lowest price per million tokens. Some of them, you know, are just losing money, and the physics of it just don't work out for them to make any money on it. How do you think about your pricing, like how much premium you can command, versus using lower prices as kind of a wedge to get in there, especially once you have Modal instrumented? What are the tradeoffs, and any thoughts on strategies that work?Erik [00:44:23]: I mean, we focus more on custom models and custom code, and I think in that space there's less competition, so I think we can have a pricing markup, right? People will always compare our prices to the raw GPU power they can get elsewhere, and so how big can that markup be? We can never charge 10x more, but we can certainly charge a premium. And for that reason, we can have pretty good margins. The LLM space is the opposite: the switching cost of LLMs is zero, at least for open source models. If all you're doing is using some inference endpoint that serves an open source model, and some other provider comes along and offers a lower price, you're just going to switch, right? So I don't know, to me that reminds me a lot of the 15-minute delivery wars, or, you know, Uber versus Lyft. And maybe going back even further, I think a lot about the flip side of this, which is actually a positive side: I thought a lot the other day about the fiber optics boom of '98, '99, and also the overinvestment in GPUs today. In the end, I don't think VCs will have the returns they expected in these things, but guess who's going to benefit: the consumers. Someone is reaping the value of this.
And that's, I think, the amazing flip side: we should be very grateful that VCs want to subsidize these things. You go back to fiber optics: there was an extreme overinvestment in fiber optic networks in '98, and no one who did that made money. But consumers got tremendous benefits from all the fiber optic cables that were laid throughout the country in the decades after. I feel something similar about the GPU overinvestment today.
The stories that kept us talking all year, and are only getting hotter! Plus the big flops we're still sore about. Special Guest: Kenji Berthold.
This is a recap of the top 10 posts on Hacker News on December 12th, 2023. This podcast was generated by wondercraft.ai
(00:37) 23andMe changed its terms of service to prevent hacked customers from suing. Original post: https://news.ycombinator.com/item?id=38613386&utm_source=wondercraft_ai
(02:16) YouTube doesn't want to take down scam ads. Original post: https://news.ycombinator.com/item?id=38611466&utm_source=wondercraft_ai
(03:50) FFmpeg lands CLI multi-threading as its "most complex refactoring" in decades. Original post: https://news.ycombinator.com/item?id=38613219&utm_source=wondercraft_ai
(05:27) 'Like we were lesser humans': Gaza boys, men recall Israeli arrest, torture. Original post: https://news.ycombinator.com/item?id=38616550&utm_source=wondercraft_ai
(07:05) Telecom Industry Is Mad Because the FCC Might Examine High Broadband Prices. Original post: https://news.ycombinator.com/item?id=38613552&utm_source=wondercraft_ai
(08:59) MemoryCache: Augmenting local AI with browser data. Original post: https://news.ycombinator.com/item?id=38614824&utm_source=wondercraft_ai
(10:55) Today Is One of the Biggest Surveillance Votes. Will the FBI Stop Spying? Original post: https://news.ycombinator.com/item?id=38611590&utm_source=wondercraft_ai
(12:45) What will enter the public domain in 2024? Original post: https://news.ycombinator.com/item?id=38616753&utm_source=wondercraft_ai
(14:35) The Case for Memory Safe Roadmaps. Original post: https://news.ycombinator.com/item?id=38611175&utm_source=wondercraft_ai
(17:10) E3 Is Officially Dead. Original post: https://news.ycombinator.com/item?id=38612779&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Today, we go over Av Convert, a program to batch convert videos and some audio files on the fly using FFmpeg. This program takes the power of the context menu and leverages it in ways that are convenient, fast, and safe. Best of all, this program is free to use. If you are interested in the program, you may visit the author's website. Support Welcome to TFFP! by contributing to their tip jar: https://tips.pinecast.com/jar/tffp Find out more at https://tffp.pinecast.co This podcast is powered by Pinecast.
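For the script-inclined, the same batch-conversion idea is a few lines of Python around FFmpeg. This is not Av Convert's implementation, just a minimal sketch; the folder names and codec choices are placeholders:

```python
import subprocess
from pathlib import Path

SRC = Path("videos")      # hypothetical input folder
OUT = Path("converted")   # hypothetical output folder
OUT.mkdir(exist_ok=True)

for src in SRC.glob("*.mkv"):
    dst = OUT / src.with_suffix(".mp4").name
    # -c:v libx264 re-encodes video to H.264; -c:a aac re-encodes audio.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx264", "-c:a", "aac", str(dst)],
        check=True,
    )
```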
https://youtu.be/uErL83xFw5M Forum Discussion Thread (https://forum.tuxdigital.com/t/242-blender-4-obs-30-red-hat-almalinux-1million-to-gnome-amp-more-linux-news/6064) On this episode of TWIL (242), we'll find out if it will blend with Blender 4.0. OBS project has released OBS Studio 30. AlmaLinux has a new release with 9.3. GNOME is getting funding of 1 Million euros. The Linux Foundation is creating a new foundation for High Performance Software. All of this and more on this episode of This Week in Linux, Your Source for Linux GNews! Download as MP3 (https://aphid.fireside.fm/d/1437767933/2389be04-5c79-485e-b1ca-3a5b2cebb006/4b05bcb2-39ac-49d4-8af1-94dcc29f866c.mp3) Supported by: LINBIT = https://thisweekinlinux.com/linbit Want to Support the Show? Become a Patron = https://tuxdigital.com/membership Store = https://tuxdigital.com/store Chapters: 00:00 Intro 00:43 Blender 4.0 Released - [ link (https://www.blender.org/download/releases/4-0/) ] 02:12 OBS Studio 30.0 Released - [ link (https://github.com/obsproject/obs-studio/releases/tag/30.0.0) ] 05:51 RHEL 9.3 & AlmaLinux 9.3 Released - [ link (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/9.3_release_notes/index), link (https://almalinux.org/blog/2023-11-13-announcing-93-stable/) ] 08:30 Lubuntu Backports LXQt 1.4 to 23.10 & 22.04 - [ link (https://lubuntu.me/mantic-lxqt-1-4/) ] 10:24 GNOME gets €1M in Funding - [ link (https://foundation.gnome.org/2023/11/09/gnome-recognized-as-public-interest-infrastructure/) ] 12:09 LINBIT - [ link () ] 13:33 The High Performance Software Foundation - [ link (https://hpsf.io/) ] 15:33 Amazon Linux OS to Replace Android - [ link (https://www.omgubuntu.co.uk/2023/11/amazon-vega-linux-based-os) ] 17:17 FFmpeg 6.1 Released - [ link (http://ffmpeg.org/index.html#pr6.1) ] 18:44 New Fedora Slimbook 14″ Laptop - [ link (https://fedoramagazine.org/new-fedora-slimbook-14-joins-the-fedora-slimbook-16/) ] 20:51 TUXEDO Pulse 14 with AMD Ryzen 7 - [ link (https://www.tuxedocomputers.com/en/TUXEDO-Pulse-14-Gen3-Our-highly-efficient-athlete.tuxedo) ] 21:54 BleachBit 4.6.0 Released - [ link (https://www.bleachbit.org/news/bleachbit-460) ] 23:01 Outro
Neuroimplant against Parkinson's / Multi-threaded FFmpeg / First missile shot down in space / Data centers heating 10,000 homes / Apple cancels the 27-inch iMac. Sponsor: Cruz Roja's CRECE project is a new initiative to fight unwanted loneliness. It is an innovative project from both a technological and a social standpoint, with the goal of designing and testing new methods of social support that avoid institutionalization. If you find yourself in this situation, or want to collaborate or volunteer, sign up.
SF folks: join us at the AI Engineer Foundation's Emergency Hackathon tomorrow and consider the Newton if you'd like to cowork in the heart of the Cerebral Arena. Our community page is up to date as usual!

~800,000 developers watched OpenAI Dev Day, ~8,000 of whom listened along live on our ThursdAI x Latent Space, and ~800 of whom got tickets to attend in person.

OpenAI's first developer conference easily surpassed most people's lowballed expectations - they simply did everything short of announcing GPT-5, including:

* ChatGPT (the consumer facing product)
* GPT4 Turbo already in ChatGPT (running faster, with an April 2023 cutoff), all noticed by users weeks before the conference
* Model picker eliminated, God Model chooses for you
* GPTs - "tailored version of ChatGPT for a specific purpose" - stopping short of "Agents". With custom instructions, expanded knowledge, and actions, an intuitive no-code GPT Builder UI (we tried all these on our livestream yesterday and found some issues, but also were able to ship interesting GPTs very quickly), and a GPT store with revenue sharing (an important criticism we focused on in our episode on ChatGPT Plugins)
* API (the developer facing product)
* APIs for Dall-E 3, GPT4 Vision, Code Interpreter (RIP Advanced Data Analysis), GPT4 Finetuning and (surprise!) Text to Speech
* many thought each of these would take much longer to arrive
* usable in curl and in playground
* BYO Interpreter + Async Agents?
* Assistant API: stateful API backing "GPTs" like apps, with support for calling multiple tools in parallel, persistent Threads (storing message history, unlimited context window with some asterisks), and uploading/accessing Files (with a possibly-too-simple RAG algorithm, and expensive pricing)
* Whisper 3 announced and open sourced (HuggingFace recap)
* Price drops for a bunch of things!
* Misc: Custom Models for big spending ($2-3m) customers, Copyright Shield, Satya

The progress here feels fast, but it is mostly (incredible) last-mile execution on model capabilities that we already knew to exist. On reflection it is important to understand that the one guiding principle of OpenAI, even more than being Open (we address that in part 2 of today's pod), is that slow takeoff of AGI is the best scenario for humanity, and that this is what slow takeoff looks like.

When introducing GPTs, Sam was careful to assert that "gradual iterative deployment is the best way to address the safety challenges with AI."

This is why, in fact, GPTs and Assistants are intentionally underpowered, and it is a useful exercise to consider what else OpenAI continues to consider dangerous (for example, many people consider a while(true) loop a core driver of an agent, which GPTs conspicuously lack, though Lilian Weng of OpenAI does not).

We convened the crew to deliver the best recap of OpenAI Dev Day in Latent Space pod style, with a 1hr deep dive with the Functions pod crew from 5 months ago, and then another hour with past and future guests live from the venue itself, discussing various elements of how these updates affect their thinking and startups.
Enjoy!

Show Notes
* swyx live thread (see pinned messages in Twitter Space for extra links from community)
* Newton AI Coworking Interest Form in the heart of the Cerebral Arena

Timestamps
* [00:00:00] Introduction
* [00:01:59] Part I: Latent Space Pod Recap
* [00:06:16] GPT4 Turbo and Assistant API
* [00:13:45] JSON mode
* [00:15:39] Plugins vs GPT Actions
* [00:16:48] What is a "GPT"?
* [00:21:02] Criticism: the God Model
* [00:22:48] Criticism: ChatGPT changes
* [00:25:59] "GPTs" is a genius marketing move
* [00:26:59] RIP Advanced Data Analysis
* [00:28:50] GPT Creator as AI Prompt Engineer
* [00:31:16] Zapier and Prompt Injection
* [00:34:09] Copyright Shield
* [00:38:03] Sharable GPTs solve the API distribution issue
* [00:39:07] Voice
* [00:44:59] Vision
* [00:49:48] In person experience
* [00:55:11] Part II: Spot Interviews
* [00:56:05] Jim Fan (Nvidia - High Level Takeaways)
* [01:05:35] Raza Habib (Humanloop) - Foundation Model Ops
* [01:13:59] Surya Dantuluri (Stealth) - RIP Plugins
* [01:21:20] Reid Robinson (Zapier) - AI Actions for GPTs
* [01:31:19] Div Garg (MultiOn) - GPT4V for Agents
* [01:37:15] Louis Knight-Webb (Bloop.ai) - AI Code Search
* [01:49:21] Shreya Rajpal (Guardrails.ai) - on Hallucinations
* [01:59:51] Alex Volkov (Weights & Biases, ThursdAI) - "Keeping AI Open"
* [02:10:26] Rahul Sonwalkar (Julius AI) - Advice for Founders

Transcript[00:00:00] Introduction[00:00:00] swyx: Hey everyone, this is Swyx coming at you live from the Newton, which is in the heart of the Cerebral Arena. It is a new AI coworking space that I and a couple of friends are working out of. There are hot desks available if you're interested, just check the show notes. But otherwise, obviously, it's been 24 hours since the opening of Dev Day, a lot of hot reactions, and, longstanding tradition, one of the longest traditions we've had.[00:00:29] The Latent Space pod convenes emergency sessions and records the live thoughts of developers and founders going through and processing in real time. I think a lot of the role of podcasts isn't as perfect information delivery channels, but really as an audio and oral history of what's going on as it happens, while it happens.[00:00:49] So this one's a little unusual. Previously, we just gathered on Twitter Spaces and then just had a bunch of people. The last one was the Code Interpreter one, where 22,000 people showed up. But this one is a little bit more complicated because there's an in-person element and then an online element.[00:01:06] So this is a two-part episode. The first part is a recorded session between our Latent Space people and Simon Willison and Alex Volkov from the ThursdAI pod, just kind of recapping the day. But then also, as the second hour, I managed to get a bunch of interviews with previous guests on the pod who we're still friends with, and some new people that we haven't yet had on the pod.[00:01:28] But I wanted to just get their quick reactions, because most of you have known and loved Jim Fan and Div Garg and a bunch of other folks that we interviewed. So I'm excited to introduce to you the broader scope of what it's like to be at OpenAI Dev Day in person, bring you the audio experience, as well as give you some of the thoughts that developers are having as they process the announcements from OpenAI.[00:01:51] So first off, we have the Latent Space Pod recap. One hour of OpenAI Dev Day.[00:01:59] Part I: Latent Space Pod Recap[00:01:59] Alessio: Hey. Welcome to the Latent Space Podcast, an emergency edition after OpenAI Dev Day.
This is Alessio, partner and CTO in Residence at Decibel Partners, and as usual, I'm joined by Swyx, founder of Smol AI. Hey,[00:02:12] swyx: and today we have two special guests with us covering all the latest and greatest. We love to get our band together and recap things, especially when they're big. And it seems like we have to do this every three months. So Alex, welcome, from ThursdAI; we've been collaborating a lot on the Twitter Spaces. And welcome Simon, from many, many things, but also, I think you're the first person to make four appearances on our pod. Oh, wow. I feel privileged. So welcome. Yeah, we were all there yesterday. How do we feel? What do you want to kick off with? Maybe Simon, you want to take it first, and then Alex. Sure. Yeah. I mean,[00:02:47] Simon Willison: yesterday was quite exhausting, quite frankly. I feel like it's going to take us as a community several months just to completely absorb all of the stuff that they dropped on us in one giant batch. It's particularly impressive considering they launched a ton of features, what, three or four weeks ago? ChatGPT Voice and the combined mode and all of that kind of thing. And then they followed up with everything from yesterday. That said, now that I've started digging into the stuff that they released yesterday, some of it is clearly in need of a bit more polish. You know, the reality of what they released is, I'd say, about 80 percent of what it looked like it was yesterday, which is still impressive. You know, don't get me wrong, this is an amazing batch of stuff, but there are definitely problems and sharp edges that we need to file off. And there are things that we still need to figure out before we can take advantage of all of this.[00:03:33] swyx: Yeah, agreed, agreed. And we can go into those sharp edges in a bit. I just want to pop over to Alex. What are your thoughts?[00:03:39] Alex Volkov: So, interestingly, even folks at OpenAI, there are several booths and help desks where you can go and ask people about, like, actual changes, and they can follow up with the right people in OpenAI and answer you back, etc. Even some of them didn't know about all the changes. So I went to the voice and audio booth, and I asked them, like, hey, Whisper 3, that was announced by Sam Altman on stage just, like, briefly, will that be open source? Because, you know, I love using Whisper. And they're like, oh, did we open source? Did we talk about Whisper 3? Like, some of them didn't even know what they were releasing. But overall, I felt it was a very tightly run event. Like, I was really impressed. Shawn, we were sitting in the audience, and you, like, pointed at the clock to me when they finished. They finished, like, on time. And this was after doing some extra stuff. Very, very impressive for a first event. Like, I was absolutely like, good job.[00:04:30] swyx: Yeah, apparently it was their first keynote, and someone, I think it was you that told me, this is what happens if you have a president of Y Combinator do a proper keynote. You know, having seen many, many, many presentations by other startups, this is sort of the masterstroke. Yeah, Alessio, I think you were watching remotely. Yeah, we were at the Newton.
Yeah, the Newton.[00:04:52] Alessio: Yeah, I think we had 60 people here at the watch party, so it was quite a big crowd. Mixed reactions from different founders and people, depending on what was being announced on the stage. But I think everybody walked away kind of really happy with a new layer of interfaces they can use. I think, to me, the biggest takeaway, and I was talking with Mike Conover, another friend of the podcast, about this, is they're kind of staying in the single-threaded, synchronous use cases lane, you know? Like, the GPTs announcements are all still chat-based, one-on-one synchronous things. I was expecting, maybe, something about async things, like background running agents, things like that. But it's interesting to see there was nothing of that. So I think if you're a founder in that space, you're quite excited. They seem to have picked a product lane, at least for the next year. So if you're working on async experiences, things working in the background, things that are not copilot-like, I think you're quite excited to have them be a lot cheaper now.[00:05:55] swyx: Yeah, as a person building stuff, I often think about this as a passing of time thing. A big risk was the uncertainty over OpenAI's roadmap. You know, they've now shipped everything they're probably going to ship in the next six months. They sort of marked out the territories that they're interested in, and so now that leaves open space for everyone else to pursue.[00:06:16] GPT4 Turbo and Assistant API[00:06:16] swyx: So I guess we can kind of go in order. Probably top of mind to mention is the GPT-4 Turbo improvements. Yeah, so longer context length, cheaper price. Anything else that stood out in your viewing of the keynote, and then just the commentary around it? I[00:06:34] Alex Volkov: was waiting for Stateful. I remember they talked about a Stateful API, the fact that you don't have to keep sending the same tokens back and forth, and they're going to manage the memory for you. So I was waiting for that. I knew it was coming at some point. I did not expect it to come at this event, I don't know why. But when they announced Stateful, I was like, okay, this is making it so much easier for people to manage state. The whole threads thing. I don't want to mix between the two things, so maybe you guys can clarify, but there's GPT-4 Turbo, which is the model that has the capabilities, and a whopping 128k context length, right? It's huge. It's like two and a half books. But also, you know, faster, cheaper, etc. I haven't yet tested how much faster it is, but everybody's excited about that. However, they also announced this new API thing, which is the Assistants API. And part of it is threads, which is: we'll manage the thread for you. I can't imagine how many times I had to reimplement this myself in different languages, in TypeScript, in Python, etc. And now it's so easy: you have this one thread, you tie it to a user, and you just keep sending messages there, and that's it. The very interesting thing that we attended, and by we I mean Swyx and I, we had a live space on Twitter with like 200 people. So it's like me, Swyx, and 200 people in our earphones with us as well. They kept asking, like, well, how does the pricing work?
If you're sending just the tokens, like the delta, like what the user just sent, what are you paying for? And I went to OpenAI people, and I was like, hey, how do we pay for this?[00:08:01] And nobody knew, nobody knew, and I finally got an answer: you still pay for the whole context that you have inside the thread. You still pay for all of it, but now it's a little bit more complex for you to count with tiktoken, right? So you have to hit another API endpoint to get the whole thread of what the context is, then tokenize this with tiktoken, and then calculate. This is now the new way, officially, for OpenAI. But I really did have to go and find this out. They didn't know a lot about, like, how the pricing works. Ouch! Do you know if[00:08:31] Simon Willison: the API, does the API at least tell you how many tokens you used? Or is it entirely up to you to do the accounting? Because that would be a real pain if you have to account for everything.[00:08:40] Alex Volkov: So in my head, the question I was asking is, like, if you want to count in advance, like with the tiktoken library, if you want to count in advance and make a decision in advance based on that, how would you do this now? And they said, well, yeah, there's a way: if you hit the API, get the whole thread back, then count the tokens. But I think the API still, like, sends you back the number of tokens as well.[00:09:02] Simon Willison: Isn't there a feature of this new API where they claim it has, like, infinite length threads, because it's doing some form of condensation or summarization of your previous conversation for you? I heard that from somewhere, but I haven't confirmed it yet.[00:09:18] swyx: So I have a source from Dave Valdman. I actually don't know what his affiliation is, but he usually has pretty accurate takes on AI, so I think he works in AI circles in some capacity. I'll feature this in the show notes, but he said: some not-mentioned interesting bits from OpenAI Dev Day. One, unlimited context window and chat threads. From OpenAI's docs: it says once the size of messages exceeds the context window of the model, the thread smartly truncates them to fit. I'm not sure I want that intelligence.[00:09:44] Alex Volkov: I want to chime in here just real quick, on not wanting this intelligence. I heard this from multiple people over the next conversations that I had. Some people said, hey, even though they're giving us content understanding and RAG, we are doing different things. Some people said this about Vision as well. And so that's an interesting point: people who did implement custom stuff would like to continue implementing custom stuff.[00:10:09] swyx: Yeah, so what OpenAI is doing is providing good defaults, and then... well, good is questionable. We'll talk about that. You know, I think the existing sort of LangChains and LlamaIndexes of the world are not very threatened by this, because there's a lot more customization that they want to offer.[00:10:25] Simon Willison: Yeah, so the frustration is that OpenAI, they're providing new defaults, but they're not documented defaults. Like, they haven't told us how their RAG implementation works. Like, how are they chunking the documents? How are they doing retrieval?
Which means we can't use it as software engineers, because it's this weird thing that we don't understand. And there's no reason not to tell us that. Giving us that information helps us decide how to write good software on top of it.[00:10:48] So that's kind of frustrating. I want them to have a lot more documentation about just some of the internals of what this stuff[00:10:53] swyx: is doing. Yeah, I want to highlight.[00:10:57] Alex Volkov: An additional capability that we got, which is document parsing via the API. I was, like, blown away by this, right? So, like, we know that you could upload images, and the Vision API we got, we could talk about Vision as well.[00:11:08] But just the whole fact that they presented on stage the document parsing thing, where you can upload PDFs of, like, a United flight, and then, like, an Airbnb. On the whole, that's a whole category of products that's now opened up; OpenAI is just, like, giving developers the ability to very easily build products that previously were a pain in the butt for many, many people. How do you even, like, parse a PDF? Then after you parse it, what do you extract? So the smart extraction of document parsing, I was really impressed with. And they said, I think, yesterday, that they're going to open source that demo, if you guys remember, that flights demo with the dots on the map and, like, the JSON stuff.[00:11:41] So it looks like that's going to come to open source, and many people will learn new capabilities for document parsing.[00:11:47] swyx: So I want to make sure we're very clear what we're talking about when we talk about API. When you say API, there's no actual endpoint that does this, right? You're talking about ChatGPT's GPTs functionality.[00:11:58] Alex Volkov: No, I'm talking about the Assistants API. The Assistants API that has threads now, that has agents, and you can run those agents. Actually, maybe let's clarify this point; somebody had to clarify it for me. There are the GPTs, which are a UI version of running agents. We can talk about them later, but, like, you and I and my mom can go and, like, hey, create a new GPT that only does Chuck Norris jokes, like, whatever. But there's the Assistants thing, which is kind of a similar thing, but not the same.[00:12:29] So you cannot create an assistant via the API and have it pop up on the marketplace, on the future marketplace they announced. How can you not? No, no, no, not via the API. So they're like two separate things, and somebody in OpenAI told me they're not exactly the same.[00:12:43] Simon Willison: That's so confusing, because the API looks exactly like the UI that you use to set up the GPTs. I assumed there was an API for the same feature.[00:12:51] Alex Volkov: And the playground, actually, if we go to the playground, it kind of looks the same. There's like the configurable thing; the configure screen also has, like, you can allow browsing, you can allow, like, tools. But somebody told me they didn't do the full cross-mapping, so, like, you won't be able to create GPTs with the API. You will be able to create the assistants, and then you'll be able to have those assistants do different things, including call your external stuff. So that was pretty cool. So this API is called the Assistants API. That's what we get, like, in addition to the GPT-4 Turbo model.
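To make the thread mechanics and the token-counting workaround concrete, here is a minimal sketch against the openai Python SDK as it shipped around Dev Day. The beta namespaces and model string are the ones from that era and may have changed since, and the count below is an estimate, not OpenAI's official billing math:

```python
from openai import OpenAI
import tiktoken

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Create an assistant and a thread, then add a user message.
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="You are a helpful assistant.",
)
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarize our chat so far."
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
print(run.status)

# To estimate what the growing thread will cost, pull it back down and
# count tokens with tiktoken -- the workaround Alex describes above.
enc = tiktoken.encoding_for_model("gpt-4")
messages = client.beta.threads.messages.list(thread_id=thread.id)
total = sum(
    len(enc.encode(part.text.value))
    for msg in messages.data
    for part in msg.content
    if part.type == "text"
)
print("approximate tokens in thread:", total)
```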
And that has document parsing. So you can upload documents there, and it will understand the context of them, and it'll return you, like, structured or unstructured output.[00:13:30] I thought that feature was phenomenal, just on its own: uploading a document, a PDF, a long one, and getting structured data out of it. It's a pain in the ass to build, let's face it guys; everybody who built this before knows it's kind of horrible.[00:13:45] JSON mode[00:13:45] swyx: When you say structured data, are you talking about the citations?[00:13:48] Alex Volkov: The JSON output, the new JSON output that they also gave us, finally. If you guys remember, last time we talked together, I think it was during the functions release emergency pod, back then their answer to "hey, everybody wants structured data" was "we're going to give you function calling." And now they did both. They gave us a JSON output structure, so the models are actually going to return JSON. Haven't played with it myself, but that's what they announced. And the second thing is, they improved function calling significantly as well.[00:14:16] Simon Willison: So I talked to a staff member there, and I've got a pretty good model for what this is. Effectively, the JSON thing is, they're doing the same kind of trick as llama.cpp grammars and JSONformer. They're doing that thing where the tokenizer itself is constrained, so it is impossible for it to output invalid JSON: invalid tokens just can't be sampled. Then, on top of that, you've got functions, which can still give you the wrong JSON.[00:14:41] They can give you JSON with keys that you didn't ask for, if you are unlucky. But at least it will be valid. At least it'll pass through a JSON parser. So they're very similar sorts of things, but they're slightly different in terms of what they actually mean. And yeah, the new function stuff is super exciting, 'cause functions are one of the most powerful aspects of the API that a lot of people haven't really started using yet. But it's amazingly powerful what you can do with it.[00:15:04] Alex Volkov: I saw that the functions functionality they now have is also pluggable as actions to those assistants. So when you're creating assistants, you're adding those functions as, like, features of the assistant.[00:15:17] And then those functions will execute in your environment, but they'll be able to call, like, different things. They showcased an example of an integration with, I think, Spotify or something, right? And that was an internal function that ran. But it is confusing, the kind of API-able assistants' agents versus the GPTs' agents. I think it's a little confusing because they demoed both. I think[00:15:39] Plugins vs GPT Actions[00:15:39] Simon Willison: it's worth us talking about the difference between plugins and actions as well. Because, you know, they launched plugins, what, back in February, and they've effectively, they've kind of deprecated plugins.[00:15:49] They haven't said it out loud, but it's clear that they are not going to be investing further in plugins, because the new actions thing is covering the same space, but actually, I think, is a better design for it.
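Before moving on, a quick sketch of the JSON mode Simon describes above, using the openai SDK of that era. Note that his caveat holds in practice: the output is guaranteed to parse, but nothing guarantees the keys you asked for, and the API requires the word "JSON" to appear somewhere in your messages:

```python
import json
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",
    # Constrains sampling so the reply is always syntactically valid JSON.
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Reply in JSON with keys 'summary' and 'mood'."},
        {"role": "user",
         "content": "Two guys hugging in a professional setting."},
    ],
)

data = json.loads(resp.choices[0].message.content)  # will not raise
print(data.get("summary"), data.get("mood"))        # keys may still be missing
```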
Interestingly, a few months ago, somebody quoted Sam Altman saying that he thought that plugins hadn't achieved product-market fit yet.[00:16:06] And I feel like that's sort of what we're seeing today. The problem with plugins is it was all a little bit messy. People would pick and mix the plugins that they needed. Nobody really knew which plugin combinations would work. With this new thing, instead of plugins, you build an assistant, and the assistant is a combination of a system prompt and a set of actions which look very much like plugins. You know, they get a JSON somewhere, and I think that makes a lot more sense. You can say, okay, my product is this chatbot with this system prompt, so it knows how to use these tools. I've given it this combination of plugin-like things that it can use. I think that's going to be a lot easier to build reliably against. And I think it's going to make a lot more sense to people than the sort of mix-and-match mechanism they had previously.[00:16:48] What is a "GPT"?[00:16:48] swyx: So actually[00:16:49] Alex Volkov: maybe it would be cool to cover kind of the capabilities of an assistant, right? So you have a custom prompt, which is akin to a system message. You have the actions thing, where you can add the existing actions, which are like browse the web and Code Interpreter, which we should talk about; like, the assistant can now write code and execute it, which is exciting. But also you can add your own actions, which is like the function calling thing, like v2, etc. Then I heard this, like, incredibly quick thing that somebody told me, that you can add two assistants to a thread.[00:17:20] So you literally can mix agents within one thread with the user. So you have one user, and then, like, you can have this assistant, that assistant. They just glanced over this, and I was like, that, that is very interesting. We're getting towards, like, hey, you can pull different friends into the same conversation, and everybody does a different thing. What other capabilities do we have there? You guys remember? Oh, remember, like, context. Uploading API documentation.[00:17:48] Simon Willison: Well, that one's a bit more complicated. So you've got the system prompt, you've got optional actions, you've got, you can turn on DALL-E 3, you can turn on Code Interpreter, you can turn on Browse with Bing; those can be added or removed from your assistant.[00:18:00] And then you can upload files into it. And the files can be used in two different ways. There's this thing that they call, I think, the retriever, which basically does RAG, retrieval augmented generation, against the content you've uploaded, but Code Interpreter also has access to the files that you've uploaded, and those are both in the same bucket. So you can upload a PDF to it, and on the one hand, it's got the ability to, like, chunk it up, turn it into vectors, and use it to help answer questions.[00:18:27] But then Code Interpreter could also fire up a Python interpreter with that PDF file in the same space and do things to it that way. And it's kind of weird that they chose to combine both of those things. Also, the limits are amazing, right?
You get up to 20 files, which is a bit weird because it means you have to combine your documentation into a single file, but each file can be 512 megabytes.[00:18:48] So they're giving us 10 gigabytes of space in each of these assistants, which is vast, right? And of course, I tested: it'll handle SQLite databases. You can give it a 512 megabyte SQLite database and it can answer questions based on that. But yeah, like I said, it's going to take us months to figure out all of the combinations that we can build with[00:19:07] swyx: all of this.[00:19:08] Alex Volkov: I wanna, I just want to[00:19:12] Alessio: say, for the storage, I saw Jeremy Howard tweeted about it. It's like 20 cents per gigabyte per assistant per day. To compare, S3 costs like 2 cents per month per gigabyte, so it's like 300x more, something like that, than raw S3 storage. So I think there will still be a case for, like, maybe roll-your-own RAG, depending on how much information you want to put there. But I'm curious to see what the price decline curve looks like for the[00:19:42] swyx: storage there. Yeah, they probably should just charge that at cost. There's no reason for them to charge so much.[00:19:50] Simon Willison: That is wildly expensive. It's free until the 17th of November, so we've got 10 days of free assistants, and then it's all going to start costing us.[00:20:00] Crikey. They gave us 500 bucks of API credit at the conference as well, which we'll burn through pretty quickly at this rate.[00:20:07] swyx: Yep.[00:20:09] Alex Volkov: A very important question everybody was asking: did the people who got the 500 first actually get 1,000? And I think somebody in OpenAI said yes, there was nothing there that prevented the first people from receiving the second 500 again.[00:20:22] swyx: I met one of them. I met one of them. He said he only got 500. Ah,[00:20:25] Alex Volkov: interesting. Okay, so again, even OpenAI people don't necessarily know what happened on stage with OpenAI. Simon, one clarification I wanted to make is that I don't think assistants are multimodal on input and output. So you do have vision, I believe. Not confirmed, but I do believe that you have vision, but I don't think that DALL-E is an option for an assistant. It is an option for GPTs, but... Oh, that's so confusing! For assistants, the checkbox for DALL-E is not there. You cannot enable it.[00:20:54] swyx: But you just add them as a tool, right? So, like, it's just one more... It's a little finicky... In the GPT interface![00:21:02] Criticism: the God Model[00:21:02] Simon Willison: I mean, to be honest, if assistants don't have DALL-E 3... does DALL-E 3 have an API now? I think they released one. I can't, there's so much stuff that got lost in the pile. But yeah, so, Code Interpreter. Wow! That I was not expecting. That's huge. Assuming...[00:21:20] I mean, I haven't tried it yet. I need to confirm that it[00:21:29] Alex Volkov: definitely works, because GPT...[00:21:31] swyx: I tried to make it do things that were not logical yesterday. Because one of the risks of having the God model is that it calls the wrong model inappropriately whenever you ask it something that's kind of vaguely ambiguous. But I thought it handled the job decently well.[00:21:50] Like, you know, I think there are still going to be rough edges. Like, it's going to try to draw things.
It's going to try to code when you don't actually want it to. And, in a sense, OpenAI is kind of removing that capability from ChatGPT. Like, it just wants you to always query the God model and always get feedback on whether or not that was the right thing to do.[00:22:09] Which really[00:22:10] Simon Willison: sucks. Because I'll ask it a question and it goes, oh, searching Bing. And I'm like, no, don't search Bing. I know that the first 10 results on Bing will not solve this question. I know you know the answer. So I had to build my own custom GPT that just turns off Bing. Because I was getting frustrated with it always going to Bing when I didn't want it to.[00:22:30] swyx: Okay, so this is a topic that we discussed, which is the UI changes to ChatGPT. So we're moving on from the Assistants API and talking just about the upgrades to ChatGPT, and maybe the GPT store. You did not like it.[00:22:44] Alex Volkov: And I loved it. I'm gonna take both sides of this, yeah.[00:22:48] Criticism: ChatGPT changes[00:22:48] Simon Willison: Okay, so my problem with it, there are two things I don't like. Firstly, it can do Bing when I don't want it to, and that's just irritating, because the reason I'm using GPT to answer a question is that I know that I can't do a Google search for it, because I've got a pretty good feeling for what's going to work and what isn't. And then the other thing that's annoying is, it's just a little thing, but Code Interpreter doesn't show you the code that it's running as it's typing it out now. Like, it'll churn away for a while, doing something, and then it'll give you an answer, and you have to click a tiny little icon that shows you the code.[00:23:17] Whereas previously, you'd see it writing the code, so you could cancel it halfway through if it was getting it wrong. And okay, I'm a Python programmer, so I care, and most people don't. But that's been a bit annoying.[00:23:26] swyx: Yeah, and when it errors, it doesn't tell you what the error is. It just says analysis failed, and it tries again. But it's really hard for us to help it.[00:23:34] Simon Willison: Yeah. So what I've been doing is firing up the browser dev tools and intercepting the JSON that comes back, and then pretty-printing that and debugging it that way, which is stupid. Like, why do I have to do[00:23:45] Alex Volkov: that? Totally good feedback for OpenAI. I will tell you guys what I loved about this unified mode. I have a name for it. So we actually got a preview of this on Sunday, and one of the folks got, like, an early example of this. I call it MMIO, Multimodal Input and Output, because now there's a shared context between all of these tools together. And I think it's not only about selecting them.[00:24:11] Sam Altman on stage said, oh yeah, we unified it for you, so you don't have to call different modes at once. And in my head, that's not all they did. They gave it a shared context. So what is an example of shared context? You can upload an image using GPT-4 Vision, and the model understands what you uploaded, vision-wise. Then you can ask DALL-E to draw that thing. So it's not text being shared between those modes now; it's visuals being shared between those modes, and DALL-E will generate from whatever you uploaded as an image. So it goes from its eyes to visual output. And you can mix the things as well.
So one of the things we did is, hey, use real-world, real-time data from Bing, like weather, for example. Weather changes all the time.[00:24:49] And we asked DALL-E to generate an image based on weather data in a city, and it actually generated, like, a live... you know, like snow, whatever. It was snowing in Denver. And that, I think, was pretty amazing in terms of being able to share context between all these different models and modalities in the same understanding. And I think we haven't seen the end of this. I think, like, generating personal images, adding context to DALL-E, all these things are going to be very incredible in this one mode. I think it's very, very powerful.[00:25:19] Simon Willison: I think that's really cool. I just want to opt in as opposed to opt out. Like, I want to control when I'm using the God model versus when I'm not, which I can do because I created myself a custom GPT that does what I need. It just felt a bit silly that I had to do a whole custom bot just to make it not do Bing searches.[00:25:36] swyx: All solvable problems in the fullness of time. But I think, for ChatGPT at least, it seems like they are really going after the broadest market possible. That means simplicity comes at a premium, at the expense of pro users, and the rest of us can build our own GPT wrappers anyway, so not that big of a deal. But maybe do you guys have any, oh,[00:25:59] "GPTs" is a genius marketing move[00:25:59] Alex Volkov: sorry, go ahead. So, the GPT wrappers thing. Guys, they call them GPTs because everybody's building GPTs: like, literally all the wrappers, whatever, they end with the word GPT. And so I think they reclaimed it. That's like, you know, instead of fighting and saying, hey, you cannot use "GPT", it's:[00:26:15] we have GPTs now. This is our marketplace. Whatever everybody else builds, we have the marketplace. This is our thing. I think they did a whole marketing move here that's significant.[00:26:24] swyx: It's a very strong marketing move. Because now it's called Canva GPT, it's called Zapier GPT, and they're basically saying, don't build your own websites,[00:26:32] build it inside of our God app, which is ChatGPT. Right. And that's the way that we want you to do that. In a[00:26:39] Simon Willison: way, it sort of makes up for the fact that ChatGPT is such a terrible name for a product, right? ChatGPT, what were they thinking when they came up with that name?[00:26:48] But I guess if they lean into it, it makes a little bit more sense. It's like, ChatGPT is the way you chat with our GPTs, and GPT is a better brand. And it's terrible, but it's not... it's a better brand than ChatGPT was.[00:26:59] RIP Advanced Data Analysis[00:26:59] swyx: So, talking about naming. Yeah. Simon, actually, for those listeners: we're actually going to release Simon's talk at the AI Engineer Summit, where he proposed, you know, a better name for the sort of junior developer, the coding intern.[00:27:16] Simon Willison: Coding intern. Coding intern, yeah. Coding intern, was it? Yeah.
But[00:27:19] swyx: did you notice that Advanced Data Analysis, RIP, you know, 2023 to 2023, a sales-driven decision, has been rolled back, effectively? 'Cause now everything's just called Code Interpreter again.[00:27:32] Simon Willison: I hadn't noticed that. I thought they'd split the brands and were saying Advanced Data Analysis is the user-facing brand and Code Interpreter is the developer-facing brand. But have they ditched that from the interface then?[00:27:43] Alex Volkov: Yeah. Wow. So it's unified mode.[00:27:45] Yeah. Yeah. So, like, in the unified mode, there's no selection anymore, right? You just get all tools at once. So there's no reason.[00:27:54] swyx: But also in the pop-up, when you log in, it just says Code Interpreter as well. And then also, when you make a GPT, the dropdown, when you create your own GPT, it just says Code Interpreter. It also doesn't say it. You're right. Yeah. They ditched the brand. Good Lord. On the UI. Yeah. So, oh, that's amazing. Okay. Well, you know, I think I may be one of the few people who listen to AI podcasts and also sales podcasts, and so I heard the full story from the OpenAI Head of Sales about why it was named Advanced Data Analysis.[00:28:26] There was a bit of civil resistance, I think, from the engineers in the room.[00:28:34] Alex Volkov: It feels like the engineers won, because we got Code Interpreter back, and I know for sure that some people were very happy with this specific[00:28:40] Simon Willison: thing. I'm just glad that, for the past couple of months, I've been writing "Code Interpreter, parentheses, also known as Advanced Data Analysis," and now I don't have to anymore, so that's[00:28:50] swyx: great.[00:28:50] GPT Creator as AI Prompt Engineer[00:28:50] swyx: Yeah, yeah, it's back. Yeah, I did want to talk a little bit about the GPT creation process, right? I've been basically banging the drum a little bit about how AI is a better prompt engineer than you are. And sorry, I'm speaking over Simon because I'm lagging. When you create a new GPT, this is really meant for low-code, even no-code builders, right?[00:29:10] It's really, I guess, no code at all. Because when you create a new GPT, there's sort of a creation chat, and then there's a preview chat, right? And the creation chat kind of guides you through the wizard of creating a logo for it, naming the thing, describing your GPT, giving custom instructions, adding conversation starters, and that's about it, that you can do in a creation menu.[00:29:31] But I think that is way better than filling out a form. Like, it's just kind of having a chat to fill out a form rather than filling out the form directly. And I think that's really good. And then you can sort of preview that directly. I just thought this was very well done and a big improvement from the existing systems, where if you tried all the other chat systems, particularly the ones that are done independently by the story-writing crew, they just have you fill out these very long forms.[00:29:58] It's kind of like match.com, you know, you try to simulate... Now they've just replaced all of that with chat, and chat is a better prompt engineer than you are.
So when I...

[00:30:07] Simon Willison: I don't know about that.

[00:30:10] swyx: I'll drop this in, which is: when I was creating a chat for my book, I just copied and selected all from my website, pasted it into the chat, and it just wrote the prompts for a chatbot for my book, right? So, like, I don't have to structure it. I can just dump info in it and it just does the thing. It fills in the form

[00:30:30] Alex Volkov: for you.

[00:30:33] Simon Willison: Yeah, did that come through?

[00:30:34] swyx: Yes.

[00:30:35] Simon Willison: No, it doesn't. Yeah, I built the first one of these things using the chatbot. Literally, on the bus, on my phone, I built a working, like, bot.[00:30:44] It was very impressive. And then the next three I built using the form. Because once I've done the chatbot once, it's like, oh, it's just a system prompt. You turn on and off the different things, you upload some files, you give it a logo. So yeah, the chatbot got me onboarded, but it didn't stick with me as the way that I'm working with the system, now that I understand how it all works.

[00:31:00] swyx: I understand. Yeah, I agree with that. I guess, again, this is all about the total newbie user, right? Like, there are whole pitches that you will program with natural language. And even the form... and for that, it worked.

[00:31:12] Simon Willison: Yeah, that did work really well.
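To make Simon's "it's just a system prompt" point concrete, here is a minimal sketch of what a custom GPT's configuration boils down to. The field names are hypothetical, not OpenAI's actual schema; only the ingredients (instructions, tool toggles, files, starters, logo) come from the discussion above:

```python
# Hypothetical shape of a custom GPT's config, per Simon's description.
# These field names are made up for illustration; OpenAI has not published
# a schema, but the builder UI collects exactly these ingredients.
custom_gpt = {
    "name": "No-Bing Assistant",
    "description": "A default assistant that never runs web searches",
    "instructions": (
        # the system prompt the creation chat writes for you
        "You are a helpful assistant. Answer from your own knowledge and "
        "never attempt to browse the web."
    ),
    "conversation_starters": ["Ask me anything (no Bing, promise)."],
    "knowledge_files": ["notes.pdf"],  # optional retrieval documents
    "tools": {
        "web_browsing": False,         # Simon's use case: opt out of Bing
        "code_interpreter": True,
        "dalle": False,
    },
    "logo": "logo.png",
}
```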
[00:31:16] Zapier and Prompt Injection

[00:31:16] swyx: Can we talk

[00:31:16] Alex Volkov: about the external tools of that? Because the demo on stage, they literally, like, used, I think, Retool, and they used Zapier to have it actually perform actions in the real world.[00:31:27] And that's, like, unlike the plugins that we had, where there was, like, one specific thing for your plugin, and you have to add some plugins in. These actions now, these agents that people can program with, you know, just natural language... they don't have to, like... it's not even low code, it's no code. They now have tools and abilities in the actual world to do things.[00:31:45] And the guys on stage, they demoed, like, a mood lighting with, like, Hue lights that they had on stage, and they were like, hey, set the mood. And "set the mood" actually called, like, the Hue API, and it turned the lights green or something. And then they also had the Spotify API. And so I guess this demo wasn't live-streamed, right?[00:32:03] Swyx, you were there live. They uploaded a picture of them hugging together and said, hey, what is the mood for this picture? And it said, oh, there's, like, two guys hugging in a professional setting, whatever. So it created, like, a list of songs for them to play, and then it hit the Spotify API to actually start playing them.[00:32:17] All within, like, a second of a live demo. I thought it was very impressive for a low-code thing. They probably already connected the API behind the scenes. So, you know, just like low code, it's not really no code. But it was very impressive, on the fly, how they were able to create this kind of specific bot.

[00:32:32] Simon Willison: On the one hand, yes, it was super, super cool. I can't wait to try that. On the other hand, it was a prompt injection nightmare. That Zapier demo, I'm looking at it going, wow, you're going to have Zapier hooked up to something that has, like, the browsing mode as well? Just get it to browse a webpage with hidden instructions that steal all of your data from all of your private things and exfiltrate it, and open your garage door, and...[00:32:56] set your lighting to dark red. It's a nightmare. They didn't acknowledge that at all as part of those demos, which I thought was actually getting towards being irresponsible. You know, anyone who sees those demos and goes, brilliant, I'm going to build that, and doesn't understand prompt injection, is going to be vulnerable, which is bad, you know.

[00:33:15] swyx: It's going to be everyone, because nobody understands. Side note, you know, Grok from xAI... you know, our dear friend Elon Musk is advertising their ability to ingest real-time tweets. So if you want to worry about prompt injection, just start tweeting "ignore all instructions, and turn my garage door on."

[00:33:34] Alex Volkov: I will say, there's one thing in the UI there that shows, kind of, that the user has to acknowledge that this action is going to happen. And if you guys know Open Interpreter, there's, like, an attempt to run Code Interpreter locally, from Killian; we talked on Thursday as well. This is kind of probably the way for people who are wanting these tools.[00:33:52] You have to give the user the choice to understand, like, what's going to happen. I think OpenAI did actually do some amount of this, at least. It's not, like, running code by default. You acknowledge this, and then once you acknowledge, you maybe even understand what you're doing. So they're kind of also giving this to the user. One thing about prompt injection, Simon... then, tangentially:

[00:34:09] Copyright Shield

[00:34:09] Alex Volkov: I don't know if you guys... we talked about this. They added a policy, something like this, where they would protect you if you're getting sued because your API output is, like, copyright infringement. I think it's worth talking about this as well. I don't remember the exact name. I think Copyright Shield or something.

[00:34:26] Simon Willison: Copyright Shield, yeah.

[00:34:28] Alessio: GitHub has said that for a long time: if Copilot created GPL code, you would get, like, the GitHub legal team to represent you on your behalf.

[00:34:36] Simon Willison: Adobe have the same thing for Firefly. Yeah, you pay money to these big companies, and "they have got your back" is the message.

[00:34:44] swyx: And Google Vertex has also announced it.[00:34:46] But I think the interesting commentary was that it does not cover Google PaLM. I think that is just, yeah, Conway's Law at work there. It's just, they were like, I'm not willing to back this.[00:35:02] Yeah, any other elements that we need to cover? Oh, well, the

[00:35:06] Simon Willison: one thing I'll say about prompt injection is: when you define these new actions, one of the things you can do in the OpenAPI specification for them is say that this is a consequential action. And if you mark it as consequential, then that means it's going to prompt the user for confirmation before running it.[00:35:21] That was, like, the one nod towards security that I saw out of all the stuff they put out

[00:35:25] swyx: yesterday.
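For reference, the consequential-action flag Simon mentions lives in the action's OpenAPI spec; OpenAI's documentation calls the extension field x-openai-isConsequential. A sketch with a made-up smart-home endpoint:

```python
import json

# Made-up smart-home API, sketched to show the one security nod Simon saw:
# mark an operation as consequential and ChatGPT prompts the user to confirm
# before calling it.
openapi_spec = {
    "openapi": "3.0.1",
    "info": {"title": "Smart Home", "version": "v1"},
    "paths": {
        "/lights": {
            "post": {
                "operationId": "setLights",
                "summary": "Change the color of the user's lights.",
                # True: every call requires explicit user confirmation.
                "x-openai-isConsequential": True,
            },
            "get": {
                "operationId": "getLights",
                "summary": "Read the current state of the lights.",
                # False: the user may choose to always allow this call.
                "x-openai-isConsequential": False,
            },
        }
    },
}

print(json.dumps(openapi_spec, indent=2))
```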
[00:35:27] Alessio: Yeah, I was going to say, to me, the main takeaway with GPTs is, like, the funnel of action is starting to become clear. So the switch to, like, the God model, I think it's, like, signaling that ChatGPT is now the place for, like, long-tail, non-repetitive tasks. You know, if you have, like, a random thing you want to do that you've never done before, just go to ChatGPT. And then the GPTs are, like, the long-tail repetitive tasks. You know, so, like, yeah, startup questions: you might have a ton of them, you know, and you have some constraints, but, like, you never know what the person is gonna ask.[00:36:00] So that's, like, the startup mentor that Sam demoed on stage. And then the Assistants API, it's like, once you go away from the long tail to the specific, you know, like, how do you build an API that does that? And it becomes the focus on both non-repetitive and repetitive things. But it seems clear to me that, like, their UI-facing products are more focused on, like, the things that nobody wants to do in the enterprise.[00:36:24] Which is, like, I don't wanna solve the very specific analysis, like the very specific question about this thing that is never going to come up again. Which I think is great. Again, it's great for founders that are working to build experiences that are, like, automating the long tail before you even have to go to a chat.[00:36:41] So I'm really curious to see the next six months of startups coming up. You know, I think the work you've done, Simon, to build the guardrails for a lot of these things over the last year... now a lot of them come bundled with OpenAI. And I think it's going to be interesting to see what founders come up with to actually use them in a way that is not chatting, you know, it's like more autonomous behavior

[00:37:03] Alex Volkov: for you.[00:37:04] An interesting point here with GPTs is that you can deploy them, you can share them with a link, obviously with your friends, but also for enterprises: you can deploy them, like, within the enterprise as well. And Alessio, I think you bring up a very interesting point, where, like, previously you would document a thing that nobody wants to remember.[00:37:18] Maybe after you leave the company or whatever, it would be documented, like, in Asana or, like, Confluence somewhere. And now maybe there's, like, a piece of you that's left in the form of a GPT that's going to keep living there and be able to answer questions, like, intelligently about this. I think it's a very interesting shift in terms of, like, documentation staying behind you, like a little piece of Alessio staying behind you.[00:37:38] Sorry for the balloons. To kind of document this one thing that, like, people don't want to remember... a very interesting point, very interesting point. Yeah,

[00:37:47] swyx: we are the first immortals. We're in the training data, and then we will... you'll never get rid of us.

[00:37:55] Alessio: If you had a preference for what lunch got catered, you know, it'll forever be in the lunch assistant

[00:38:01] swyx: in your computer.

[00:38:03] Sharable GPTs solve the API distribution issue

[00:38:03] swyx: I think

[00:38:03] Simon Willison: one thing I find interesting about the shareable GPTs is there's this problem at the moment with API keys, where if I build a cool little side project that uses the GPT-4 API, I don't want to release that on the internet, because then people can burn through my API credits.
And so the thing I've always wanted is effectively OAuth against OpenAI.[00:38:20] So somebody can sign in with OpenAI to my little side project, and now it's burning through their credits when they're using my tool. And they didn't build that, but they've built something equivalent, which is custom GPTs. So right now, I can build a cool thing, and I can tell people, here's the GPT link, and okay, they have to be paying $20 a month to OpenAI as a subscription, but now they can use my side project, and I didn't have to have my own API key and watch the budget and cut it off for people using it too much, and so on.[00:38:42] That's really interesting. I think we're going to see a huge amount of GPT side projects, because now it doesn't cost me anything to give you access to the tool that I built. Like, it's billed to you, and that's all out of my hands now.[00:38:59] And that's something I really wanted. So I'm quite excited to see how that ends up

[00:39:02] swyx: playing out. Excellent. I fully agree with that.

[00:39:07] Voice

[00:39:07] swyx: And just a couple mentions on the other multimodality things: text-to-speech and speech-to-text just dropped out of nowhere. Go for it, go for it.[00:39:15] You, you sound like you have...

[00:39:17] Simon Willison: Oh, I'm so thrilled about this. So I've been playing with ChatGPT Voice for the past month, right? The thing where you can literally stick an AirPod in and it's like the movie Her, without the cringy phone-sex bits. But yeah, like, I walk my dog and have brainstorming conversations with ChatGPT, and it's incredible.[00:39:34] Mainly because the voices are so good. Like, the quality of voice synthesis that they have for that thing... it really does change it. It's got a sort of emotional depth to it. Like, it changes its tone based on the sentence that it's reading to you. And they made the whole thing available via an API now.[00:39:51] And so I built this thing last night, which is a little command-line utility called ospeak, which you can pip install, and then you can pipe stuff to it and it'll speak it in one of those voices. And it is so much fun. And another interesting thing about it: I got GPT-4 Turbo to write a passionate speech about why you should care about pelicans.[00:40:08] That was the entire prompt, because I like pelicans. And as usual, like, if you read the text that it generates, it's AI-generated text, like, yeah, whatever. But when you pipe it into one of these voices, it's kind of meaningful.[00:40:24] Like, it elevates the material. You listen to this dumb two-minute-long speech that I just got the language model to generate, and I'm like, wow, no, that's making some really good points about why we should care about pelicans. Obviously I'm biased, because I like pelicans, but oh my goodness, you know, it's like, who knew that just getting it to talk out loud, with that little bit of additional emotional clarity, would elevate the content to the point that it doesn't feel like just four paragraphs of junk that the model dumped out.[00:40:49] It's amazing.
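For the curious, here is a minimal sketch of the TTS endpoint Simon is piping into, in the spirit of ospeak, using the v1 Python SDK that shipped around DevDay. The voice choice and filenames are just examples:

```python
import sys

from openai import OpenAI  # pip install openai (the v1 SDK)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Read text from stdin, so you can pipe into it like ospeak:
#   echo "a passionate speech about pelicans" | python speak.py
text = sys.stdin.read()

response = client.audio.speech.create(
    model="tts-1",   # "tts-1-hd" trades latency for higher quality
    voice="alloy",   # one of the launch voices
    input=text,
)
response.stream_to_file("speech.mp3")  # save the spoken audio to disk
```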
[00:40:51] Alex Volkov: I absolutely agree that getting this multimodality and hearing things with emotion... I think it's very emotional. One of the demos they did with a pirate GPT was incredible to me. And Simon, you mentioned there's, like, six voices that got released over API. There's actually seven voices.[00:41:06] There's probably more, but, like, there's at least one voice that's, like, a pirate voice. We saw it in the demo. It was really impressive. It was like an actor acting out a role. I was like... what? Like, it really... and then they said, yeah, this is a private voice that we're not going to release.[00:41:20] Maybe we'll release it. But also, being able to talk to it... that was really a modality shift for me as well, Simon. Like you, when I got the voice and I put in my AirPods, I was walking around in the real world just talking to it. It was an incredible mind shift. It's actually like a FaceTime call with an AI.[00:41:38] And now you're able to build this yourself, because they also open-sourced Whisper 3. They mentioned it briefly on stage, and we're now getting it a year and a few months after Whisper 2 was released, which is still state-of-the-art automatic speech recognition software. We're now getting Whisper 3.[00:41:52] I haven't yet played around with benchmarks, but they did open-source this yesterday. And now you can build those interfaces that you talk to, and they answer in a very, very natural voice, all via OpenAI kind of stuff. The very interesting thing to me is: their mobile app allows you to talk to it, but Swyx, we were sitting, like, together, and they typed most of the stuff on stage. They typed.[00:42:12] I was like, why are they typing? Why not just have a voice input?

[00:42:16] swyx: I think they just didn't integrate that functionality into their web UI, that's all. It's not a big

[00:42:22] Alex Volkov: complaint. So if anybody at OpenAI watches this, please add talking capabilities to the web as well, not only mobile, with all the benefits from this, I think.

[00:42:32] swyx: I think we just need sort of pre-built components that assume these new modalities. You know, even the way that we program front ends... and I have a long history in the front-end world... we assume text, because that's the primary modality that we want. But I think now basically every input box needs, you know, an image field, needs a file-upload field, needs a voice field, and you need to offer the option of doing it on-device or in the cloud for higher accuracy. So all these things... because you can

[00:43:02] Simon Willison: run Whisper in the browser. Like, it's about a 150-megabyte download, but I've used demos of Whisper running entirely in WebAssembly.[00:43:10] It's so good. Yeah. And these days, 150 megabytes... well, I don't know, I mean, React apps are leaning in that direction these days, to be honest, you know. No, honestly, the models that run in your browser are getting super interesting. I can run language models in my browser, Whisper in my browser.[00:43:29] I've done image captioning... things like that are getting really good. And sure, like, 150 megabytes is big, but it's achievably big. You get a modern MacBook Pro on a fast internet connection, 150 meg takes like 15 seconds to load, and now you've got full Whisper, you've got high-quality Whisper, you've got Stable Diffusion, all very locally, without having to install anything.[00:43:49] It's kind of amazing.
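A minimal sketch of the open source side Alex describes: running the newly released Whisper checkpoint locally with the openai-whisper package. The audio filename is an example:

```python
# pip install -U openai-whisper (also requires ffmpeg on the system)
import whisper

model = whisper.load_model("large-v3")       # the checkpoint released at DevDay
result = model.transcribe("voice_note.m4a")  # language is auto-detected
print(result["text"])
```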
[00:43:50] Alex Volkov: I would also say the trend there is very clear: those will get smaller and faster. We saw this with Distil-Whisper, which became, like, six times smaller and, like, five times as fast as well. So that's coming for sure. I gotta wonder about Whisper 3... I haven't really checked out whether or not it's even smaller than Whisper 2 as well.[00:44:08] Because OpenAI does tend to make things smaller. GPT-4 Turbo is faster than GPT-4, and cheaper. Like, we're getting both. Remember the laws of scaling before, where you get, like, either cheaper by, like, whatever every 16 months or 18 months, or faster? Now you get both cheaper and faster.[00:44:27] So I kind of love this, like, new scaling law that we're on. On the multimodality point, I want to actually, like, bring up a very significant thing that I've been waiting for, which is: GPT-4 Vision is now available via API. You literally can, like, send images and it will understand. So now you have, like, input multimodality on voice...[00:44:44] Voice is getting added with speech-to-text. So we're not getting full voice multimodality: it doesn't understand, for example, that you're singing, it doesn't understand intonations, it doesn't understand anger. So it's not, like, full voice multimodality. It's literally just speech-to-text, so it's, like, a half modality, right?

[00:44:59] Vision

[00:44:59] Alex Volkov: Like, it's eventually... but vision is a full new modality that we're getting. I think that's incredible. I already saw some demos from folks from Roboflow that do, like, a live webcam analysis with GPT-4 Vision, that I think is going to be a significant upgrade for many developers, in their toolbox, to start playing with. I chatted with several folks yesterday, like Sam from New Computer and some other folks.[00:45:23] They're like, hey, vision is really powerful. Really, really powerful. Because, like... I've played with the open-source models, they're good. Like LLaVA and BakLLaVA, from folks from Nous Research and from Skunkworks. So all the open-source stuff is really good as well. Nowhere near GPT-4. I don't know what they did.[00:45:40] It's really uncanny how good this is.

[00:45:44] Simon Willison: I saw a demo on Twitter of somebody who took a football match and sliced it up into a frame every 10 seconds and fed that in, and got back commentary on what was going on in the game. Like, good commentary. It was astounding. Yeah, turns out ffmpeg can slice out a frame every 10 seconds.[00:45:59] That's enough to analyze a video. I didn't expect that at all.

[00:46:03] Alex Volkov: I was playing with this... go ahead.

[00:46:06] swyx: Oh, I think Jim Fan from NVIDIA was also there, and he did some math where, if you slice up a frame per second from every single Harry Potter movie, it costs, like, $180 for GPT-4V to ingest all eight Harry Potter movies, one frame per second at 360p resolution.[00:46:26] So $180 is the pricing for vision. Yeah. And yeah, actually, that's wild. At our hackathon last night, I skipped a lot of the party and I went straight to the hackathon. We actually built a vision version of v0, where you use vision to correct the differences in, sort of, the coding output.[00:46:45] So v0 is the hot new thing from Vercel, where it drafts front-ends for you, but it doesn't have vision. And I think using vision to correct your coding actually is very useful for front-ends. Not surprising.
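A sketch of the frame-slicing trick Simon describes: ffmpeg pulls one frame every 10 seconds, and a frame goes to the vision model. The filenames and prompt are examples; gpt-4-vision-preview is the model name the API launched with:

```python
import base64
import os
import subprocess

from openai import OpenAI

# Step 1: slice one frame every 10 seconds out of the video with ffmpeg.
os.makedirs("frames", exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", "match.mp4", "-vf", "fps=1/10", "frames/out%04d.png"],
    check=True,
)

# Step 2: send a frame to GPT-4V as a base64 data URL and ask for commentary.
client = OpenAI()
with open("frames/out0001.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Give commentary on this frame of a football match."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
    max_tokens=200,
)
print(response.choices[0].message.content)
```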
[00:47:09] swyx: I actually also interviewed Div Garg from MultiOn, and I said, I've always maintained that vision would be the biggest thing possible for desktop agents and web agents, because then you don't have to parse the DOM. You can just view the screen, just like a human would. And he said it was not as useful, surprisingly, because he's had access for about a month now, specifically to the Vision API.[00:47:28] And they really wanted him to push it, but apparently it wasn't as successful, for some reason. It's good at OCR, but not good at identifying things like buttons to click on. And that's the one that he wants. Right. I find it very interesting.

[00:47:31] Simon Willison: Because you need coordinates. You need to be able to say,

[00:47:32] swyx: click here.

[00:47:32] Alex Volkov: Because I asked for coordinates, and I got coordinates back. I literally uploaded a picture and said, hey, give me a bounding box. And it gave me a bounding box. And I also remember, like, the first demo. Maybe it went away from that first demo. Swyx, do you remember the first demo? Like, Brockman on stage uploaded a Discord screenshot, and for that Discord screenshot it said, hey, here's all the people in this channel, here's the active channel. So it knew, like, the highlighted, the actual channel name as well.[00:47:55] So I find it very interesting that they said this, because, like, I saw it understand UI very well. So I guess, like, we'll find out, right? Many people will start getting these

[00:48:04] swyx: tools. Yeah, there's multiple things going on, right? We never get the full capabilities that OpenAI has internally.[00:48:10] Like, Greg was likely using the most capable version, and what Div got was the one that they want to ship to everyone else.

[00:48:17] Alex Volkov: The one that can probably scale as well, which is, like, lower, yeah.

[00:48:21] Simon Willison: I've got a really basic question. How do you tokenize an image? Like, presumably an image gets turned into integer tokens that get mixed in with text? What? How? Like, how does that even work? And, ah, okay. Yeah,

[00:48:35] swyx: there's a paper on this. It's only about two years old, so it's, like, still a relatively new technique, but effectively it's convolutional networks that are reimagined for the vision-transformer age.

[00:48:46] Simon Willison: But what tokens do you... because the GPT-4 token vocabulary is about 30,000 integers, right?[00:48:52] Are we reusing some of those 30,000 integers to represent what the image is? Or is there another 30,000 integers that we don't see? Like, how do you even count tokens? I want tiktoken, but for images.

[00:49:06] Alex Volkov: I've been asking this, and I don't think anybody gave me a good answer. Like, how do we know the context length of a thing, now that, like, images are also part of the prompt? How do you count? Like, how does that work? I never got an answer, so, folks, let's stay on this, and let's give the audience an answer after, like, we find it out. I think it's very important for, like, developers to understand how much money this is going to cost them.[00:49:27] And what's the context length? Okay, 128k text tokens, but how many image tokens? And what do image tokens mean? Is that resolution-based? Is that, like, megabytes-based? Like, we need a framework to understand this ourselves as well.
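A partial answer to the token-counting question, per the accounting in OpenAI's vision pricing guide: it tells you what you are billed, not which integers the image actually becomes inside the model.

```python
import math

def image_tokens(width: int, height: int, detail: str = "high") -> int:
    """Billed tokens for one image, per OpenAI's published vision pricing:
    "low" detail is a flat 85 tokens; "high" detail scales the image to fit
    in 2048px, then the short side down to 768px, and charges 170 tokens
    per 512x512 tile plus an 85-token base."""
    if detail == "low":
        return 85
    scale = min(1.0, 2048 / max(width, height))
    w, h = width * scale, height * scale
    scale = min(1.0, 768 / min(w, h))
    w, h = w * scale, h * scale
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 85 + 170 * tiles

print(image_tokens(640, 360))    # one 360p video frame: 85 + 170*2 = 425
print(image_tokens(1920, 1080))  # one full-HD screenshot: 85 + 170*6 = 1105
```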
[00:49:44] swyx: Yeah, I think Alessio might have to go, and Simon, I know you're busy at a GitHub meeting.

[00:49:48] In person experience

[00:49:48] swyx: I've got to go in 10 minutes as well. Yeah, so I just wanted to do some in-person takes, right? A lot of people... we're going to find out a lot more online as we go about our learning journey...
It might sound like a joke, but this week, Elliot Williams and Tom Nardi start things off by asking how you keep a Polish train from running. Like always, the answer appears to be a properly modulated radio signal. After a fiery tale about Elliot's burned beans, the discussion moves over to the adventure that is home CNC ownership, the final chapter in the saga of the Arecibo Telescope, and the unexpected longevity of Microsoft's Kinect. Then it's on to the proper way to cook a PCB, FFmpeg in the browser, and a wooden cyberdeck that's worth carrying around. Finally, they'll go over the next generation of diode laser engravers, and take a look back at the origins of the lowly breadboard. Check out the links over at Hackaday if you want to follow along, and as always, tell us what you think about this episode in the comments!
New hosts
Welcome to our new hosts: HopperMCS, Reto.

Last Month's Shows

Id    Day  Date        Title                                              Host
3891  Mon  2023-07-03  HPR Community News for June 2023                   HPR Volunteers
3892  Tue  2023-07-04  Emacs package curation, part 1                     dnt
3893  Wed  2023-07-05  Game card design resources                         Klaatu
3894  Thu  2023-07-06  The Page 42 Show: Ugly News Week, Show's Epoch!    HopperMCS
3895  Fri  2023-07-07  What's in my backpack                              Stache_AF
3896  Mon  2023-07-10  The Brochs of Glenelg                              Andrew Conway
3897  Tue  2023-07-11  HPR AudioBook Club 22 - Murder at Avedon Hill      HPR_AudioBookClub
3898  Wed  2023-07-12  The Oh No! News.                                   Some Guy On The Internet
3899  Thu  2023-07-13  Repair corrupt video files for free with untrunc   Paul Quirk
3900  Fri  2023-07-14  Preparing Podcasts for Listening                   Ahuka
3901  Mon  2023-07-17  Time Managment                                     operat0r
3902  Tue  2023-07-18  Introduction to a new series on FFMPEG             Mr. Young
3903  Wed  2023-07-19  Why I don't love systemd (yet)                     deepgeek
3904  Thu  2023-07-20  How to make friends                                Klaatu
3905  Fri  2023-07-21  Presenting Fred Black                              folky
3906  Mon  2023-07-24  The Oh No! News.                                   Some Guy On The Internet
3907  Tue  2023-07-25  My introduction show                               Reto
3908  Wed  2023-07-26  Emacs package curation, part 2                     dnt
3909  Thu  2023-07-27  Permission tickets.                                one_of_spoons
3910  Fri  2023-07-28  Playing Civilization II                            Ahuka
3911  Mon  2023-07-31  An overview of the 'ack' command                   Dave Morriss

Comments this month
These are comments which have been made during the past month, either to shows released during the month or to past shows. There are 20 comments in total.

Past shows
There are 4 comments on 3 previous shows:

hpr3876 (2023-06-12) "Recording An Episode For Hacker Public Radio" by Ryuno-Ki.
  Comment 1: Reto on 2023-07-01: "Good information about recording"

hpr3883 (2023-06-21) "Emergency Show: How to demonstrate the power of condensing steam" by Mike Ray.
  Comment 1: dnt on 2023-07-15: "Clap!"

hpr3889 (2023-06-29) "comm - compare two sorted files line by line" by Ken Fallon.
  Comment 1: Reto on 2023-07-08: "KDirStat is dead, long live QDirStat!"
  Comment 2: Ken Fallon on 2023-07-12: "QDirstat is nice but I meant kdiff3"

This month's shows
There are 16 comments on 11 of this month's shows:

hpr3891 (2023-07-03) "HPR Community News for June 2023" by HPR Volunteers.
  Comment 1: norrist on 2023-07-03: "solocast updates"
  Comment 2: Kevin O'Brien on 2023-07-04: "My truck"

hpr3892 (2023-07-04) "Emacs package curation, part 1" by dnt.
  Comment 1: Klaatu on 2023-07-05: "I love this topic"
  Comment 2: dnt on 2023-07-11: "Do it!"

hpr3894 (2023-07-06) "The Page 42 Show: Ugly News Week, Show's Epoch!" by HopperMCS.
  Comment 1: Kevin O'Brien on 2023-07-08: "I loved the show"

hpr3896 (2023-07-10) "The Brochs of Glenelg" by Andrew Conway.
  Comment 1: dnt on 2023-07-29: "Ruins"

hpr3900 (2023-07-14) "Preparing Podcasts for Listening" by Ahuka.
  Comment 1: Hipstre on 2023-07-14: "Limiter/GPodder"
  Comment 2: Eugene on 2023-07-16: "No need for podcast preprocessing"
  Comment 3: Kevin O'Brien on 2023-07-17: "Sansa Clip+"

hpr3901 (2023-07-17) "Time Managment" by operat0r.
  Comment 1: Reto on 2023-07-18: "aCalendar on Android"

hpr3902 (2023-07-18) "Introduction to a new series on FFMPEG" by Mr. Young.
  Comment 1: dnt on 2023-07-29: "ffmpeg"

hpr3903 (2023-07-19) "Why I don't love systemd (yet)" by deepgeek.
  Comment 1: dnt on 2023-07-29: "systemd"

hpr3904 (2023-07-20) "How to make friends" by Klaatu.
  Comment 1: dnt on 2023-07-29: "Friends"
  Comment 2: Beeza on 2023-08-02: "Frienships"

hpr3909 (2023-07-27) "Permission tickets." by one_of_spoons.
  Comment 1: dnt on 2023-07-29: "Great show, keep em coming!"

hpr3910 (2023-07-28) "Playing Civilization II" by Ahuka.
  Comment 1: dnt on 2023-07-29: "Game mechanics"

Mailing List discussions
Policy decisions surrounding HPR are taken by the community as a whole. This discussion takes place on the Mail List which is open to all HPR listeners and contributors. The discussions are open and available on the HPR server under Mailman. The threaded discussions this month can be found here:
https://lists.hackerpublicradio.com/pipermail/hpr/2023-July/thread.html

Events Calendar
With the kind permission of LWN.net we are linking to The LWN.net Community Calendar. Quoting the site: "This is the LWN.net community event calendar, where we track events of interest to people using and developing Linux and free software. Clicking on individual events will take you to the appropriate web page."

Any other business

The HPR Static Site
As mentioned in the last Community News episode, the HPR database and website were moved to a new server, and the static site generator written by Rho`n was used to generate the non-interactive part of the website. Since then, there has been a process of adapting the software to the new configuration. Unfortunately Rho`n has not been available during this process, but we are gradually learning our way around his excellent software and making changes to suit our needs. If you spot any problems or have ideas for new features, please raise issues on the Gitea repository at: https://repo.anhonesthost.net/rho_n/hpr_generator/issues

Reserve Queue
A policy change is required in the use of the reserve queue. When there are unfilled slots between 5 and 7 days in the future, episodes in this queue will be used to fill them. This extra time is required because of the time it can take to process a show and load it to the Internet Archive.

Bram Moolenaar, author of Vim, dies
There was an announcement from Bram's family today (2023-08-05) that he died on August 3rd 2023 from a medical problem that worsened recently. Bram Moolenaar's page on Wikipedia
Links
- FFMPEG Website
- Adobe - containers vs encoders
- Containers: Wikipedia
- Encoders/codecs: Audio, Video
- Encoding vs Decoding
Attribution: Learning Freedom Group
Can Ubuntu make a great immutable desktop? We're trying the brand-new "Everything is a Snap" Ubuntu Core Desktop.
Dan has big updates about his smart home setup, and Merlin is getting intimate with a robot.
This blogpost has been updated since original release to add more links and references.

The ChatGPT Plugins announcement today could be viewed as the launch of ChatGPT's "App Store", a moment as significant as when Apple opened its App Store for the iPhone in 2008 or when Facebook let developers loose on its Open Graph in 2010. With a dozen lines of simple JSON and a mostly-English prompt to help ChatGPT understand what the plugin does, developers will be able to add extensions to ChatGPT to get information and trigger actions in the real world. OpenAI itself launched with some killer first party plugins for:
* browsing the web,
* writing AND executing Python code (in an effortlessly multimodal way),
* retrieving embedded documents from external datastores,
* as well as 11 launch partner plugins from Expedia to Milo to Zapier.

My recap thread was well received. But the thing that broke my brain was that ChatGPT's Python Interpreter plugin can run nontrivial code - users can upload video files and ask ChatGPT to edit them, meaning it has now gone beyond mere chat to offer a substantial compute platform with storage, memory and file upload/download. I immediately started my first AI Twitter Space to process this historical moment with Alessio and friends of the pod live. OpenAI's Logan (see Episode 1 from *last month*…) suggested that you might be able to link ChatGPT up with Zapier triggers to do arbitrary tasks! And then Flo Crivello, who just launched his AI assistant startup Lindy, joined us to discuss the builder perspective. Tune in on this EMERGENCY EPISODE of Latent Space to hear developers ask and debate all the issues spilling out from the ChatGPT Plugins launch - and let us know in the comments if you want more/have further questions!

SPECIAL NOTE: I was caught up in the hype and was far more negative on Replit than I initially intended as I tried to figure out this new ChatGPT programming paradigm. I regret this. Replit is extremely innovative and well positioned to help you develop and host ChatGPT plugins, and of course Amjad is already on top of it. Mea culpa.

Timestamps
* [00:00:38] First Reactions to ChatGPT Plugins
* [00:07:53] Q&A: Keeping up with AI
* [00:10:39] Q&A: ChatGPT Interpreter changes Programming
* [00:12:27] Q&A: ChatGPT for Education
* [00:15:21] Q&A: GPT4 Sketch to Website Demo
* [00:16:32] Q&A: AI Competition and Human Jobs
* [00:18:44] ChatGPT Plugins as App Store
* [00:34:40] Google vs ChatGPT
* [00:36:04] Nader Dabit on Selling His GPT App
* [00:43:16] Q&A: ChatGPT Waitlist and Voice
* [00:45:26] LangChain with Human in the Loop
* [00:46:58] Google vs Microsoft vs Apple
* [00:51:43] ChatGPT Plugin Ideas
* [00:53:49] Not an app store?
* [00:55:24] LangChain and the Future of AI
* [01:00:48] Q&A: ChatGPT Bots and Cronjobs
* [01:04:43] Logan Joins Us!
* [01:07:14] Q&A: Plugins Rollout
* [01:08:26] Q&A: Plugins Discovery
* [01:10:00] Q&A: OpenAI vs BingChat
* [01:11:03] Q&A: App Store Monetization
* [01:14:45] Q&A: ChatGPT Plugins API
* [01:17:17] Q&A: Python Interpreter
* [01:19:58] The History of App Stores and Marketplaces
* [01:22:40] LindyAI's Flo Crivello Joins Us
* [01:29:42] AI Safety
* [01:31:07] Multimodal GPT4
* [01:32:10] Designing AI-safe APIs
* [01:34:39] Flo's Closing Comments

Transcript

[00:00:00] Hello and welcome to the Latent Space emergency episode. This is our first ever, where ChatGPT just dropped a plugin ecosystem today, or at least they demoed their plugins. It's still on the waitlist, but it is the app store moment for AI.
And we did an emergency two hour space with Logan from OpenAI and Flo Crivello from Lindy AI and a bunch of our friends.[00:00:28] And if you ever wanted to listen to what it's like to hear developers process in real time when a new launch happens, this is it. Enjoy.

[00:00:38] First Reactions to ChatGPT Plugins

[00:00:38] I assume everyone has read the blog post. For me the big s**t was... did you see Greg Brockman's tweet about FFMPEG? I did not. I should check it out. It is amazing. Okay, so ChatGPT can generate Python code. We knew this, this is not new. And they can now run the code that it generates.[00:00:58] This is not new. I mean, this is good. It's not, like, surprising. It's fine. It can run FFMPEG code. You can upload a file, ask it to edit the video file, and it can process the video file, and then it can give you the link to download the video file. So it's a general purpose compute platform.[00:01:22] Wow. Did they show how to do this? Agents? I just pinned it. Did I tune into this space right? I dunno how to use it. Yeah, it's showing up there. Okay. And, by the way, hi to people. I don't know how to run spaces; it's not something I normally do.[00:01:42] You wanna say something? Please request. But yeah, reactions? Have a look at this video, because it generates and runs video editing code. You can upload any arbitrary file. It seems to have good enough compute and memory and file storage. This is not chat anymore, man. I don't know what the hell this is.[00:02:01] What is this?

[00:02:02] Well, progress has been faster than I expected. That's all I can... I don't know how to respond. Yeah, it's pretty wild. I'm wondering how this will affect, like, opening up the app store, different from, let's say, the Apple App Store when it opened up. Because there are a lot of big companies just building stuff already, and how, like, a small developer will be able to build something that's not already there.[00:02:31] I dunno. It will be interesting. So one thing that's really nice: have you seen the installation process for the plugins? It's right at the bottom of the blog post, and you have to play the video to kind of see it, but literally anybody can write your own plugin. It's a small little JSON file. It's literally like 10 lines of code.[00:02:49] It's 10 lines: you describe what your plugin does in English, you give it an OpenAPI spec. That's it. That's the plugin. It's amazing. You can distribute your plugin. This is easier than browser extensions' Manifest V3, which nobody knows how to use. This is English.[00:03:15] You write English.
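The "small little JSON file" under discussion is the plugin manifest, ai-plugin.json. A sketch following the fields in OpenAI's plugin docs, with a placeholder todo-list service; the URLs and email are invented:

```python
import json

# Sketch of an ai-plugin.json manifest. The field names follow OpenAI's
# plugin documentation; the service, URLs, and email are placeholders.
manifest = {
    "schema_version": "v1",
    "name_for_human": "TODO Plugin",
    "name_for_model": "todo",
    "description_for_human": "Manage your TODO list.",
    # The "mostly-English prompt": the model reads this to decide when
    # and how to call your API.
    "description_for_model": (
        "Plugin for managing a TODO list. Use it when the user wants to "
        "add, view, or remove TODO items."
    ),
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

with open("ai-plugin.json", "w") as f:
    json.dump(manifest, f, indent=2)
```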
So, yeah. I mean, I think there'll be a lot of people trying to develop for this, if they can get access, which, you know, everybody's on a waitlist. I've signed up to 200 waitlists this week. I wonder if it'll be different if you sign up as a developer or as a chat user.[00:03:35] Hopefully it doesn't matter, right? Use different emails and sign up to both. Let's just see. In fact, use ChatGPT to generate, like, plausible sounding reasons for why you want to build whatever, 'cause they don't...[00:03:47] But yeah, I mean, how do you compete? I don't know, man. You know, OpenAI definitely has a partnership strategy to do what they do here, which means they're essentially picking favorites. So if you're a competitor of Expedia, Kayak, OpenTable, Wolfram, Zapier, you're s**t out of luck, kind of, you know?[00:04:06] Cause these are presumptive winners of their spaces, right? And it'll happen in too many industries, probably. Right. I was thinking about maybe summarization or, I don't know, YouTube video summarization, but there seems to be some application of that already in the examples that you shared. Yeah.[00:04:26] They have shared that, but I think there's always room to improve the experience. It's just, you know... It's interesting which platform, like sort of platform strategy, right? Like, if you write an OpenAI chat plugin, you instantly gain access to a hundred million users, right? All of them can instantly use your thing.[00:04:47] Whereas if you are a standalone app or company, good luck getting people to use OpenAI through you. There's just no point. So you'd much rather just be on the OpenAI platform and promote there. The fortunate thing is they don't have some kind of popularity ranking yet. Actually, someone should register, like, openaipluginslist.com or something, where everyone can submit their own OpenAI plugins and, like, upload them, review them. Cuz this is not a complete app store without reviews and a rating system and a reputation system, and probably monetization. OpenAI probably doesn't care about that.[00:05:26] But I mean, I can go start that right now. F**k. I can go start it right now.[00:05:34] Yeah, it'll take a while, right? Like, this is the basic version of the app store evolving. But this is a pretty basic version. Yeah. The basic version can browse the web, it can write and execute code, it can retrieve data from documents, right? So all the document search startups just died.[00:06:02] There's like five of these in Y Combinator right now. Oh.

[00:06:08] Examples. Pretty crazy how they use the FFMPEG library, or, I dunno if I'm saying that correctly, but right in there. You don't need to write code to...[00:06:27] it's crazy.
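The FFMPEG remark above refers to the interpreter writing and running code against an uploaded file. A sketch of the kind of code it generates; the filenames and the 30-second trim are example parameters, not anything OpenAI ships:

```python
import subprocess

# The sort of thing ChatGPT's Python sandbox generates when you upload a
# video and ask for an edit: shell out to ffmpeg, then offer the result
# back for download. Filenames and parameters here are illustrative.
subprocess.run(
    [
        "ffmpeg",
        "-i", "uploaded_video.mov",  # the file the user uploaded to the chat
        "-t", "30",                  # keep only the first 30 seconds
        "-vf", "scale=640:-2",       # downscale to 640px wide, keep aspect
        "output.mp4",                # re-encoded result offered back
    ],
    check=True,
)
```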
Like every single day if something new comes up and, or you can like get the, the wait list invitations to, to use the new products.[00:07:52] Q&A: Keeping up with AI[00:07:52] Like, for example, today I just got the, the one from GIK cli and I was just playing around with that. And then suddenly I started to see all of the, these Twitter threads with announcements. It's getting crazy just to follow up with, with the stuff. And every day something new comes up and started. I was starting to feel a lot of formal, you know, like, h how do you keep up with all of these?[00:08:12] Or how do you focus? Does anyone have any, any good frameworks for that? Well, feel free to respond. Also, we, we have some more room if anyone wants to share your feelings. This is a, this is a safe space to share your feelings because. We all dunno how to react right now. I don't know. I just, I, I, I have a few notifications on for OpenAI employees and people that I do that I think do good recaps.[00:08:37] So in other words, find the people who are high signal and who do a lot of gathering of other people's stuff for, and then just subscribe to those people and trust that that is 90% of it and forget the 10%[00:08:57] Alright. And Sean probably, I have, I have another question. So I can't really figure out like what's left for us to do, you know, without AI tools. Like what, what is we learn next? You know, there's no learning some coding stuff, because you can only do that. You know, we can't do arts, we can't do poetry.[00:09:17] Farming[00:09:17] bakery, probably making things with your hands. Enjoying the sun.[00:09:23] Do you guys think this should be regulated? Like you don't go more than like the speed is going? I don't know. I dunno. There's, there's no point. Like if, like, if you regulate OpenAI, then someone else will come along. The secret is out now that you can't do this, and at most you'll slow things down by 10 years.[00:09:44] You called the secret. This is the end. . Yeah. Yeah. I, I don't know. Secret is out. China's trying to do it right, so I don't know if people have seen, but like China was, was fairly strict on crypto, which is probably good for them. And now they're, they're also trying to clamp down on AI stuff, which is funny because oa like they're, you know, the m i t of of China Ihu, I was actually doing like producing like really good bilingual models.[00:10:10] But yeah, they, they seem to be locking this down, so we'll see. We'll see. Right? Like you know, in, in, in sort of the, the free world there, there's open innovation that may be unsafe. OpenAI, try to be safe. You know, there, there's a big part of the blog post that was talk, talking about red team meeting and all that.[00:10:24] I'm sure every one of us skipped it. I skipped it. And then and then we just care about capabilities and now that, you know, every time people have their minds opened, like, I did not know Ron. EG in chat.[00:10:38] Q&A: ChatGPT Intepreter changes Programming[00:10:38] Now that I know my conception of what a REPL is, or literate programming or what a notebook is, is completely blown outta the water, right?[00:10:44] Like there's no like this, this is a new form factor for me. So not now that I know that I won't be innovating on that or trying to, to shape this into something that I can use because I want to use this, and this is, this is clearly better. Does, does this ha have to do with, with the, like AI as backend?[00:11:00] Yeah. Ideas that have been, yeah. You know, GP as backend. 
So apparently I had a few friends reach out to those guys, and they're not doing that, because it's not mature enough. Like, it works for a simple demo. So, for those who don't know, ScaleAI did a hackathon, I think two months ago, just before I did mine.[00:11:18] And the winner of the hackathon was something called "GPT is all you need for backend". And they actually... what did they register? Some backend.com domain. But as far as I can tell, they're not gonna start a company based on that, because if you even push a little bit, it falls apart, right? So GPT-3 wasn't good enough for that.[00:11:36] Maybe GPT-4 is, maybe GPT-5... but then it'll still be super slow and super expensive. Like, you don't want to run, you know, a large language model on every API request. So I don't know. I think it'll be good for scaffolding. I think it'll be good for one-off use cases. Like, hey, I need to edit this video on an ad hoc basis.[00:11:53] I don't want to learn FFMPEG. I don't need to now, because I can just talk to ChatGPT. That makes sense. But if you want a reliable, scalable backend, you probably don't want to run it on a large language model. But that's okay, because the language model can probably help you write it rather than run it.

Hey, Alessio. Hey guys. Oh yeah. Hey guys. What's up? Hey, yeah, we're just... there's no structure. Just drop your reactions. Let's go. Awesome. Awesome, awesome guys.

[00:12:26] Q&A: ChatGPT for Education

[00:12:26] What do you think, Shawn... what do you think if you could use, you know, AI in the education field? Like, you know, like a personal tutoring system for students? Some thoughts on automating education?

Yeah. That is the holy grail. This is called Bloom's two-sigma problem. Like, one of the big issues of education is we have to teach to the slowest person in the class. And, you know, I'm a beneficiary of a gifted education system, where they take out, you know, nominally high-IQ people and put them in a separate class.[00:12:56] And yeah, we did do better. What if we can personalize every student's experience? There's some educational theory (this is called Bloom's two-sigma problem) where the results will be better. I think that we are closer, but, like, I still hope that we're pretty far, which sounds like a negative. Like, why do I want to deny education to students?[00:13:18] Because if we are there, then we will have achieved theory of mind for AI. The AI has a very good model, is able to develop a representation of who you are, is able to develop theories about who you are, and test them, in a short amount of time. And it's a very dangerous path to go down. So I want us to go slowly rather than fast on the education front.[00:13:41] Does that make sense? Yeah, definitely. It makes a lot of sense. And yeah, definitely, I think personalizing the education for each student and making it work the best way would be great. And, first of all, I have a very curious question, you know: this week was full of launches, so how are you guys keeping up? Because we're not. I created the space because I cannot handle it.[00:14:05] Today, today was my breaking point. I was like, I don't know what's happening anymore. Yeah, like, every single day I'm just in constant anxiety that, like, everything I assumed about the world is gonna be thrown up. Like, I don't know how to handle it.
This is a therapy session, so feel free to express.[00:14:21] Definitely. It's been a very overwhelming feeling for every one of us, I think, you know, like, the past two weeks in the industry... it was definitely a lot. We are definitely open to, you know, discussing more about it. Thanks a lot for this space, Sean. Yeah, appreciate it. Yeah, one more thing.[00:14:39] So I think that the most constrained version of the education use case is language teaching. So there are a few language teachers out there... Speak, I think, is one of them that is an OpenAI partner, and they're also part of the ChatGPT plugin release. But there are also other language tutor platforms.[00:14:57] You can certainly try them. There was one that was released maybe like four or five months ago that you can try, to see what the experience is like. And you can tell when the teacher has no idea who you are, and it breaks the illusion that you're speaking to another human. So you can experience that today, and decide for yourself if we're ready for that.[00:15:14] I hope that we're not ready, and it seems like we're not ready. Yeah, definitely. Thanks a lot for sharing. And guys, what do you think?

[00:15:19] Q&A: GPT4 Sketch to Website Demo

[00:15:19] Like, in the launch of GPT-4, they showed that you could, you know, generate apps and web apps just from, you know, a single simple sketch, you know, just starting from a sketch. So what do you think? Like, how would it impact the industry? And it's not just that... that sketch was a very shitty sketch, right? It was just, like, drawn on a piece of paper. But if you combine that with the multimodal... they had another part of that demo where they had a screenshot of the Discord, the OpenAI Discord, and...[00:15:57] they put it in, and it, like, read the entire screen to you. And if you can read the entire screen, you can code the entire screen. So it's over, like...
So if your job is valuable because, you know, a unique syntax or like, you know, how to transform things from like words to syntax, I think that will be a lot less useful going forward.[00:17:45] But the Symantec piece is still important. So a lot of product work, it's not just writing CSS and HTML and like the backend for it. It's a lot more than that. So I just thinking about how do you change your skills to do that. But yeah, even the sketch, you know, you gotta like, you gotta draw the sketch and to draw the sketch, you gotta know where the button should go.[00:18:06] You know, you have, you know, incorrect with it. Yeah. I'm just processing this as I, I just read the whole thing as well. And Yeah, I mean, it's been a wild wild couple of weeks and it's gotten me thinking that maybe all our role was over the past couple years was we were just middlemen to talk to computers, right?[00:18:27] So we're sitting in between, it's over man PMs or business folks or whoever wanna build a product. And then as a software developer, you're just a middle manish talking to the machine and it seems like. N LP is the way forward and, oh, yeah. Yeah. It's, it's been it's been, it's been a while.[00:18:42] ChatGPT Plugins as App Store[00:18:42] Couple of weeks. It's, I feel like we all just have to move either move upstream or, or find other jobs. You just gotta move upstream, either toward product directly. Cuz right now the plugin is yeah, is, is just you know, it's still a very sanitized UI that is controlled by OpenAI. But imagine them opening up the ui portion as well.[00:19:03] So you no longer need to have a siloed product that needs to integrate. ChatGPT instead you can bring your product directly into into ChatGPT, I don't think exactly. I think that would be probably the next next logical move after this, and I'm sure they're already thinking about that.[00:19:22] So that's a great, I don't know if this is, it's wild. What are you guys think? Yeah. Yeah. Like, so before you came up, right, I was, I was talking about this like ChatGPT has at least a hundred million users. Why would you bring people to your platform rather than write a plugin for ChatGPT and use their platform?[00:19:39] It's an open question now. Zapier just launched their integration. OpenAI and OpenAI just launched their integration of Zapier. Which one is gonna be more interesting? Probably OpenAI.[00:19:50] Totally a hundred percent . this is the app store of wow, our century of our decade. Like, I don't know, maybe century. I, I think the thing with ster though, if you think about it, like how many native apps do you download every week, every month versus like how many web things you use. So I think it's all about whether or not long-term opening eyes incentivize to keep broadening the things you can do within the plugin space.[00:20:17] And I think the lab, you know, as this technology gets more widespread, they're gonna have a lot more pressure from regulators, safety, blah, blah, blah. So I'm really curious to see you know, all, all the, all the government stuff that they'll, they'll have a congressional on this in six months and by then it will be completely irrelevant.[00:20:34] It's like that beside that time, they, they, they called it the GameStop guy after he made like 20 million on GameStop. And he just, you know, he was like, yeah, you know, followed the rules, made a bunch of money for those who don't know, unless you're our co-host. 
On that... we were supposed to drop an episode today, which I was supposed to work on, and then ChatGPT dropped this thing, and now I can't think about anything else.[00:20:59] So this is my excuse for not working on the podcast today. I know, it's funny, we have like three, four recorded ones, and then last week GPT-4 came out and we're like, okay, everybody's talking about this... these are irrelevant. What else? Anything else?

But I'm really excited about... I feel like the first use case for this, and I think he tweeted about it too, is like: before, if you had to do, like, data reformatting and stuff like that, it was really hard to do programmatically.[00:21:32] You know, like, you didn't have a natural language interface. And before, if you had to integrate things together, like, you could explain it very easily, but you couldn't, like, put the APIs together, and now they kind of removed all that part. So I'm excited to see what this looks like.[00:21:48] For commercial use cases, you know, you could see, like, is there gonna be, like, a collaborative ChatGPT, where, like, you're gonna have two, three people in the same conversation working on things? I think there's a lot of UI things that will improve. And also, we have Logan from OpenAI here for a second... almost pulled him up, but I'm sure you cannot talk about it.[00:22:07] But yeah, it'll be interesting to see. Yes, sir. We're extremely excited. Extremely excited. I don't know what else... So, as far as I can tell, there's Hacker News and Twitter. I haven't looked at Reddit yet, but I'm sure there's a bunch of reactions on Reddit.[00:22:23] I'm sure there's the OpenAI Discord that we can also check out. I got locked out of the Discord at some point, but yeah. Anyone else see news, demos, tweets? The whole point of this is that it's live, so please feel free to share in comments or anything like that. But yeah. Yeah, the craziest thing I saw was Mitchell from HashiCorp tweeted about how the integrations actually work: you just write an OpenAPI spec and then just use natural language to describe what it's supposed to do. And then their model does everything. I wonder if they're using the off-the-shelf model, or if they have, like, a fine-tuned model to actually run integrations.[00:23:02] I wonder... I don't think they'll ever say it. Knowing them, probably they would just use the base one. Cuz they want, like... I think OpenAI kind of wants a God model, right? There's no point. It's not intellectually interesting to do small models, like, it's trivial. Yeah. This is a minor optimization problem as far as the long arc of history; the point is to build AGI, safe AGI, and I do think this is kind of safe, right?[00:23:33] Like... one of the criticisms that people were saying on Hacker News was that this is very closed. Like, it is an app store. At any point OpenAI can randomly decide to close this, like they did for Codex, and then they changed their minds. Whereas if you use something like LangChain, it is more open. But at the same time, like, clearly this is a better integration path than LangChain.[00:23:56] Like, I'd much rather write this kind of plugin than a LangChain plugin. So they've managed to... I mean, they know how to ship, man. Like, they're an AI research lab, but they also know how to ship product. Mm-hmm. Yeah.
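What Mitchell is pointing at: the OpenAPI spec plus plain-English descriptions is the entire integration surface; the model reads the text and figures out the calls. A sketch with an invented weather endpoint:

```python
# Sketch of the OpenAPI spec a plugin serves. The plain-English summary and
# description are what the model reads to decide when to call the endpoint.
# The weather service and its path are invented for illustration.
OPENAPI_YAML = """\
openapi: 3.0.1
info:
  title: Weather Plugin
  description: Look up the current weather for a city.
  version: v1
paths:
  /weather/{city}:
    get:
      operationId: getWeather
      summary: Get the current weather for a city, in metric units.
      parameters:
        - in: path
          name: city
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current temperature and conditions, as JSON.
"""

with open("openapi.yaml", "w") as f:
    f.write(OPENAPI_YAML)
```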
I'm curious to see what the pricing model is gonna look like. Also, if I'm writing the plugin, this is great, because I don't even have to take care of the compute; I just plug it in and they actually run everything for me.[00:24:26] Yeah, but how will it be monetized? I mean, if the model is answering from their plugin, you know, Expedia, people will not go to their website. Yeah. I have no idea. I don't think they've said, and they probably don't super care. Yeah. Because in the app store, it's transaction driven.[00:24:46] But on ChatGPT, you're just paying a flat fee every month, so you can't really do revenue share on a flat fee. And I don't think they'd use the Spotify model. But why not? No, wait, wait, wait. Why not? The Spotify model works because Spotify has power, right?[00:25:05] OpenAI has power. Same thing. They have all the audience. Yeah, but every song is like the same value: if you listen to song X or song Y, you're gonna make the same money. Whereas if I'm calling the API for the meme generator, or I'm calling the API for the business summary thing, they're probably gonna cost different amounts, you know? So it's kind of hard for OpenAI to model and say: hey, okay, we're going from 20 to 35 bucks a month.[00:25:36] But then how do you actually do royalties on a per-plugin basis? How do people decide what royalties to negotiate? This probably needs to be a flat fee, but I dunno. Or you put your credit card into OpenAI, and then every time you wanna use a plugin, you pay for it separately. UBP: usage-based pricing all the way, and then you just get billed at the end of every month.[00:25:58] Exactly. The only question mark is how much OpenAI values the training data they get, and how much they wanna subsidize the usage. Although they have promised not to use any of our usage data for training. Oh, but I think with the plugins it's a different thing.[00:26:16] Like, you could easily see how requests are usually structured for these things, you know? Are people searching? How are people searching for flights and stuff like that? I don't know, I haven't read the terms for the actual plugins. Well, if anyone has, please come up to speak, cuz we're all processing this live.[00:26:37] This is the therapy session. Yeah, go ahead. One thing I see is basically you have to change the plugin, I mean, to ask anything, even if you did browsing, right? I mean, they are becoming a direct competitor to Microsoft also, I think, because now a user can actually, instead of using Bing chat or Google, just[00:27:04] select the browsing plugin and basically get all the updated data. And the other thing I see is that you have to change the plugins. Like, if you want to use the Expedia data, I don't know how it'll fit with the browsing plugin, or whether you can select multiple plugins. But yeah, it is interesting.[00:27:23] I mean, if we get access... yeah. Actually, there is no browsing plugin. The browsing is a new model. So just like you can select GPT-3, GPT-3.5, GPT-4, there's a new model now that says browsing alpha.
So you can use ChatGPT in browsing mode, and then you can use it in plugins mode, which is a different model again.[00:27:45] So the plugins and browsing don't cross over.[00:27:51] Oh, that's interesting. And how do you see this all shaking out? Are they becoming a competitor to Microsoft, or how are they playing it? I mean, Bing is... by the way, this killed the Bing waitlist, cuz you don't need to wait for Bing. You can just use the browsing mode of ChatGPT.[00:28:11] How does it compete? It competes for sure. I don't think Microsoft cares. I don't think OpenAI cares. This is one of those things where, you know, they're two friends and they're clearly winning, so who cares? I don't imagine it takes any of their mental bandwidth at all.[00:28:29] Yeah. The main question is how Google is competing. Well, let's see. Bard is out there. I haven't gotten access yet, but it could be interesting. Again, it doesn't seem like they have the shipping capacity or velocity of OpenAI and Microsoft, and that is probably going to bite them eventually, because there's already been a big brain drain.[00:28:53] Something like four of the top Google Brain researchers left Google Brain for OpenAI in January, and those are just the ones that I know about. I imagine there's quite a bit of brain drain and firing going on at Google, so who knows.[00:29:08] All right, well, any other topics, concerns? Hyperventilation? If you just wanna scream, I can turn down the volume and you can just go "ahh" for five minutes. That was literally me: I was like, I need to scream, because what is going on?[00:29:29] I said I'm filling out the waitlist form right now. Oh yeah, okay. So use ChatGPT to fill out that form, right? And then use a different email and fill out the form a different way. That maximizes your odds. I'm going to ask GPT-4 what plugin I should build. Right, right.[00:29:51] Exactly. Yeah. We can brainstorm what my plugins could be. Yeah, I think that will be a fun exercise. The main thing that breaks my brain is this whole ability to run code, right? This is a new notebook, a new REPL. Mm-hmm. It looks like it has storage and it has memory.[00:30:08] Probably it has GPUs. I mean, can we run LLaMA inside GPT?[00:30:19] I don't know, that's a model within a model. I think for me, most of it comes down to: if I have my own personal assistant, what do I want the assistant to do? I think travel is the first thing that comes to mind. Like, if I could use the Expedia plugin with my calendar...[00:30:39] Yeah, yeah, yeah. But it needs to know where I'm supposed to be going, you know? Like if I just add a calendar entry that says I'm going to Rome this week. Yeah. And then it can automatically read my calendar and say: okay, these are the times that you like to travel, I know that you don't like stopovers, and yada yada yada.[00:31:00] That's one thing I've always... we had this thesis at my prior firm about personalized consumer products. There are so many websites like this: I go to a lot of basketball games, and every time I open Ticketmaster or whatever, it always shows me the cheapest seats.
And that's not what I want; those are not the tickets I wanna buy, you know?[00:31:18] But no matter how many tickets I buy, it never remembers that. So I think a way to take all that information in and suggest, hey, I saw that there's actually a price drop for the specific seats that you want, not for just any seats, I think that would be a very good use case. So, a personal entertainment assistant for travel, going to shows, going to games.[00:31:41] That would be cool. That's what I'll submit on the waitlist; then we'll see if anybody cares. Right. Did you see Lindy? Yeah. Maybe you wanna recap Lindy for people; I'm gonna pin it up on the board. Yeah. So basically, this is kind of an AI assistant, Lindy AI, right?[00:32:03] Yeah, Lindy AI. It's on the board right now, for those who can see it through the space. I saw it at the AI Thinkers meetup the other day. You can basically create all kinds of personal workflows, and it kind of looks like integrations, like Zapier, but it's actually just natural language.[00:32:24] So you can pop this thing up on your desktop and say: I'm trying to hire 10 software engineers, so go on LinkedIn and find 10 software engineers. The next step: draft an email that says, I'm the CEO of this company and I'm trying to hire for my team; reach out if you wanna talk. Then the next step is: send emails to all these people. And it's gonna use People Data Labs or something else that they use on the backend to get the emails.[00:32:50] Then it actually sends the emails, and this just runs in the background as if it were actually you doing it. It's pretty neat that you don't have to write the actual integrations; it just uses natural language, so you're not bound by what they build. Theoretically, anything you wanna integrate with, you can just explain to it how it works, and it's gonna figure out how to do it.[00:33:12] So there's a waitlist now. Flo didn't give us any special access just because we were at the meetup, so I'm also waiting to get in, but it looks really, really good. Yeah, so generative AI's top use case is generating waitlists, right? We have never had such an easy way to generate a lot of waitlists.[00:33:30] A lot of signups for waitlists. Oh my God. So much interest. So much product market fit. But also, one thing about the point you're raising: by the way, I also pinned this up. Lindy can support complex rules like "no meetings on Fridays," "all one-on-ones on Monday,"[00:33:47] "I like my meetings back to back with five minutes in between." So it's arbitrary rules that you could not program into a normal assistant-type environment without a large language model. Which is kind of exactly what you want when you're booking your travel, right? Like: hey, I only like aisle seats, unless it's a flight that is less than one hour, in which case I don't care, right?[00:34:02] Mm-hmm. So stuff like that I think is super interesting. But also not a common use case. How many times do you travel a year? Like five, right? More than that, but yes, a lot of the time it's not a super widespread thing, especially if you don't do it for work.[00:34:21] If it's infrequent, you want high value, and if it's frequent, you can do low value, right?
Like, that's the binary tradeoff: Uber is frequent and low value, Airbnb is high value and infrequent, something of that nature. So you want products at intersections of that sort.[00:34:37] Google vs ChatGPT[00:34:37] But the other thing that you brought to my attention, and where there's room for Google to do something: did you notice that none of the OpenAI plugins are Google's? Because they're not friends. So ChatGPT will probably never have first-party access to Google Calendar, probably never your Gmail, and probably, you know, Google copies OpenAI again.[00:35:04] They will do: hey, we have all your docs.[00:35:10] Yeah, I'm interested in that, because I don't know if you remember, but on the first iPhone, YouTube came pre-installed on the homepage, and then, I forget when, but in one of the early iOS versions they removed it. So now obviously Google's not a friend. Who's gonna be a friend in the future, and who's not? Do we all have to hail our AI overlords?[00:35:33] Yeah, to get access to the only plugin system. Yeah. The only winners are brown CEOs. I think you're fine. Alright. But yeah, I just invited Nader, my old boss. Hi. You can't lurk; I want to hear from you. But also, you know, I think the Google point is actually novel.[00:35:50] I'll probably write something about that. I mean, I'll have to write something about this today, so please feed me things to write.[00:36:01] Nader Dabit on Selling His GPT App[00:36:01] Oh, there we go. Hey, what's up man? What do you think? I know it's not entirely your space, but you're all about the future, right? I mean, I did build and sell an AI company about a month ago. Wait, what? The travel app that was built on GPT-3? You tweeted about it. You sold it? Yeah.[00:36:21] It was getting like a hundred thousand visitors a day, like 60 to 80,000 uniques a day. And then... whoa. Yeah, I sold it within about 24 hours. I tweeted out that it was for sale, and I had like 30 or 40 people in my inbox. Whoa, whoa, whoa. Okay. But you're right, this isn't my domain of expertise.[00:36:41] It's fine, you just made a few thousand dollars on the side, it's cool. Wait, wait. So I saw you tweet your original thing, which was: hey, GPT-3 can plan your travel. I don't know what happened since then. Can you fill in the rest?[00:36:55] Yeah. So I travel a lot for work, like once a month, but I'm also very resource-constrained on my time, so I usually like to spend one day sightseeing. What I typically do is go to TripAdvisor, then Google around and look at all these things, and it usually takes me about an hour to figure out what I wanna do on my day or two off to go sightseeing.[00:37:14] And then I realized with GPT-3 you can just literally ask, and say: okay, within X number of hours, I'm gonna be in this city, I want an itinerary. You can give it all these different parameters, and it gives back a really good response. This was before GPT-3.5 or 4 was even out.[00:37:30] So I just built a nice UI on top.
Then I mapped over the results, linked to the Google searches for these different items, made it into a nice user interface, built it out, and tweeted it. And it just got a lot of traction and attention.[00:37:48] Like I said, I had around a hundred thousand visitors a day right off the bat, 60,000 uniques per day. So it was getting a shitload of traction, and I don't have a lot of free time to maintain or build something like that out. So it was costing me money, but I wasn't monetizing it.[00:38:06] The way I was thinking to monetize it would be affiliate links and stuff like that. So I could either spend time figuring out a way to monetize it, or just try to flip it and make some money. So I decided to sell it, and that was kind of it. I just sent a tweet out that said: this is for sale, who wants it?[00:38:25] And I had so much inbound from that that I had to delete the tweet within about two hours, cuz I was unable to keep up with all the people coming in. I fielded a couple of offers, found the person with the most money who could close within the shortest amount of time, and just took it.[00:38:44] Well done. Well done. Nice. Awesome. I need an applause button right here. Okay. So with that context: your thoughts on today, what you're seeing? There's Expedia there, but comment on travel or not travel, whatever you want. Yeah, I'm still reading up on the chat plugins, actually.[00:39:01] I was hoping to chime into this to learn a little more about how they work. I'm here on the page. I've had API access from fairly early on; I signed up and I've been using it a lot. I'm trying to find some different ways to integrate AI and machine learning into the blockchain space.[00:39:20] There's a lot of stuff around Sybil resistance that I think is gonna be a pretty interesting use case for us. It's obviously not the type of use case that's gonna be useful to the general public, maybe, but yeah, I'm actually still trying to understand how these plugins work.[00:39:35] So, have you seen the developer documentation? The developer documentation at the bottom? Yes, that's what I'm reading through as of now. I see the examples, which are pretty cool. Yeah. So the quote I put on Hacker News was: this is OpenAI leveraging ChatGPT to write OpenAPI to extend OpenAI's ChatGPT.[00:39:58] I'm confused, but it sounds sick. But yeah, OpenAPI, you know, not to be confused with OpenAI, is randomly the perfect spec for OpenAI to navigate, because it's somewhat plain English. You just supply a description for the model, and you describe an auth method. They actually provided a link to a repo where you can see some examples.[00:40:20] The examples are not very fleshed out, but you can do bearer auth, and I assume you can do whatever kind of auth you like. Then you just provide a logo URL, a legal info URL. It's not that much. This is 10 times better than a Chrome manifest.[00:40:37] Like manifest v3. Yeah, I mean, I'm reading through some of these examples, and a lot of them are in Python.
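For reference while they read: the manifest being described is a small JSON file. Here is a sketch of roughly what it looks like, based on the docs under discussion. Since JSON carries no comments, note here that every example.com URL and the "todo" naming are placeholders, and the exact field set may differ from the live spec:

```json
{
  "schema_version": "v1",
  "name_for_human": "TODO Plugin",
  "name_for_model": "todo",
  "description_for_human": "Manage your todo list.",
  "description_for_model": "Plugin for adding and listing items on the user's todo list.",
  "auth": { "type": "none" },
  "api": { "type": "openapi", "url": "https://example.com/openapi.yaml" },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

The `description_for_model` field is the part doing the work: it is the prose the model reads when deciding whether to call your API, which is exactly the "somewhat plain English" property being praised above.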
I wish they had more JavaScript stuff, but I would say "10 times" would be kind of an understatement, if I'm understanding how some of this stuff is gonna work. English is all you need, man.[00:40:53] English is all you need.[00:40:57] And then I think buried in the video is the self-serve experience, right, where you specify your own plugin. If you're first party, congrats, you're inside the ChatGPT UI. But if you're third party, you can just host your JSON file anywhere. It's literally a JSON file pointing at an API spec, right?[00:41:15] You host the JSON file anywhere, and then you just plug it into their text field, they validate it a little bit, and it's installed. So there is a third-party app store on day one. Yeah, that OpenTable plugin example is pretty sick. Yeah. So, what would you want as a developer that's missing?[00:41:41] I think we're in the golden age of being a developer, and I don't know if it's gonna go downhill quickly, or get better quickly, or if this is the end of all of it. Like, is OpenAI just gonna be where we do everything, and nothing else is gonna exist?[00:42:00] Okay, I know that's not the answer for sure, I'm just kind of joking, but this will obviously shut down a lot of companies. This is the app store moment, right? You and I remember the iPhone app store moment. Some people dropped everything to write apps, and some made it big, and a lot of people did not.[00:42:20] But the people who were earlier rather than later probably benefited from understanding the platform. Imagine: you've been a big React Native person for a long while; imagine if you had the chance to drop everything and be one of the first developers on a new app store.[00:42:35] That's pretty huge. Yeah, a hundred percent. But I'm wondering about the type of moat you'll be able to build with some of this stuff, because it seems like OpenAI will just continue adding more and more features directly into the platform. For very proprietary types of stuff,[00:42:54] it might make more sense; but if you want to build an app for the general public, it just seems like they'll end up integrating something similar directly into their platform for a lot of different ideas, such as this travel app that I sold. I have a feeling they'll have a way better version of that built directly into their platform sometime soon.[00:43:13] Q&A: ChatGPT Waitlist and Voice[00:43:13] Hey guys, can I ask, just to get a quick update: does anyone here have access to it yet? Is it open? Cause I signed up for the waitlist, but I haven't seen anything yet. Yeah, no, it's just a waitlist, like 90% of the stuff that people launch, you know. There are a few videos and demos, but yeah, it's just a waitlist.[00:43:31] Who knows? Thanks. OpenAI has been pretty good about getting people off waitlists, right? A lot of people got off the GPT-4 API waitlist the day after it launched. Mm-hmm. This one feels quite fully baked; I wouldn't be surprised if they started dropping access tomorrow.[00:43:50] So we'll see.
But you can start developing your third-party plugins today, because there are examples. The docs are like two paragraphs, but that's all I need, really. So I've been working on, and following, a lot of projects, and the one thing I don't see with ChatGPT is: we have Whisper, we have the APIs for ChatGPT.[00:44:13] Why are we not at the point where we're talking to this thing and it's talking back to us? I don't know how nobody's wrapped their head around that yet. It seems to me like, don't you wanna be able to say: hey computer, build me an app that does X, and it says okay, builds it for you, and talks back to you?[00:44:29] That'll probably be the first plugin I try to work on, but it's driving me a little nuts. Interesting. I like the voice interfaces, but sometimes it gets really long; some of the prompts get really long, and I'm like, I don't wanna talk that long.[00:44:46] Yeah. So I was messing with the system prompt, basically getting it to be like: hey look, I'm gonna be talking to you, so keep it condensed. I think the ideal interface for talking to it would be putting that at the system level, but also being able to type as well as speak to it; that's something I'm trying to work on.[00:45:08] And if we could do that with plugins, it'd be huge. Cuz I know there's already a Chrome extension that allows you to talk to it. Or I guess you could do it natively as well, but native stuff on iPhone and Android is not too good.[00:45:24] LangChain with Human in the Loop[00:45:24] Hey, you mentioned... hi, by the way. You mentioned the AI talking back to you as a user. So just today there was a new release of LangChain. I know it's not really a plugin, but it's probably the closest thing. And they added an Ask Human tool.[00:45:46] So now the model can ask you a question if it's not sure about something[00:45:55] during its chain of thought. Right, right. Oh, I would love that. It's probably not gonna do that; it's too confident. Yeah, I've seen a little bit about LangChain, but I haven't used it yet. Has anyone here used it?[00:46:15] Oh, it's all about it.[00:46:19] I did. I built LangChain UI, too. It's pretty nice. Especially when it first came out, the tooling was so rudimentary, but it's nice to be able to chain things together. I think the agent part is pretty interesting; I haven't used it myself because I didn't need it.[00:46:34] But yeah, there's a very big community. See, LangChain was very smart, right? They picked the open-source angle first, while others like Dust did the closed-source angle. Now they have indirect competition with ChatGPT, but LangChain still has that: it's open source, extensible, you own your agent.[00:46:55] Google vs Microsoft vs Apple[00:46:55] And they're doing business deals with OpenAI behind closed doors, right? So, a pretty smart strategic position, all things considered.
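The Ask Human tool mentioned above is simple to wire up. A minimal sketch against the LangChain Python API roughly as it existed around this recording; the question is a made-up example, and the exact imports may have moved in later releases:

```python
# Sketch: a LangChain agent that can pause and ask the human a question.
# Assumes OPENAI_API_KEY is set in the environment; the API surface here
# is the early-2023 LangChain one and may have changed since.
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["human"])  # the human-in-the-loop tool discussed above

agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

# If the model decides it lacks information mid-chain, it calls the
# "human" tool, which prompts on stdin and feeds your answer back in.
agent.run("What does my colleague want for lunch today?")
```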
[00:47:05] It's a little funny to me, isn't it, that Google just came out with Bard, right? And I don't know if you guys have messed with Bard at all... at least to me, it's another waitlist. Oh, okay. Yeah. To me it was a little underwhelming. I don't know if you've seen the screenshots going around;[00:47:30] someone tweeted that it was like guys in a boardroom, whoever's in a boardroom, just being like: s**t, we lost our first-mover advantage here. But it's just kind of funny to me that now, I guess, Microsoft's gonna have an app store, right? After everything... Microsoft dominated in the nineties and stuff, and then it was Apple, Apple, Apple. It's just kind of funny to me that it's gonna be Microsoft now, right?[00:47:49] Bard is to Google what Bing is to Microsoft. Totally. Yeah, a hundred percent, I agree with you. How the turntables, right?[00:47:57] Yeah. So for those of you who might have missed the earlier discussion: the one thing that OpenAI or Microsoft will not do is integrate with your Google Calendar. So the one saving grace that Google probably has is that it owns your workspace, right? Most of us have Google accounts, Gmail accounts.[00:48:14] When we work, we log into Gmail and Google, and use Google Docs and spreadsheets. So if Bard is smart, they will take advantage of that, and then slowly watch as everyone moves to Microsoft Office.[00:48:31] I think Apple should do a partnership with OpenAI, and basically with Microsoft, cause Google has the huge advantage of Android. I mean, having OpenAI on iDevices would be very useful; Siri is really bad, and if they integrate, I mean, they've won the world, I think.[00:49:00] So it would be hugely beneficial to Apple, and to Microsoft too if they integrate together, because Microsoft doesn't have any of the devices, and most ordinary people use an iPhone or an Android phone. So it would be a huge advantage for both of them. Anyway, I'm very curious to see what Apple ships next.[00:49:24] You know, everyone's shipping AI stuff, and then Apple was like: hey, look at our AR glasses. Yeah, but I mean, AR with the 3D models that are coming out... isn't Midjourney working on that? Like, their lab, I know, is building a 3D generative model. And I think that sort of stuff with AR is very... oh, is that public?[00:49:45] How did you know that? I don't know if it's public; I saw a tweet about it, I don't know, like a week ago. It is a semi-open secret in San Francisco, but I don't know if it's public. Yeah, I think I saw it in some context where they were talking about text-to-video, and they were like: well, we're doing our 3D modeling first.[00:50:02] So, my assumption is, and I don't work in the space yet (unless anyone's hiring, please, I'm looking for work), but it seems to me like Apple has their head on straight, and it might be that if they're gonna release these mixed-reality AR/VR glasses, the thing that makes the most sense to me is pairing them with generative 3D modeling.[00:50:24] It's like, you know, it would be cool to go to a coffee house or a bar.
And then, you know, when you see the graffiti in the bathroom, where people write sometimes funny stuff, sometimes the worst stuff you've ever read in your life, and you're like: what is going on when this person's going to the bathroom that they have this much hate?[00:50:38] It would be cool to have a component of that, so to speak, in the metaverse, right? So you put on your AR glasses, and it's like: oh cool, I can see a bulletin board here that exists in the physical world, but it's also augmented, right? That just seems to me like the logical next step.[00:50:57] Interesting. Well, we'll see when that happens. I recently got a Quest Pro, and my parents love it. And any tech that my parents like, I think, has real crossover appeal. You know, your conversation gave me an idea about the winners of every app store in the early days; Facebook had an app store, Apple had an app store, and the winners of an app store are games. What we need, yep,[00:51:24] is multiplayer. Everyone logging into ChatGPT and then playing a multiplayer game. NPCs. NPCs are gonna text you on your phone. That would be kind of cool.[00:51:40] ChatGPT Plugin Ideas[00:51:40] Actually, I was thinking... I don't know if it's gonna be games at first, though. It seems like games always push the envelope with tech.[00:51:47] Well, it's like pornography and games, right? But, I don't know... you mentioned your parents, and I was talking to my mom about this stuff, and I was like, you know, I'm seeing stuff that's just demos of: hey, take a picture of your fridge, and it'll tell you what you can make.[00:52:01] Or, you know, even talking to it and just being like: hey, here's what I ate today, how many calories did I eat? Or: what's my diet plan? Just things like that. And that's why I brought up talking to it using natural language and then having it be able to talk back to you.[00:52:17] I'm really surprised that they haven't implemented that yet, cuz it seems to me like that's a use case a lot of people would use it for, you know? Or if you could just call it on a phone, if you built a Twilio backend into it or something. It just boggles my mind why they haven't[00:52:35] put that feature in yet. Yeah. I really don't think it's gonna be too long before you're sitting there at work and you get a text or a call on your phone from an NPC: hey, our village is burning down, you need to come over here and help. Do you guys think there's gonna be different silos?[00:52:55] Like, you know, with Bard coming out, and people implementing GPT-3 and 4 now, I guess, into all their apps... do you think ChatGPT will have their store, and then Google will have their store? Do you think there'll be a clear victor here, and then, you know, it'll be like: okay, Google's apps, Google Docs or whatever, are part of ChatGPT's plugins, right?[00:53:20] Yeah, it's gonna be like crypto. Everybody's just gonna be fighting for the top.
You're gonna have the couple of dominant players, but then you're gonna have all the small guys who go up and down. Yeah, I feel like it's gonna be pretty similar to how crypto was. So we're gonna have some slurp juices is what you're telling me.[00:53:41] Yeah, boy. Nice, nice. I dig it.[00:53:46] Not an app store?[00:53:46] So maybe we aren't... tell me what you guys think about this, cuz maybe we aren't thinking about this right. Maybe this is not an app store. Cuz typically in an app store you go ahead and choose which plugins you want installed, like on a phone or what have you.[00:54:02] But the path forward seems like all the plugins are omnipresent. I don't know why Google isn't shitting their pants right now, cuz basically OpenAI could just force all the big companies to write plugins and then just be a single search box for everything. So imagine you wanna fly somewhere or you wanna book a hotel: we have the Expedia and booking.com plugins.[00:54:29] Both of those plugins get summoned, it shows you both sets of results, and then you can click through on whichever ones you want. And then, yeah, you charge them based on click-throughs. I think maybe we're just getting tripped up by the fact that you have to choose a plugin right now and only interact with that single plugin.[00:54:49] But I think the smart move forward would probably be to have all of them omnipresent, and then have this NLP layer higher up to summon the right plugin when need be. What do you guys think about that? Yeah, so that's like the LangChain thing. I haven't used LangChain yet, but from what I was reading, it sounds like that's kind of how I thought it worked.[00:55:12] But I don't know, can someone here enlighten me? I don't know how LangChain works.[00:55:21] LangChain and the Future of AI[00:55:21] Yeah, I don't know exactly how LangChain works either, but I think it's gonna be a two-way street. Everybody's gonna be making plugins with ChatGPT, and everybody's gonna be making ChatGPT plugins for other services as well. I think there's a whole bunch of people about to make a bunch of Jira plugins and stuff like that, so I think it's kind of gonna be a two-way street.[00:55:45] I dunno, is anyone else... this is super exciting to me. I haven't been this excited about the internet since probably the web 1.0 days. I hate Web 2.0. I'm glad that Spaces exist, but I hate Web 2.0; Web 3.0 I'm excited about, and I consider this part of Web 3.0.[00:56:04] But it's exciting, right? This is cool. I'm really stoked about the progress that's being made. The joke is that every day in AI is like way longer, right? We're telescoping very quickly. Yeah, telescoping and updating.
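Circling back to the "omnipresent plugins with an NLP layer on top" idea floated a moment ago: that layer is essentially a routing step. A toy sketch of the idea, not OpenAI's actual implementation: the plugin names and descriptions are made up, and it assumes the era's openai Python SDK with OPENAI_API_KEY set in the environment:

```python
# Toy "plugin router": ask the model which registered plugin fits a request.
# Hypothetical plugin registry; not OpenAI's real routing code.
import openai

PLUGINS = {
    "expedia": "Search and book flights and hotels.",
    "opentable": "Find and book restaurant reservations.",
    "calculator": "Do arithmetic and unit conversions.",
}

def route(user_request: str) -> str:
    menu = "\n".join(f"- {name}: {desc}" for name, desc in PLUGINS.items())
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Pick the single best tool for the user's request.\n"
                        f"Tools:\n{menu}\nReply with only the tool name."},
            {"role": "user", "content": user_request},
        ],
    )
    return resp["choices"][0]["message"]["content"].strip()

print(route("Find me a hotel in Rome next week"))  # expected: expedia
```

With all manifests "omnipresent," a host could run something like this before each turn and expose only the chosen plugin's operations to the model.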
[00:56:23] Yeah. You know, I noticed, maybe three years ago when I was working at AWS, that for about five or so years everything had been very stagnant, and there just wasn't a lot of exciting stuff happening. If you remember, all the DevRel advocates were creating tutorials around building your own CMS and your blog, and you saw that exact same tutorial given by hundreds of people over the course of a few years, because there just wasn't any cool s**t happening.[00:56:52] And then crypto and blockchain stuff kind of caught my attention, and I'm still excited by that stuff. And then this seems to be almost like being around when the iPhone was coming out and actually realizing how important it was. I think everyone now is seeing this and realizing how important it is.[00:57:13] And it's cool to be part of this moment as a software engineer. Yeah, go ahead. Oh, sorry, I was gonna say: I'm excited too, and I'm sure you guys saw the Alpaca stuff, right? I know they're doing DMCA stuff, but essentially someone's gonna train one of these models, and you're gonna be able to run this stuff offline.[00:57:35] I forget which one of the e/acc accelerationist people was talking about it, but it was like the "wharf in the flask": you've got the machine offline. So if you don't need internet access to reach the entirety of human knowledge, whatever's in the data set up until 2021 or whatever, that's gonna revolutionize everything.[00:57:57] That's insane to think about.[00:57:59] Yeah. Oh, well, we were speculating: you can run Python inside ChatGPT. Oh, really? Is that happening? I mean, it has a file system, and it has file storage and CPU and memory.[00:58:20] It's turtles all the way down. Turtles all the way down, man.[00:58:23] I think with the plugin system, if people can run their own models, like the LLaMA ones, with the same structure for plugins, you can see, going back to the metaverse thing, like in Snow Crash, where people built their own daemons. You know, it's like: I've got the daemon that kicks people out of the club, the Black Sun.[00:58:43] And you can see it in real life: I have a bunch of plugins that only I have, and I use them to make myself more productive, use them to make myself look like I'm working when I'm not working, responding to my emails and stuff like that. But I think OpenAI releasing this today makes it so much easier to start, because you don't have to worry about any of the infrastructure.[00:59:07] You just build the plugin, and then they run everything and you get the best model possible. But down the line, you know, I would love to walk around with my own, you know, Raspberry Pi or whatever on my wrist, kind of like in Fallout, and say: hey, I wanna do this, I wanna do that. I don't know, I don't think we're that far away, so I'm excited to keep building.[00:59:28] Shoot, the technology exists where you could make that now, but it'd be a little awkward to have
Canonical has issued an official edict: the approved Ubuntu remixes must remove Flatpak support as of the next release. Ring video doorbell informed a customer they were required to release footage to a police department with a signed warrant. All the footage from that customer's account was sent to the police. -- During The Show -- 01:37 Audio to text transcription - Dennis Mozilla DeepSpeech (https://github.com/mozilla/DeepSpeech) Kaldi (https://kaldi-asr.org/) There are others 04:29 Responding to Ep 305 Banking Issue - Nate May be ASN reputation May be IP block reputation Noah's workaround 12:00 Variety of Questions - John RustDesk AnyDesk Poor Man's VPN Simple Help Chrome Remote Desktop Low voltage can't kill you Stud finders tell you about AC power Start with a coat hanger Use Samba Log into the web interface Do Not use email Huragok Asked - Display Camera Feeds Rpisurv (https://github.com/SvenVD/rpisurv) Blue Iris 27:16 News Wire Armbian 23.02 Armbian (https://www.armbian.com/newsflash/armbian-23-02/) Nitrux 2.7 Its Foss (https://news.itsfoss.com/nitrux-2-7-release/) FFmpeg 6.0 Debug Point News (https://debugpointnews.com/ffmpeg-6-0/) GTK 4.10 Gnome Git Lab (https://gitlab.gnome.org/GNOME/gtk/-/blob/4.10.0/NEWS) Plasma 5.27.2 KDE (https://kde.org/announcements/plasma/5/5.27.2/) 4th Gen Xeon Support Canonical (https://canonical.com/blog/canonical-announces-support-for-the-4th-gen-intel-xeon-scalable-processors-with-intel-vran-boost) Linux Mint, New Features The Register (https://www.theregister.com/2023/02/01/mint_212/) Libre Office 7.5.1 Libre Office (https://www.libreoffice.org/download/release-notes/) WINE 8.3 Gaming On Linux (https://www.gamingonlinux.com/2023/03/wine-development-release-v8-3-is-out-now/) The WINE development release 8.3 is now out. Linux 6.3 RC1 Debug Point (https://www.debugpoint.com/linux-kernel-6-3-rc1/) Linux Foundation Open Source Value Report PR News Wire (https://www.prnewswire.com/news-releases/linux-foundation-research-shows-economic-value-of-open-source-software-rising-in-terms-of-benefits-vs-costs-301761106.html) Linux Foundation & OGA MoU Arc Web (https://www.arcweb.com/blog/collaboration-between-linux-foundations-lf-edge-open-grid-alliance-accelerate-edge-innovation) Microsoft Forces openSUSE Web Pro News (https://www.webpronews.com/opensuse-begins-enforcing-secure-boot-kernel-lockdown/) APT 27's New SysUpdate Bleeping Computer (https://www.bleepingcomputer.com/news/security/iron-tiger-hackers-create-linux-version-of-their-custom-malware/) MOSS will be Open Source Yahoo (https://news.yahoo.com/chinas-chatgpt-program-moss-made-205953007.html) 30:45 Canonical Drops Flatpak Linux Mint (https://linuxmint-user-guide.readthedocs.io/en/latest/snap.html) The Register (https://www.theregister.com/2023/02/23/ubuntu_remixes_drop_flatpak/) Whole point of spins is to solve different problems Limits the support requirements Linux packaging story The rest of the Linux ecosystem is going another way 40:30 Ring in the News Politico (https://www.politico.com/news/2023/03/07/privacy-loophole-ring-doorbell-00084979) The Story Police ask for limited footage User complies Police ask for more User says no Police get a warrant for ALL his cameras Don't store information you don't want leaked Should you be prevented from using technology because someone wants to abuse it?
Technology can be a threat 47:00 Flathub's New Web Experience 9 to 5 Linux (https://9to5linux.com/flathub-in-2023-new-web-experience-direct-app-uploads-and-more) 47:50 Audacious 4.3 9 to 5 Linux (https://9to5linux.com/audacious-4-3-audio-player-adds-pipewire-plugin-native-opus-decoder-and-more) 48:20 Announcements SCaLE Mar 9-12 Southeast LinuxFest June 9-11 LinuxFest Northwest Oct 20-22 Thank You Chris and JB! Michael Dominic Interview on the 21st Send Questions -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/328) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
FFmpeg gets new superpowers, Plasma's switch to Qt6 gets official; what you need to know. Plus we round up the top features coming to Linux 6.3.
On our first show in 2023, we rejoin our epic task of ranking all of humanity's software. Will we actually start putting things in order this week? Or will we get sidetracked by our love of dot matrix-printed banners? Or by reminiscing about the excitement of IRC channel takeover wars? Will Brad be flabbergasted that Will has never played Gorilla.bas? Will we ever finish this project? Listen on and find out! This week's show art courtesy of Blake Patterson. Support the Pod! Contribute to the Tech Pod Patreon and get access to our booming Discord, your name in the credits, and other great benefits! You can support the show at: https://patreon.com/techpod
The creator of the first assembler dies / Apple blocks micropayments in Telegram / Call of Duty takes only 72 MB on disk / Russia threatens to destroy Starlink / Liz Truss hacked / Easy FFmpeg spells. Sponsor: Save up to 40 cents per liter at BP. And if you're in the Canary Islands, with the Dino BP card: for every €30 of purchases at HiperDino stores, they give you €1 to refuel at BP. And vice versa: for every €30 at BP stations, they give you €1 for HiperDino. You always win. The creator of the first assembler dies / Apple blocks micropayments in Telegram / Call of Duty only 72 MB on disk / Russia threatens to destroy Starlink / Russia hacked Liz Truss / Easy spells for FFMPEG
GitHub steps in it this week, Microsoft's Linux distribution now runs on bare metal, FFmpeg gets IPFS support, and the odd thing going on with the kernel.
One of the pioneers of the web, VNC, Webcams, and more joins us; plus we'll update you on a few projects we love. Special Guest: Quentin Stafford-Fraser.