POPULARITY
www.eastvillageradio.com, www.brianturnershow.com

NIHILIST SPASM BAND - Destroy The Nations - No Record (Allied, 1968)
THE FALL - Jet Boy (Live Salisbury 5/7/99)
THE MARTHA'S VINEYARD FERRIES - Decorations - 7" (Ernest Jennings, 2025)
IZ - 阿肯 Aⱪen - BBC - 回忆 Memory (Old Heaven Books, 2025)
ERASER - Dinner Roll - Hideout (Siltbreeze, 2025)
SALTCHUNKJERRY / RAHIEM SUPREME - Take It Home - Dead Fingers Talk Raps (Riff Pillage, 2024)
ANMON - Moon - Deux (Yuku, 2025)
BIMBO HELI + KURT KOPTER - Ich Bin Dein Liebster Freund - Es Ist Ihr Recht, Besondres Zu Erwarten (cs, Red Avenger Productions, 1981)
CARDIACS - Susannah's Still Alive - 12" (The Alphabet Business Concern, 1988)
RICHARD DAWSON - Bullies - End of the Middle (Weird World, 2025)
QUINIE - Macaphee Turn the Cattle - Forefowk, Mind Me (Upset the Rhythm, 2025)
WEDNESDAY KNUDSEN & WILLIE LANE - Last Flight To Eden - V/A: Hello Sunshine: A Tribute To Relatively Clean Rivers (Raven Sings The Blues, 2025)
DUSTBREEDERS - Woodstock Blind Fest. - No Rain (cs, Tanzprocesz, 2025)
CHOUK BWA & THE ANGSTRÖMERS - Sala - Ayiti Kongo Dub #2 (Les Disques Bongo Joe, 2022)
PINK SIIFU - V12'!Hml'! (feat. Conquest Tony Phillips & Liv.e) - Black'!Antique (NL, 2025)
BALLI MARRAFFA BALLI TRIO - 8 Bit Ra - 8 Bit Jazz Furlough (Sonic Belligeranza, 2024)
CREATIVE CONSTRUCTION COMPANY - No More White Gloves Pt. 2 - Vol. 2 (Muse, 1976)
POBORSK - Laser Doom - Vaag & Poborsk (Evel, 2025)
FREIWILLIGE SELBSTKONTROLLE - Mein Erster Freund - Stürmer (Zickzack, 1982)
EXO - Ladybug - Exo (La Vida Es En Mus, 2025)
YUASA-EXIDE - Spit - V/A: Battle For L.A. (cs, See/Saw, 2025)
DUGLASETTES - Belshill's Son - V/A: The Waaaah! CD (Bring On Bull, 1991)
DAN MELCHIOR UND DAS MENACE - Pyramids - Natural Anxiety (cs, NL, 2025)
BURNIN RED IVANHOE - 2nd Floor, Croydon - W.W.W. (Sonet, 1971)
ASTEMIR MARSHENKULOV - Laje - Xurey (Mimizu Izuru, 2025)
BURNIN RED IVANHOE - Avez-Vous Kaskelainen? - W.W.W. (Sonet, 1971)
The Helix item: https://www.entrepreneur.com/es/tecnologia/helix-el-robot-de-figure-que-promete-revolucionar-las/487468
The Perplexity item: https://www.perplexity.ai/comet
The Pistón item: https://amzn.to/41tbXEi
One year after its spin-off from Sodexo, Aurélien Sonet, CEO of Pluxee, the specialist in meal-voucher and gift-voucher benefits, was the guest on the Ecorama show of January 29, 2025, presented by David Jacquot on Boursorama.com. Topics covered included the battle in the employee-benefits market, the evolution of meal-voucher regulation in France, and the company's solid financial results. Hosted by Audion. Visit https://www.audion.fm/fr/privacy-policy for more information.
Last year, I said don't touch the free version of ChatGPT with a 10-foot pole. Is it still THAT bad? A TON has changed since OpenAI's '12 Days of Shipmas' in December. So is the free version of ChatGPT better? Or are the new paid-only features TOO good to pass up?

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan questions on ChatGPT
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
1. ChatGPT Features and Plans
2. Alternatives to ChatGPT
3. Shift in OpenAI's Goals
4. Updates from OpenAI
5. Model Usage and Limits
6. ChatGPT Advice and Recommendations

Timestamps:
03:00 Daily AI news
06:30 ChatGPT free vs. paid
08:18 Free ChatGPT version significantly improved model performance.
10:00 Free ChatGPT is decent but not superior.
15:03 They aim to replace Google with AI chatbots.
17:22 Custom instructions streamline limited ChatGPT use.
20:21 Canvas Mode enables inline AI document editing.
25:44 Addressing ChatGPT questions; submit queries anytime.
28:39 Google Veo 2 superior; limited access available now.
30:18 Free ChatGPT improved, but Plus requires payment.
34:31 Google AI Studio: powerful, free, data-trained trade-off.
37:40 Free ChatGPT exists, full features require payment.
40:48 Google AI Studio best for literature reviews.
44:33 Free version useful, but paid plan recommended.

Keywords: ChatGPT, OpenAI, Everyday AI podcast, AI developments, AI news, AI infrastructure, AI use risks, AI applications, large language models, ChatGPT plans, GPT-4, ChatGPT free version, ChatGPT Pro, ChatGPT Plus, AI for business, AI data security, Gemini 2, Claude 3.5 Sonnet, GPT-4o, Anthropic, Google AI Studio, Meta Llama, Microsoft Copilot, Typing Mind, Microsoft CoreAI, model training, DALL-E Image Generator, Canvas Mode, AI chat tools, context windows

Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Program 5x68, with Jordi Cubino. Today we review the biography of Paul Verlaine, one of the precursors of the French Symbolist movement.
Marshak's translation. These verses (Shakespeare's Sonnet 90) were previously set to music by Tikhon Khrennikov and performed by A. Pugacheva. Now the AI Suno continues the experiment.

If you must stop loving me, then do it now,
Now, when the whole world is at odds with me.
Be the bitterest of my losses,
But not the last drop of grief.
But not the last drop of grief.
And if I manage to overcome this sorrow,
Do not strike at me from ambush.
Let the long night not give birth
To a dreary morning, a morning without comfort.
Let the long night not give birth
Let the long night not give birth
Let the long night not give birth
To a dreary morning, a morning without comfort.
Leave me! But no, not at the last moment,
When I am weakened by petty troubles.
Leave me, so that you may grasp once more
That this grief is more painful than all misfortunes.
That there are no misfortunes, only one woe,
That there are no misfortunes, only one woe:
To lose my love, to lose my love,
To lose my love forever.
Leave me, but only not now.
The full schedule for Latent Space LIVE! at NeurIPS has been announced, featuring Best of 2024 overview talks for the AI Startup Landscape, Computer Vision, Open Models, Transformers Killers, Synthetic Data, Agents, and Scaling, and speakers from Sarah Guo of Conviction, Roboflow, AI2/Meta, Recursal/Together, HuggingFace, OpenHands and SemiAnalysis. Join us for the IRL event/livestream! Alessio will also be holding a meetup at AWS Re:Invent in Las Vegas this Wednesday. See our new Events page for dates of AI Engineer Summit, Singapore, and World's Fair in 2025. LAST CALL for questions for our big 2024 recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!

When we first observed that GPT Wrappers are Good, Actually, we did not even have Bolt on our radar. Since we recorded our Anthropic episode discussing building Agents with the new Claude 3.5 Sonnet, Bolt.new (by StackBlitz) has easily cleared the $8m ARR bar, repeating and accelerating its initial $4m feat.

There are very many AI code generators and VS Code forks out there, but Bolt probably broke through initially because of its incredible zero-shot, low-effort app generation.

But as we explain in the pod, Bolt also emphasized deploy (Netlify) / backend (Supabase) / fullstack capabilities on top of StackBlitz's existing WebContainer full-WASM-powered-developer-environment-in-the-browser tech. Since then, the team has been shipping like mad (with weekly office hours), with bugfixing, full screen, multi-device, long context, and diff-based edits (using speculative decoding like we covered in Inference, Fast and Slow).

All of this has captured the imagination of low/no-code builders like Greg Isenberg and many others on YouTube/TikTok/Reddit/X/LinkedIn etc.

Just as with Fireworks, our relationship with Bolt/StackBlitz goes a bit deeper than normal - swyx advised the launch and got a front row seat to this epic journey, as well as demoed it with Realtime Voice at the recent OpenAI Dev Day. So we are very proud to be the first/closest to tell the full open story of Bolt/StackBlitz!

Flow Engineering + Qodo/AlphaCodium Update

In year 2 of the pod we have been on a roll getting former guests to return as guest cohosts (Harrison Chase, Aman Sanger, Jon Frankle), and it was a pleasure to catch Itamar Friedman back on the pod, giving us an update on all things Qodo and Testing Agents since our last catchup a year and a half ago.

Qodo (they renamed in September) went viral in early January this year with AlphaCodium (paper here, code here), beating DeepMind's AlphaCode with high efficiency:

With a simple problem-solving code agent (a sketch of the full loop follows the list):
* The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.
* Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output.
* The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness.
* Then, it generates more diverse tests for the problem, covering cases not part of the original public tests.
* Iteratively, pick a solution, generate the code, and run it on a few test cases.
* If the tests fail, improve the code and repeat the process until the code passes every test.
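To make that loop concrete, here is a minimal Python sketch of the flow above. This is our illustration rather than Qodo's actual implementation (their real system prompts live in the AlphaCodium repo); `call_llm` is a hypothetical stand-in for whatever model API you use, and the AI-generated extra tests are left unparsed for brevity.

```python
# Minimal sketch of an AlphaCodium-style generate-test-fix loop.
import os
import subprocess
import sys
import tempfile
from dataclasses import dataclass

@dataclass
class TestCase:
    stdin: str
    expected_stdout: str

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your model of choice and return its reply."""
    raise NotImplementedError

def run_candidate(code: str, test: TestCase, timeout: float = 5.0) -> tuple[bool, str]:
    """Run candidate code as a script, feeding the test's stdin and checking stdout."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], input=test.stdin,
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.stdout.strip() == test.expected_stdout.strip(), proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timeout"
    finally:
        os.unlink(path)

def solve(problem: str, public_tests: list[TestCase], max_iters: int = 5) -> str:
    # 1. Reason about the problem in bullet points (goal, inputs, outputs, rules).
    reflection = call_llm(f"Summarize goal/inputs/outputs/rules/constraints as bullets:\n{problem}")
    # 2. Explain why each public test's input leads to its output.
    call_llm(f"Explain why these inputs produce these outputs:\n{public_tests}\n{reflection}")
    # 3. Draft two to three candidate solutions in prose and rank them.
    plan = call_llm(f"Propose 3 solutions; rank by correctness, simplicity, robustness:\n{reflection}")
    # 4. Generate more diverse tests beyond the public ones (not parsed in this sketch).
    call_llm(f"Write additional edge-case tests for:\n{problem}")
    # 5-6. Implement the top-ranked plan, then iterate until every test passes.
    code = call_llm(f"Implement the best plan as a Python script:\n{plan}")
    for _ in range(max_iters):
        failures = [(t, err) for t in public_tests
                    for ok, err in [run_candidate(code, t)] if not ok]
        if not failures:
            return code  # all tests pass
        code = call_llm(f"These tests failed: {failures}\nFix this code:\n{code}")
    return code
```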
swyx has previously written similar thoughts on types vs tests for putting bounds on program behavior, but AlphaCodium extends this to AI-generated tests and code.

More recently, Itamar has also shown that AlphaCodium's techniques also extend well to the o1 models, making Flow Engineering a useful technique to improve code model performance on every model. This is something we see AI Engineers uniquely well positioned to do compared to ML Engineers/Researchers.

Full Video Podcast

Like and subscribe!

Show Notes
* Itamar
* Qodo
* First episode
* Eric
* Bolt
* StackBlitz
* Thinkster
* AlphaCodium
* WebContainers

Chapters
* 00:00:00 Introductions & Updates
* 00:06:01 Generic vs. Specific AI Agents
* 00:07:40 Maintaining vs Creating with AI
* 00:17:46 Human vs Agent Computer Interfaces
* 00:20:15 Why Docker doesn't work for Bolt
* 00:24:23 Creating Testing and Code Review Loops
* 00:28:07 Bolt's Task Breakdown Flow
* 00:31:04 AI in Complex Enterprise Environments
* 00:41:43 AlphaCodium
* 00:44:39 Strategies for Breaking Down Complex Tasks
* 00:45:22 Building in Open Source
* 00:50:35 Choosing a product as a founder
* 00:59:03 Reflections on Bolt Success
* 01:06:07 Building a B2C GTM
* 01:18:11 AI Capabilities and Pricing Tiers
* 01:20:28 What makes Bolt unique
* 01:23:07 Future Growth and Product Development
* 01:29:06 Competitive Landscape in AI Engineering
* 01:30:01 Advice to Founders and Embracing AI
* 01:32:20 Having a baby and completing an Iron Man

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.
Swyx [00:00:12]: Hey, and today we're still in our sort of makeshift in-between studio, but we're very delighted to have a former returning guest host, Itamar. Welcome back.
Itamar [00:00:21]: Great to be here after a year or more. Yeah, a year and a half.
Swyx [00:00:24]: You're one of our earliest guests on Agents. Now you're CEO co-founder of Qodo. Right. Which has just been renamed. You also raised a $40 million Series A, and we can get caught up on everything, but we're also delighted to have our new guest, Eric. Welcome.
Eric [00:00:42]: Thank you. Excited to be here. Should I say Bolt or StackBlitz?
Swyx [00:00:45]: Like, is it like its own company now or?
Eric [00:00:47]: Yeah. Bolt's definitely bolt.new. That's the thing that we're probably the most known for, I imagine, at this point.
Swyx [00:00:54]: Which is ridiculous to say because you were working at StackBlitz for so long.
Eric [00:00:57]: Yeah. I mean, within a week, we were doing like double the amount of traffic. And StackBlitz had been online for seven years, and we were like, what? But anyways, yeah. So we're StackBlitz, the company behind bolt.new. If you've heard of bolt.new, that's our stuff. Yeah.
Swyx [00:01:12]: Yeah.
Itamar [00:01:13]: Excellent. I see, by the way, that the founder mode, you need to know to capture opportunities. So kudos on doing that, right? You're working on some technology, and then suddenly you can exploit that to a new world. Yeah.
Eric [00:01:24]: Totally. And I think, well, not to jump, but 100%, I mean, a couple of months ago, we had the idea for Bolt earlier this year, but we haven't really shared this too much publicly.
But we actually had tried to build it with some of those state-of-the-art models back in January, February, you can kind of imagine which, and they just weren't good enough to actually do the code generation where the code was accurate and it was fast and whatever have you without a ton of like RAG, but then there was like issues with that. So we put it on the shelf and then we got kind of a sneak peek of some of the new models that have come out in the past couple of months now. And so once we saw that, once we actually saw the code gen from it, we were like, oh my God, like, okay, we can build a product around this. And so that was really the impetus of us building the thing. But with that, it was StackBlitz, the core StackBlitz product the past seven years has been an IDE for developers. So the entire user experience flow we've built up just didn't make sense. And so when we kind of went out to build Bolt, we just thought, you know, if we were inventing our product today, what would the interface look like given what is now possible with the AI code gen? And so there's definitely a lot of conversations we had internally, but you know, just kind of when we logically laid it out, we were like, yeah, I think it makes sense to just greenfield a new thing and let's see what happens. If it works great, then we'll figure it out. If it doesn't work great, then it'll get deleted at some point. So that's kind of how it actually came to be.
Swyx [00:02:49]: I'll mention your background a little bit. You were also founder of Thinkster before you started StackBlitz. So both of you are second-time founders. Both of you have sort of re-founded your company recently. Yours was more of a rename. I think a slightly different direction as well. And then we can talk about both. Maybe just chronologically, should we get caught up on where Qodo is first and then, you know, just like what people should know since the last pod? Sure.
Itamar [00:03:12]: The last pod was two months after we launched and we basically had the vision that we talked about. The idea that software development is about specification, test and code, etc. We are more on the testing part as in essence, we think that if you solve testing, you solve software development. The beautiful chart that we'll put up on screen. And testing is a really big field, like there are many dimensions, unit testing, the level of the component, how big it is, how large it is. And then there is like different types of testing, is it regression or smoke or whatever. So back then we only had like one IDE extension with unit tests as in focus. One and a half years later, our first IDE extension supports more types of testing and is context-aware. We index local repos, but also 10,000s of repos for Fortune 500 companies. We have another agent, another tool: PR-Agent is the open source one and the commercial one is Qodo Merge. And then we have another open source called Cover-Agent, which is not yet a commercial product, coming very soon. It's very impressive. It could be that already people are approving automated pull requests that they are not even aware of in really big open sources. So once we have enough of these, we will also launch another agent. So for the first one and a half years, what we did is grew in our offering and mostly on the side of, does this code actually work: testing, code review, et cetera. And we believe that's the critical milestone that needs to be achieved to actually have the AI engineer for enterprise software.
And then like the first year was everything bottom-up, getting to 1 million installations. 2024, that was 2023; 2024 was starting to monetize, to feel like how it is to make the first buck. So we did the teams offering, it went well with thousands of teams, et cetera. And then we started like just a few months ago to do enterprise with everything you need, which is a lot of things that discussed in the last post that was just released by Qodo. So that's how we call it at Qodo. Just opening the brackets, our company name was CodiumAI, and we renamed to Qodo, and we call our models Qodo as well. So back to my point, so we started the enterprise motion and already have multiple Fortune 100 companies. And then with that, we raised a Series A of $40 million. And what's exciting about it is that enables us to develop more agents. That's our focus. I think it's very different. We're not coming very soon with an IDE or something like that.
Swyx [00:06:01]: You don't want to fork this code?
Itamar [00:06:03]: Maybe we'll fork JetBrains or something just to be different.
Swyx [00:06:08]: I noticed that, you know, I think the promise of general-purpose agents has kind of died. Like everyone is doing kind of what you're doing. There's Qodo Gen, Qodo Merge, and then there's a third one. What's the name of it?
Itamar [00:06:17]: Yeah. Qodo Cover. Which is like a commercial version of Cover-Agent. It's coming soon.
Swyx [00:06:23]: Yeah. It's very similar with Factory AI, also doing like droids. They all have special purpose doing things, but people don't really want general-purpose agents. Right. The last time you were here, we talked about AutoGPT, the biggest thing of 2023. This year, not really relevant anymore. And I think it's mostly just because when you give me a general-purpose agent, I don't know what to do with it.
Eric [00:06:42]: Yeah.
Itamar [00:06:43]: I totally agree with that. We're seeing it for a while and I think it will stay like that despite the computer use, et cetera, that supposedly can just replace us. You can just like prompt it to be, hey, now be a QA or be a QA person or a developer. I still think that there's a few reasons why you see like a dedicated agent. Again, I'm a bit more focused, like my head is more on complex software for big teams and enterprise, et cetera. And even think about permissions and what are the data sources and just the same way you manage permissions for users. Developers, you probably want to have dedicated guardrails and dedicated approvals for agents. I intentionally like touched a point not many people think about. And of course, then what you can think of, like maybe there's different tools, tool use, et cetera. But just the first point by itself is a good reason why you want to have different agents.
Alessio [00:07:40]: Just to compare that with Bolt.new, you're almost focused on like the application is very complex and now you need better tools to kind of manage it and build on top of it. On Bolt.new, it's almost like I was using it the other day. There's basically like, hey, look, I'm just trying to get started. You know, I'm not very opinionated on like how you're going to implement this. Like this is what I want to do. And you build a beautiful app with it.
What people ask as the next step, you know, going back to like the general versus like specific, have you had people say, hey, you know, this is great to start, but then I want a specific Bolt.new dot whatever else to do a more vertical integration and kind of like development or what's the, what do people say?
Eric [00:08:18]: Yeah. I think, I think you kind of hit the, hit it head on, which is, you know, kind of the way that we've, we've kind of talked about internally is it's like people are using Bolt to go from like 0.0 to 1.0, like that's like kind of the biggest unlock that Bolt has versus most other things out there. I mean, I think that's kind of what's, what's very unique about Bolt. I think the, you know, the working on like existing enterprise applications is, I mean, it's crazy important because, you know, there's a, you look, when you look at the Fortune 500, I mean, these code bases, some of these have been around for 20, 30 plus years. And so it's important to be going from, you know, 101.3 to 101.4, et cetera. I think for us, so what's been actually pretty interesting is we see there's kind of two different users for us that are coming in and it's very distinct. It's like people that are developers already. And then there's people that have never really written software and more if they have, it's been very, very minimal. And so in the first camp, what these developers are doing, like to go from zero to one, they're coming to Bolt and then they're ejecting the thing to GitHub or just downloading it and, you know, opening Cursor, like whatever to, to, you know, keep iterating on the thing. And sometimes they'll bring it back to Bolt to like add in a huge piece of functionality or something. Right. But for the people that don't know how to code, they're actually just, they, they live in this thing. And that was one of the weird things when we launched is, you know, within a day of us being online, one of the most popular YouTube videos, and there's been a ton since, which was, you know, there's like, oh, Bolt is the Cursor killer. And I originally saw the headlines and I was like, thanks for the views. I mean, I don't know. This doesn't make sense to me. That's not, that's not what we kind of thought.
Swyx [00:09:44]: It's how YouTubers talk to each other. Well, everything kills everything else.
Eric [00:09:47]: Totally. But what blew my mind was that there was any comparison because it's like Cursor is a, is a local IDE product. But when, when we actually kind of dug into it and we, and we have people that are using our product saying this, I'm not using Cursor. And I was like, what? And it turns out there are hundreds of thousands of people that we have seen that were using Cursor and trying to build apps with that where they're not traditional software developers, but they're heavily leaning on the AI. And as you can imagine, it is very complicated, right? To do that with Cursor. So when Bolt came out, they're like, wow, this thing's amazing because it kind of inverts the complexity where it's like, you know, it's not an IDE, it's, it's a, it's a chat-based sort of interface that we have. So that's kind of the split, which is rather interesting. We've had like the first startups now launch off of Bolt entirely where this, you know, tomorrow I'm doing a live stream with this guy named Paul, who he's built an entire CRM using this thing and you know, with backend, et cetera.
And people have made their first money on the internet period, you know, launching this with Stripe or whatever have you. So that's, that's kind of the two main, the two main categories of folks that we see using Bolt though.
Itamar [00:10:51]: I agree that I don't understand the comparison. It doesn't make sense to me. I think like we have like two types of families of tools. One is like we re-imagine the software development. I think Bolt is there and I think like a Cursor is more like an evolution of what we already have. It's like taking the IDE and it's, it's amazing and it's okay, let's, let's adapt the IDE to an era where LLMs can do a lot for us. And Bolt is more like, okay, let's rethink everything totally. And I think we see a few tools there, like maybe Vercel's v0 and maybe Repl.it in that area. And then in the area of let's expedite, let's change, let's, let's progress with what we already have. You can see Cursor and Qodo, but we're different between ourselves, Cursor and Qodo, but definitely I think that comparison doesn't make sense.
Alessio [00:11:42]: And just to set the context, this is not a Twitter demo. You've made $4 million of revenue in four weeks. So this is, this is actually working, you know, it's not a, what, what do you think that is? Like, there's been so many people demoing coding agents on Twitter and then it doesn't really work. And then you guys were just like, here you go, it's live, go use it, pay us for it. You know, is there anything in the development that was like interesting and maybe how that compares to building your own agents?
Eric [00:12:08]: We had no idea, honestly, like we, we, we've been pretty blown away and, and things have just kind of continued to grow faster since then. We're like, oh, today is week six. So I, I kind of came back to the point you just made, right, where it's, you, you kind of outlined, it's like, there's kind of this new market of like kind of rethinking the software development and then there's heavily augmenting existing developers. I think that, you know, both of which are, you know, AI code gen being extremely good, it's allowed existing developers, it's allowing existing developers to crank out software far faster than they could have ever before, right? It's like the ultimate power tool for an existing developer. But this code gen stuff is now so good. And then, and we saw this over the past, you know, from the beginning of the year when we tried to first build, it's actually lowered the barrier to people that, that aren't traditionally software engineers. But the kind of the key thing is if you kind of think about it from, imagine you've never written software before, right? My co-founder and I, he and I grew up down the street from each other in Chicago. We learned how to code when we were 13 together and we've been building stuff ever since. And this is back in like the mid 2000s or whatever, you know, there was nothing for free to learn from online on the internet and how to code. For our 13th birthdays, we asked our parents for, you know, O'Reilly books cause you couldn't get this at the library, right? And so instead of like an Xbox, we got, you know, programming books. But the hardest part for everyone learning to code is getting an environment set up locally, you know?
And so when we built StackBlitz, like kind of the key thesis, like seven years ago, the insight we had was that, hey, it seems like the browser has a lot of new APIs like WebAssembly and service workers, et cetera, where you could actually write an operating system that ran inside the browser that could boot in milliseconds. And you, you know, basically there's this missing capability of the web. Like the web should be able to build apps for the web, right? You should be able to build the web on the web. Every other platform has that: Visual Studio for Windows, Xcode for Mac. The web has no built-in primitive for this. And so just like our built-in kind of like nerd instinct on this was like, that seems like a huge hole and it's, you know, it will be very valuable or like, you know, very valuable problem to solve. So if you want to set up that environment, you know, this is what we spent the past seven years doing. And the reality is existing developers have it running locally. They already know how to set up that environment. So the problem isn't as acute for them. When we put Bolt online, we took that technology called WebContainer and married it with these, you know, state-of-the-art frontier models. And the people that have the most pain with getting stuff set up locally is people that don't code. I think that's been, you know, really the big explosive reason is no one else has been trying to make dev environments work inside of a browser tab, you know, for the past, well, since ever, other than basically our company, largely because there wasn't an immediate demand or need. So I think we kind of find ourselves at the right place at the right time. And again, for this market of people that don't know how to write software, you would kind of expect that you should be able to do this without downloading something to your computer in the same way that, hey, I don't have to download Photoshop now to make designs because there's Figma. I don't have to download Word because there's, you know, Google Docs. They're kind of looking at this as that sort of thing, right? Which was kind of the, you know, our impetus and kind of vision from the get-go. But you know, the code gen, the AI code gen stuff that's come out has just been, you know, an order of magnitude multiplier on how magic that is, right? So that's kind of my best distillation of like, what is going on here, you know?
Alessio [00:15:21]: And you can deploy too, right?
Eric [00:15:22]: Yeah.
Alessio [00:15:23]: Yeah.
Eric [00:15:24]: And so that's, what's really cool is it's, you know, we have deployment built in with Netlify and this is actually, I think, Sean, you actually built this at Netlify when you were there. Yeah. It's one of the most brilliant integrations actually, because, you know, effectively the API that Sean built, maybe you can speak to it, but like as a provider, we can just effectively give files to Netlify without the user even logging in and they have a live website. And if they want to keep, hold onto it, they can click a link and claim it to their Netlify account. But it basically is just this really magic experience because when you come to Bolt, you say, I want a website. Like my mom, 70, 71 years old, made her first website, you know, on the internet two weeks ago, right? It was about her nursing days.
Swyx [00:16:03]: Oh, that's fantastic though. It wouldn't have been made.
Eric [00:16:06]: A hundred percent.
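For reference, the "files in, live URL out" flow described here maps onto Netlify's zip-deploy endpoint. A minimal Python sketch, assuming an ordinary personal access token; the anonymous claim-link flow Bolt uses is a partner integration whose exact mechanics we are not reproducing here, so treat the details as illustrative:

```python
# Rough sketch of a zip deploy in the spirit of what Eric describes: hand
# Netlify a zip of static files and get back a live URL. Endpoint per
# Netlify's documented zip deploy; token handling here is an assumption.
import io
import zipfile
import requests

def deploy_zip(files: dict[str, str], token: str) -> str:
    """Zip `files` ({path: contents}) in memory and create a new Netlify site."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for path, contents in files.items():
            zf.writestr(path, contents)
    resp = requests.post(
        "https://api.netlify.com/api/v1/sites",  # zip deploy: new site per call
        headers={
            "Content-Type": "application/zip",
            "Authorization": f"Bearer {token}",
        },
        data=buf.getvalue(),
    )
    resp.raise_for_status()
    return resp.json()["ssl_url"]  # live URL for the freshly deployed site

if __name__ == "__main__":
    url = deploy_zip({"index.html": "<h1>Hello from the browser!</h1>"}, token="...")
    print("Deployed to", url)
```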
Cause even in, you know, when we've had a lot of people building personal, like deeply personal stuff, like in the first week we launched this, the sales guy from the East Coast, you know, replied to a tweet of mine and he said, thank you so much for building this to your team. His daughter has a medical condition and so for her to travel, she has to like line up donors or something, you know, so ahead of time. And so he actually used Bolt to make a website to do that, to actually go and send it to folks in the region she was going to travel to ahead of time. I was really touched by it, but I also thought like, why, you know, why didn't he use like Wix or Squarespace? Right? I mean, this is, this is a solved problem, quote unquote, right? And then when I thought, I actually use Squarespace for my, for my, uh, the wedding website for my wife and I, like back in 2021, so I'm familiar, you know, it was, it was faster. I know how to code. I was like, this is faster. Right. And I thought back and I was like, there's a whole interface you have to learn how to use. And it's actually not that simple. There's like a million things you can configure in that thing. When you come to Bolt, there's a, there's a text box. You just say, I need a, I need a wedding website. Here's the date. Here's where it is. And here's a photo of me and my wife, put it somewhere relevant. It's actually the simplest way. And that's what my, when my mom came, she said, uh, I'm Pat Simons. I was a nurse in the seventies, you know, and like, here's the things I did and a website came out. So coming back to why is this such a, I think, why are we seeing this sort of growth? It's, this is the simplest interface I think maybe ever created to actually build and deploy a website. And then that website, my mom made, she's like, okay, this looks great. And there's, there's one button, you just click it, deploy, and it's live and you can buy a domain name, attach it to it. And you know, it's as simple as it gets, it's getting even simpler with some of the stuff we're working on. But anyways, so that's, it's, it's, uh, it's been really interesting to see some of the usage like that.
Swyx [00:17:46]: I can offer my perspective. So I, you know, I probably should have disclosed a little bit that, uh, I'm a, uh, StackBlitz investor.
Alessio [00:17:53]: Canceled the episode. I know, I know. Don't play it now. Pause.
swyx: Eric actually reached out to show me Bolt before the launch. And we, you know, we talked a lot about, like, the framing of what we're going to talk about, how we marketed the thing, but also, like, what we're... So that's what Bolt was going to need, like a whole sort of infrastructure.
swyx: Netlify, I was a maintainer but I won't take claim for the anonymous upload. That's actually the origin story of Netlify. We can have Matt Biilmann talk about it, but that was [00:18:00] how Netlify started. You could drag and drop your zip file or folder from your desktop onto a website, it would have a live URL with no sign in.
swyx: And so that was the origin story of Netlify. And it just persists to today. And it's just like, it's really nice, interesting that both Bolt and Cognition Devin and a bunch of other sort of agent-type startups, they all use Netlify to deploy because of this one feature.
They don't really care about the other features.
swyx: But, but just because it's easy for computers to use and talk to it, like if you build an interface for computers specifically, that it's easy for them to navigate, then they will be used in agents. And I think that's a learning that a lot of developer tools companies are having. That's my Bolt launch story, and now I've said all that stuff.
swyx: And I just wanted to come back to, like, the WebContainers thing, right? Like, I think you put a lot of weight on the technical moats. I think you also are just like, very good at product. So you've, you've like, built a better agent than a lot of people, the rest of us, including myself, who have tried to build these things, and we didn't get as far as you did.
swyx: Don't shortchange yourself on product. But I think specifically [00:19:00] on, on infra, on like the sandboxing, like this is a thing that people really want. Alessio has backed E2B, which we'll have on at some point, talking about like the sort of the serverful side. But yours is, you know, inside of the browser, serverless.
swyx: It doesn't cost you anything to serve one person versus a million people. It doesn't, doesn't cost you anything. I think that's interesting. I think in theory, we should be able to like run tests because you can run the full backend. Like, you can run Git, you can run Node, you can run maybe Python someday.
swyx: We talked about this. But ideally, you should be able to have a fully agentic loop, running code, seeing the errors, correcting code, and just kind of self-healing, right? Like, I mean, isn't that the dream?
Eric: Totally.
swyx: Yeah,
Eric: totally. At least in Bolt, we've got, we've got a good amount of that today. I mean, there's a lot more for us to do, but one of the nice things, because like in WebContainer, you know, there's a lot of kind of stuff you go Google like, you know, turn Docker container into WASM.
Eric: You'll find a lot of stuff out there that will do that. The problem is it's very big, it's slow, and that ruins the experience. And so what we ended up doing is just writing an operating system from [00:20:00] scratch that was just purpose-built to, you know, run in a browser tab. And the reason being is, you know, Docker-to-WASM things will give you an image that's like 60 to 100 megabytes, you know, maybe more, you know, and our, our OS, you know, kind of clocks in, I think, I think we're in like a, maybe, maybe a megabyte or less or something like that.
Eric: I mean, it's, it's, you know, really, really, you know, stripped down.
swyx: So basically the task involved is, I understand, mapping every single Linux call to some kind of WebAssembly implementation,
Eric: but more or less, and, and then there's a lot of things actually, like when you're looking at a dev environment, there's a lot of things that you don't need that a traditional OS is gonna have, right?
Eric: Like, you know, audio drivers, or like, there's just like, there's just tons of things. Oh, yeah. Right. Yeah. That goes. Yeah. You can just kind, you can, you can kind of toss them. Or alternatively, what you can do is you can actually be the nice thing.
And this is, this kind of comes back to the origins of browsers, which is, you know, they're, they're at the beginning of the web and, you know, the late nineties, there was two very different kind of visions for the web where Alan Kay vehemently [00:21:00] disagreed with the idea that it should be document-based, which is, you know, Tim Berners-Lee, you know, that, and that's kind of what ended up winning, winning was this document-based kind of browsing documents on the web thing.
Eric: Alan Kay, he's got this like very famous quote where he said, you know, you want web browsers to be mini operating systems. They should download little mini binaries and execute with like a little mini virtualized operating system in there. And what's kind of interesting about the history, not to geek out on this aspect, what's kind of interesting about the history is both of those folks ended up being right.
Eric: Documents were actually the pragmatic way that the web worked. Was, you know, became the most ubiquitous platform in the world to the degree now that this is why WebAssembly has been invented is that we're doing, we need to do more low-level things in a browser, same thing with WebGPU, et cetera. And so all these APIs, you know, to build an operating system came to the browser.
Eric: And that was actually the realization we had in 2017 was, holy heck, like you can actually, you know, service workers, which were designed for allowing your app to work offline. That was the kind of the key one where it was like, wait a second, you can actually now run web servers within a [00:22:00] browser, like you can run a server that you open up.
Eric: That's wild. Like full Node.js. Full Node.js. Like that capability. Like, I can have a URL that's programmatically controlled by a web application itself, boom. Like the web can build the web. The primitive is there. Everyone at the time, like we talked to people that like worked on, you know, Chrome and V8 and they were like, uhhhh.
Eric: You know, like I don't know. But it's one of those things you just kind of have to go do it to find out. So we spent a couple of years, you know, working on it and yeah. And, and, and got to work in back in 2021 is when we kind of put the first like beta of WebContainer online.
swyx: But in partnership with Google, right? Like Google actually had to help you get over the finish line with stuff.
Eric: A hundred percent, because well, you know, over the years of when we were doing the R&D on the thing, kind of the biggest challenge, the two ways that you can kind of test how powerful and capable a platform are, the two types of applications are one, video games, right, because they're just very compute intensive, a lot of calculations that have to happen, right?
Eric: The second one are IDEs, because you're talking about actually virtualizing the actual [00:23:00] runtime environment you are in to actually build apps on top of it, which requires sophisticated capabilities, a lot of access to data. You know, a good amount of compute power, right, to effectively, you know, build an app-in-app sort of thing.
Eric: So those, those are the stress tests. So if your platform is missing stuff, those are the things where you find out. Those are, those are the people building games and IDEs. They're the ones filing bugs on operating system level stuff. And for us, browser level stuff.
Eric [00:23:47]: Yeah, what ended up happening is we were just hammering, you know, the Chromium bug tracker, and they're like, who are these guys? Yeah.
And, and they were amazing because I mean, just making Chrome DevTools be able to debug, I mean, it's, it's not, it wasn't originally built right for debugging an operating system, right? They've been phenomenal working with us and just kind of really pushing the limits, but that it's a rising tide that's kind of lifted all boats because now there's a lot of different types of applications that you can debug with Chrome DevTools that are running a browser that runs more reliably because just the stress testing that, that we and, you know, games that are coming to the web are kind of pushing as well, but.
Itamar [00:24:23]: That's awesome. About the testing, I think like most, let's say coding assistants of different kinds will need this loop of testing. And even I would add code review to some, to some extent that you mentioned. How is testing different from code review? Code review could be, for example, PR review, like a code review that is done at the point of when you want to merge branches. But I would say that code review, for example, checks best practices, maintainability, and so on. It's not just like CI, but more than CI. And testing is like a more like checking functionality, et cetera. So it's different. We call, by the way, all of these together code integrity, but that's a different story. Just to go back to the, to the testing and specifically. Yeah. It's, it's, it's since the first slide. Yeah. We're consistent. So if we go back to the testing, I think like, it's not surprising that for us testing is important and for Bolt testing is important, but I want to shed some light on a different perspective of it. Like let's think about autonomous driving. Those startups that are doing autonomous driving for highway and autonomous driving for the city. And I think like we saw the autonomy on the highway reaching, I don't know, level four or so much faster than those in the city. Now, in both cases, you need testing and quote unquote testing, you know, verifying, validation that you're doing the right thing on the road and you're reading and et cetera. But it's probably like so different in the city that it could be like actually different technology. And I claim that we're seeing something similar here. So when you're building the next Wix, and if I was them, I was like looking at you and being a bit scared. That's what you're disrupting, what you just said. Then basically, I would say that, for example, the UX/UI is freaking important. And because you're, you're more aiming for the end user. In this case, maybe it's an end user that doesn't know how to develop. For developers, it's also important. But let alone those that do not know to develop, they need a slick UI/UX. And I think like that's one reason, for example, I think Cursor have like really good technology. I don't know the underlying, what's under the hood, but at least what they're saying. But I think also their UX/UI is great. It's a lot because they did their own IDE. While if you're aiming for the city AI, suddenly like there's a lot of testing and code review technology that it's not necessarily like that important. For example, let's talk about integration tests. Probably like a lot of what you're building involved at the moment is isolated applications. Maybe the vision or the end game is maybe like having one solution for everything. It could be that eventually the highway companies will go into the city and the other way around. But at the beginning, there is a difference.
And integration tests are a good example. I guess they're a bit less important. And when you think about enterprise software, they're really important. So to recap, like I think like the idea of looping and verifying your test and verifying your code in different ways, testing or code review, et cetera, seems to be important in the highway AI and the city AI, but in different ways and different like critical for the city, even more and more variety. Actually, I was looking to ask you like what kind of loops you guys are doing. For example, when I'm using Bolt and I'm enjoying it a lot, then I do see like sometimes you're trying to catch the errors and fix them. And also, I noticed that you're breaking down tasks into smaller ones and then et cetera, which is already a common notion for a year ago. But it seems like you're doing it really well. So if you're willing to share anything about it.
Eric [00:28:07]: Yeah, yeah. I realized I never actually hit the punchline of what I was saying before. I mentioned the point about us kind of writing an operating system from scratch because what ended up being important about that is that to your point, it's actually a very, like compared to like a, you know, if you're like running Cursor on anyone's machine, you kind of don't know what you're dealing with, with the OS you're running on. There could be an error happens. It could be like a million different things, right? There could be some config. There could be, it could be God knows what, right? The thing with WebContainer is because we wrote the entire thing from scratch, it's actually a unified image basically. And we can instrument it at any level that we think is going to be useful, which is exactly what we did when we started building Bolt is we instrumented stuff at like the process level, at the runtime level, you know, et cetera, et cetera, et cetera. Stuff that would just be not impossible to do on local, but to do that in a way that works across any operating system, whatever is, I mean, would just be insanely, you know, insanely difficult to do right and reliably. And that's what you saw when you've used Bolt is that when an error actually will occur, whether it's in the build process or the actual web application itself is failing or anything kind of in between, you can actually capture those errors. And today it's a very primitive way of how we've implemented it largely because the product just didn't exist 90 days ago. So we're like, we got some work ahead of us and we got to hire some more a little bit, but basically we present and we say, hey, this is, here's kind of the things that went wrong. There's a fix it button and then an ignore button, and then you can just hit fix it. And then we take all that telemetry through our agent, you run it through our agent and say, kind of, here's the state of the application. Here's kind of the errors that we got from Node.js or the browser or whatever, and like dah, dah, dah, dah. And it can take a crack at actually solving it. And it's actually pretty darn good at being able to do that. That's kind of been a, you know, closing the loop and having it be a reliable kind of base has seemed to be a pretty big upgrade over doing stuff locally, just because I think that's a pretty key ingredient of it. And yeah, I think breaking things down into smaller tasks, like that's, that's kind of a key part of our agent. I think like Claude did a really good job with artifacts.
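The fix-it flow Eric describes (capture errors at the process and runtime level, offer a fix-or-ignore choice, then feed the application state plus the captured errors back to the agent) reduces to a small loop. A hedged Python sketch of that shape, ours rather than Bolt's code, with `call_llm` and the telemetry source as hypothetical stand-ins:

```python
# Illustrative sketch of an error-telemetry "fix it" loop, not Bolt's code.
from dataclasses import dataclass, field

@dataclass
class ErrorEvent:
    source: str   # e.g. "node", "browser", "build"
    message: str

@dataclass
class AppState:
    files: dict[str, str]                      # path -> contents
    errors: list[ErrorEvent] = field(default_factory=list)

def call_llm(prompt: str) -> dict[str, str]:
    """Placeholder: ask the model for updated files given state plus errors."""
    raise NotImplementedError

def fix_it(state: AppState) -> AppState:
    """One repair attempt: serialize state and errors, apply returned edits."""
    error_report = "\n".join(f"[{e.source}] {e.message}" for e in state.errors)
    prompt = (
        "Here is the state of the application and the errors captured from "
        f"the runtime:\n{error_report}\n\nFiles:\n{state.files}\n"
        "Return the corrected files."
    )
    new_files = call_llm(prompt)
    return AppState(files={**state.files, **new_files}, errors=[])

def on_errors(state: AppState, user_choice: str) -> AppState:
    """UI hook: 'fix' runs the agent, 'ignore' just clears the error banner."""
    if user_choice == "fix" and state.errors:
        return fix_it(state)
    return AppState(files=state.files, errors=[])
```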
I think, you know, us and kind of everyone else has, has kind of taken their approach of like actually breaking out certain tasks in a certain order into, you know, kind of a concrete way. And, and so actually the core of Bolt, I know we actually made open source. So you can actually go and check out like the system prompts and et cetera, and you can run it locally and whatever have you. So anyone that's interested in this stuff, I'd highly recommend taking a look at it. There's not a lot of like stuff that's like open source in this realm. It's, that was one of the fun things that we've, we thought would be cool to do. And people, people seem to like it. I mean, there's a lot of forks and people adding different models and stuff. So it's been cool to see.
Swyx [00:30:41]: Yeah. I'm happy to add, I added real-time voice for my OpenAI Dev Day demo and it was really fun to hack with. So thank you for doing that. Yeah. Thank you. I'm going to steal your code.
Eric [00:30:52]: Because I want that.
Swyx [00:30:52]: It's funny because I built on top of the fork of Bolt.new that already has the multi-LLM thing. And so you just told me you're going to merge that in. So then you're going to merge two layers of forks down into this thing. So it'll be fun.
Eric [00:31:03]: Heck yeah.
Alessio [00:31:04]: Just to touch on like the environment, Itamar, you maybe go into the most complicated environments that even the people that work there don't know how to run. How much of an impact does that have on your performance? Like, you know, is most of the work you're doing actually figuring out the environment and like the libraries, because I'm sure they're using outdated versions of languages, they're using outdated libraries, they're using forks that have not been on the public internet before. How much of the work that you're doing is like there versus like at the LLM level?
Itamar [00:31:32]: One of the reasons I was asking about, you know, what are the steps to break things down, because it really matters. Like, what's the tech stack? How complicated the software is? It's hard to figure it out when you're dealing with the real world, any environment of enterprise as a city, when I'm like, while maybe sometimes like, I think you do enable like in Bolt, like to install stuff, but it's quite a like controlled environment. And that's a good thing to do, because then you narrow down and it's easier to make things work. So definitely, there are two dimensions, I think, actually spaces. One is the fact just like installing our software without yet like doing anything, making it work, just installing it, because we work with enterprise and Fortune 500, etc. Many of them want an on-prem solution.
Swyx [00:32:22]: So you have how many deployment options?
Itamar [00:32:24]: Basically, we have, we did a matrix, say 96 options, because, you know, there are different dimensions. Like, for example, one dimension, we connect to your code management system, to your Git. So are you having like GitHub, GitLab? Subversion? Is it like on cloud or deployed on-prem? Just an example. Which model do you agree to use, its APIs or ours? Like we have our own. Is it TestGPT? Yeah, when we started with TestGPT, it was a huge mistake name. It was cool back then, but I don't think it's a good idea to name a model after someone else's model. Anyway, that's my opinion.
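As an aside, the "96 options" figure is just dimensions compounding. A toy illustration of how such a deployment matrix multiplies out; the axes and their values below are our hypothetical stand-ins for the dimensions Itamar lists, not Qodo's actual matrix:

```python
# Back-of-the-envelope: a few independent deployment dimensions quickly
# multiply into ~96 distinct configurations. Axes here are illustrative.
from itertools import product

dimensions = {
    "git": ["GitHub", "GitLab", "Subversion"],   # code management system
    "hosting": ["cloud", "on-prem"],             # where the Git server lives
    "models": ["vendor APIs", "ours"],           # whose models you agree to use
    "network": ["VPC", "air-gapped"],            # network posture
    "cloud": ["AWS", "Azure"],                   # example providers
    "orchestration": ["Kubernetes", "none"],     # how it is deployed
}

options = list(product(*dimensions.values()))
print(len(options))  # 3 * 2 * 2 * 2 * 2 * 2 = 96 deployment combinations
```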
So we gotSwyx [00:33:02]: I'm interested in these learnings, like things that you change your mind on.Itamar [00:33:06]: Eventually, when you're building a company, you're building a brand and you want to create your own brand. By the way, when I thought about Bolt.new, I also thought about if it's not a problem, because when I think about Bolt, I do think about like a couple of companies that are already called this way.Swyx [00:33:19]: Curse companies. You could call it Codium just to...Itamar [00:33:24]: Okay, thank you. Touche. Touche.Eric [00:33:27]: Yeah, you got to imagine the board meeting before we launched Bolt, one of our investors, you can imagine they're like, are you sure? Because from the investment side, it's kind of a famous, very notorious Bolt. And they're like, are you sure you want to go with that name? Oh, yeah. Yeah, absolutely.Itamar [00:33:43]: At this point, we have actually four models. There is a model for autocomplete. There's a model for the chat. There is a model dedicated for more for code review. And there is a model that is for code embedding. Actually, you might notice that there isn't a good code embedding model out there. Can you name one? Like dedicated for code?Swyx [00:34:04]: There's code indexing, and then you can do sort of like the hide for code. And then you can embed the descriptions of the code.Itamar [00:34:12]: Yeah, but you do see a lot of type of models that are dedicated for embedding and for different spaces, different fields, etc. And I'm not aware. And I know that if you go to the bedrock, try to find like there's a few code embedding models, but none of them are specialized for code.Swyx [00:34:31]: Is there a benchmark that you would tell us to pay attention to?Itamar [00:34:34]: Yeah, so it's coming. Wait for that. Anyway, we have our models. And just to go back to the 96 option of deployment. So I'm closing the brackets for us. So one is like dimensional, like what Git deployment you have, like what models do you agree to use? Dotter could be like if it's air-gapped completely, or you want VPC, and then you have Azure, GCP, and AWS, which is different. Do you use Kubernetes or do not? Because we want to exploit that. There are companies that do not do that, etc. I guess you know what I mean. So that's one thing. And considering that we are dealing with one of all four enterprises, we needed to deal with that. So you asked me about how complicated it is to solve that complex code. I said, it's just a deployment part. And then now to the software, we see a lot of different challenges. For example, some companies, they did actually a good job to build a lot of microservices. Let's not get to if it's good or not, but let's first assume that it is a good thing. A lot of microservices, each one of them has their own repo. And now you have tens of thousands of repos. And you as a developer want to develop something. And I remember me coming to a corporate for the first time. I don't know where to look at, like where to find things. So just doing a good indexing for that is like a challenge. And moreover, the regular indexing, the one that you can find, we wrote a few blogs on that. By the way, we also have some open source, different than yours, but actually three and growing. Then it doesn't work. You need to let the tech leads and the companies influence your indexing. For example, Mark with different repos with different colors. This is a high quality repo. This is a lower quality repo. This is a repo that we want to deprecate. 
This is a repo we want to grow, etc. And let that be part of your indexing. And only then things actually work for enterprise and they don't get to a fatigue of, oh, this is awesome. Oh, but I'm starting, it's annoying me. I think Copilot is an amazing tool, but I'm quoting others, meaning GitHub Copilot, that they see not so good retention of GitHub Copilot and enterprise. Ooh, spicy. Yeah. I saw snapshots of people and we have customers that are Copilot users as well. And also I saw research, some of them is public by the way, between 38 to 50% retention for users using Copilot and enterprise. So it's not so good. By the way, I don't think it's that bad, but it's not so good. So I think that's a reason because, yeah, it helps you auto-complete, but then, and especially if you're working on your repo alone, but if it's need that context of remote repos that you're code-based, that's hard. So to make things work, there's a lot of work on that, like giving the controllability for the tech leads, for the developer platform or developer experience department in the organization to influence how things are working. A short example, because if you have like really old legacy code, probably some of it is not so good anymore. If you just fine tune on these code base, then there is a bias to repeat those mistakes or old practices, etc. So you need, for example, as I mentioned, to influence that. For example, in Coda, you can have a markdown of best practices by the tech leads and Coda will include that and relate to that and will not offer suggestions that are not according to the best practices, just as an example. So that's just a short list of things that you need to do in order to deal with, like you mentioned, the 100.1 to 100.2 version of software. I just want to say what you're doing is extremelyEric [00:38:32]: impressive because it's very difficult. I mean, the business of Stackplus, kind of before bulk came online, we sold a version of our IDE that went on-prem. So I understand what you're saying about the difficulty of getting stuff just working on-prem. Holy heck. I mean, that is extremely hard. I guess the question I have for you is, I mean, we were just doing that with kind of Kubernetes-based stuff, but the spread of Fortune 500 companies that you're working with, how are they doing the inference for this? Are you kind of plugging into Azure's OpenAI stuff and AWS's Bedrock, you know, Cloud stuff? Or are they just like running stuff on GPUs? Like, what is that? How are these folks approaching that? Because, man, what we saw on the enterprise side, I mean, I got to imagine that that's a huge challenge. Everything you said and more, like,Itamar [00:39:15]: for example, like someone could be, and I don't think any of these is bad. Like, they made their decision. Like, for example, some people, they're, I want only AWS and VPC on AWS, no matter what. And then they, some of them, like there is a subset, I will say, I'm willing to take models only for from Bedrock and not ours. And we have a problem because there is no good code embedding model on Bedrock. And that's part of what we're doing now with AWS to solve that. We solve it in a different way. But if you are willing to run on AWS VPC, but run your run models on GPUs or inferentia, like the new version of the more coming out, then our models can run on that. But everything you said is right. Like, we see like on-prem deployment where they have their own GPUs. We see Azure where you're using OpenAI Azure. 
We see cases where you're running on GCP and they want OpenAI. Like this cross-cloud case, although there is Gemini, or even Sonnet, which I think is available on GCP, just as an example. So all the options, that's part of the challenge. I admit that we thought about it, but it was even more complicated. And it took us a few months to actually, that matrix that I mentioned, to start clicking each one of the blocks there. A few months is impressive. I mean,Eric [00:40:35]: honestly, just, that's okay. Every one of these enterprises is, their networking is different. Just everything's different. Every single one is different. I see you understand. Yeah. So that just cannot be overstated. That it is, that's extremely impressive. Hats off.Itamar [00:40:50]: It could be, by the way, like, for example, oh, we're only AWS, but our GitHub Enterprise is on-prem. Oh, we forgot. So we need like a PrivateLink or whatever; it's like that every time. And you do need to think about it if you want to work with an enterprise. And it's important. Like I understand their, I respect their point of view.Swyx [00:41:10]: And this primarily impacts your architecture, your tech choices. Like you have to, you can't choose some vendors because...Itamar [00:41:15]: Yeah, definitely. To be frank, it makes it hard for us as a startup, because it means that we want, we want everyone to enjoy all the variety of models. By the way, it was hard for us with our technology. I want to open a bracket, like a window. I guess you're familiar with our AlphaCodium, which is open source.Eric [00:41:33]: We got to go over that. Yeah. So I'll do that quickly.Itamar [00:41:36]: Yeah. Put a pin in that. Yeah. Actually, we didn't have it in the last episode. So, so, okay.Swyx [00:41:41]: Okay. We'll come back to that later, but let's talk about...Itamar [00:41:43]: Yeah. So, so just like shortly, and then we can double click on AlphaCodium. But AlphaCodium is an open source tool. You can go and try it, and it lets you compete on Codeforces. This is a website and a competition, and actually reach a master level, like 95%, with a click of a button. You don't need to do anything. And part of what we did there is taking a problem and breaking it into different, smaller blocks. And then the models are doing a much better job. Like we all know it by now, that taking small tasks and solving them, by the way, even o1, which is supposed to be able to do system two thinking, like Greg from OpenAI hinted, is doing better on these kinds of problems. But still, it's very useful to break it down for o1, despite o1 being able to think by itself. And that's what we presented. Like just a month ago, OpenAI released that now they are doing 93rd percentile with o1 on IOI. The International Olympiad in Informatics. Sorry, I forgot. Exactly. I told you I forgot. And we took their o1-preview with AlphaCodium and did better. Like it just shows, like, and there is a big difference between the preview and the IOI one. It shows that these models are still not system two thinkers, and there is a big difference. So maybe they're not complete system two. Yeah, they need some guidance. I call them system 1.5. We can, we can have it. I thought about it. Like, you know, I care about this philosophy stuff. And I think like we didn't see it even close to system two thinking. I can elaborate later. But closing the brackets, like we take AlphaCodium as our principle of thinking: we take tasks and break them down into smaller tasks.
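That "break it into smaller blocks" idea is the heart of what gets called flow engineering below. A minimal sketch of the shape, with `call_model` standing in for any chat-completion client; the stages are illustrative, not AlphaCodium's actual pipeline:

```python
# A minimal sketch of "flow engineering": decompose one big coding task into
# narrow stages, each with its own small prompt. `call_model` is a placeholder
# for any chat-completion client.
from typing import Callable


def solve(problem: str, call_model: Callable[[str], str]) -> str:
    # Stage 1: restate the problem so the model commits to one interpretation.
    spec = call_model(f"Restate this problem precisely:\n{problem}")
    # Stage 2: enumerate edge cases before writing any code.
    edges = call_model(f"List tricky edge cases for:\n{spec}")
    # Stage 3: generate an initial solution against the spec.
    code = call_model(f"Write a Python solution.\nSpec: {spec}\nEdge cases: {edges}")
    # Stage 4: a narrow self-review step, which small tasks make reliable.
    return call_model(f"Review this code against the edge cases and fix any bugs:\n{code}")
```

Because each stage is narrow, the same flow tends to transfer across models with far less per-model prompt tuning, and each small task can be routed to whichever model handles it best, which is exactly the point Itamar makes next.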
And then we want to exploit the best model to solve them. So I want to enable anyone to enjoy o1 and Sonnet and Gemini 1.5, etc. But at the same time, I need to develop my own models as well, because some of the Fortune 500 want to have it all air-gapped or whatever. So that's a challenge. Now you need to support so many models. And to some extent, I would say that the flow engineering, the breaking down into different blocks, is a necessity for us. Why? Because when you take a big block, a big problem, you need a very different prompt for each one of the models to actually work. But when you take a big problem and break it into small tasks, we can talk about how we do that, then the prompt matters less. What I want to say is, all this, like as a startup trying to do different deployments, getting all the juice that you can get from models, etc., is a big problem. And one needs to think about it. And one of our mitigations is that process of taking tasks and breaking them down. That's why I'm really interested to know how you guys are doing it. And part of what we do is also open source. So you can see.Swyx [00:44:39]: There's a lot in there. But yeah, flow over prompt. I do believe that that does make sense. I feel like there's a lot that both of you can sort of exchange notes on breaking down problems. And I just want you guys to just go for it. This is fun to watch.Eric [00:44:55]: Yeah. I mean, what's super interesting is the context you're working in, because for us too with Bolt, we've started thinking, because our kind of existing business line was going behind the firewall, right? We were like, how do we do this? Adding the inference aspect on, we're like, okay, how does... Because I mean, there's not a lot of prior art, right? I mean, this is all new. This is all new. So I definitely am going to have a lot of questions for you.Itamar [00:45:17]: I'm here. We're very open, by the way. We have a paper, a blog, or like whatever.Swyx [00:45:22]: The AlphaCodium GitHub, and we'll put all this in the show notes.Itamar [00:45:25]: Yeah. And even the new results of o1, we published them.Eric [00:45:29]: I love that. And I also just, I think spiritually, I like your approach of being transparent. Because I think there's a lot of hype-ium around AI stuff. And a lot of it is, it's just like, you have these companies that just kind of keep their stuff closed source and then max-hype it, but then it's kind of nothing. And I think it kind of gives a bad rep to the incredible stuff that's actually happening here. And so I think it's stuff like what you're doing where, I mean, true merit, and you're cracking open actual code for others to learn from and use. That strikes me as the right approach. And it's great to hear that you're making such incredible progress.Itamar [00:46:02]: I have something to share about the open source. Most of our tools are, we have an open source version and then a premium pro version. But it's not an easy decision to do that. I actually wanted to ask you about your strategy, but I think in your case, there is, in my opinion, relatively a good strategy, where a lot of parts are open source, but then you have the deployment and the environment, which is not, if I get it correctly. And then there's a clear, almost Hugging Face model: yeah, you can do that, but why should you try to deploy it yourself? Deploy it with us. But in our case, and I'm not sure you're not going to hit also some competitors, and I guess you are.
I wanted to ask you, for example, on some of them. In our case, one day we looked at one of our competitors that is doing code review. We're a platform. We have the code review, the testing, et cetera, spread from the IDE to Git. And for each agent, we have a few startups or big incumbents that are doing only that. So we noticed one of our competitors having not only a very similar UI to our open source, but actually even our typo. And you sit there and you're kind of like, yeah, we're not that good. We don't use enough Grammarly or whatever. And we had a couple of these and we saw it there. And then it's a challenge. And I want to ask you, Bolt is doing so well, and then you open sourced it. So I think I know what my answer was. I gave it before, but still interestingEric [00:47:29]: to hear what you think. GeoHot said back, I don't know what he was up to at this exact moment, but I think on Comma AI, all that stuff's open source. And someone had asked him, why is this open source? And he's like, if you're not actually confident that you can go and crush it and build the best thing, then yeah, you should probably keep your stuff closed source. He said something akin to that. I'm probably kind of butchering it, but I thought it was kind of a really good point. And that's not to say that you should just open source everything, because for obvious reasons, there's kind of strategic things you have to keep in mind. But I actually think a pretty liberal approach, as liberal as you kind of can be, can really make a lot of sense. Because it is so validating that one of your competitors is taking your stuff and they're like, yeah, let's just kind of tweak the styles. I mean, clearly, right? I think it's kind of healthy because, I'm sure back at HQ that day when you saw that, you're like, oh, all right, well, we have to grind even harder to make sure we stay ahead. And so I think it's actually a very useful, motivating thing for the teams. Because you might feel this period of comfort. I think a lot of companies will have this period of comfort where they're not feeling the competition and one day they get disrupted. So kind of putting stuff out there and letting people push it forces you to face reality sooner, right? And actually feel that incrementally so you can kind of adjust course. And for us, the open source version of Bolt has had a lot of features people have been begging us for, like persisting chat messages and checkpoints and stuff. Within the first week, that stuff was landed in the open source versions. And they're like, why can't you ship this? It's in the open, so people have forked it. And we're like, we're trying to keep our servers and GPUs online. But it's been great because the folks in the community did a great job, kept us on our toes. And we've got to know most of these folks too at this point that have been building these things. And so it actually was very instructive. Like, okay, well, if we're going to go kind of land this, there's some UX patterns we can kind of look at, and the code is open source for this stuff. What's great about these, what's not. So anyways, net-net, I think it's awesome. I think from a competitive point of view for us, I think in particular, what's interesting is the core technology of WebContainer. And I think that right now, there's really nothing that's kind of on par with that. And we also, we have a business of, because WebContainer runs in your browser, but to make it work, you have to install stuff from NPM.
You have to make CORS-bypass requests, like connecting to databases, which all require server-side proxying or acceleration. And so we actually sell WebContainer as a service. One of the core reasons we open-sourced kind of the core components of Bolt when we launched was that we think there's going to be a lot more of these AI, in-your-browser AI codegen experiences, kind of like what Anthropic did with Artifacts and Claude. By the way, Artifacts uses WebContainers. Not yet. No, yeah. Should I strike that? I think that they've got their own thing at the moment, but there's been a lot of interest in WebContainers from folks doing things in that sort of realm and in the AI labs and startups and everything in between. So I think there'll be, I imagine, over the coming months, there'll be lots of things being announced, with folks kind of adopting it. But yeah, I think effectively...Swyx [00:50:35]: Okay, I'll say this. If you're a large model lab and you want to build sandbox environments inside of your chat app, you should call Eric.Itamar [00:50:43]: But wait, wait, wait, wait, wait, wait. I have a question about that. I think OpenAI, they felt that people are not using their model as they would want to. So they built ChatGPT. But I would say that ChatGPT now defines OpenAI. I know they're doing a lot of business from their APIs, but still, is this how you think? Isn't Bolt.new your business now? Why don't you focus on that instead of the...Swyx [00:51:16]: What's your advice as a founder?Eric [00:51:18]: You're right. And so going into it, we, candidly, we were like, Bolt.new, this thing is super cool. We think people are stoked. We think people will be stoked. But we were like, maybe that's allowed. Best case scenario, after month one, we'd be mind blown if we added a couple hundred K of ARR or something. And we were like, but we think there's probably going to be an immediate huge business. Because there was some early pull from folks wanting to put WebContainer into their product offerings, kind of similar to what Bolt is doing or whatever. We were actually prepared for the inverse outcome here. But I mean, well, I guess we've seen pull on both. But I mean, what's happened with Bolt, and you're right, it's actually the same strategy as like OpenAI or Anthropic, where what ChatGPT is to OpenAI's APIs, Bolt is to WebContainer. And so we've kind of taken that same approach. And we're seeing, I guess, some of the similar results, except right now, the revenue side is extremely lopsided to Bolt.Itamar [00:52:16]: I think if you ask me what's my advice, I think you have three options. One is to focus on Bolt. The other is to focus on WebContainer. The third is to raise one billion dollars and do them both. I'm serious. I think otherwise, you need to choose. And if you raise enough money, and I think it's big bucks, because you're going to be chased by competitors. And I think it will be challenging to do both. And maybe you can. I don't know. We do see these numbers right now, raising above $100 million, even without havingEric [00:52:49]: a product. You can see these. It's excellent advice. And I think what's been amazing, but also kind of challenging, is we're trying to forecast, okay, well, where are these things going? I mean, in the initial weeks, I think us and all the investors in the company that we're sharing this with, it was like, this is cool. Okay, we added 500k. Wow, that's crazy. Wow, we're at a million now.
Most things, you have this kind of TechCrunch launch initiation and then the trough of sorrow. And if there's going to be a downtrend, it's just not coming yet. Now that we're kind of looking ahead, we're six weeks in. So now we're getting enough confidence in our convictions to go, okay, this se
We have a full slate of upcoming events: AI Engineer London, AWS Re:Invent in Las Vegas, and now Latent Space LIVE! at NeurIPS in Vancouver and online. Sign up to join and speak!We are still taking questions for our next big recap episode! Submit questions and messages on Speakpipe here for a chance to appear on the show!We try to stay close to the inference providers as part of our coverage, as our podcasts with Together AI and Replicate will attest. However, one of the most notable pull quotes from our very well received Braintrust episode was our guest's opinion that open source model adoption has NOT gone very well and is actually declining in relative market share terms (it is of course increasing in absolute terms).Today's guest, Lin Qiao, would wholly disagree. Her team of PyTorch/GPU experts is wholly dedicated to helping you serve and finetune the full stack of open source models from Meta and others, across all modalities (Text, Audio, Image, Embedding, Vision-understanding), helping customers like Cursor and HubSpot scale up open source model inference both rapidly and affordably.Fireworks has emerged after its successive funding rounds with top tier VCs as one of the leaders of the Compound AI movement, a term first coined by the Databricks/Mosaic gang at Berkeley AI and adapted as "Composite AI" by Gartner.Replicating o1We are the first podcast to discuss Fireworks' f1, their proprietary replication of OpenAI's o1. This has become a surprisingly hot area of competition in the past week as both Nous Forge and DeepSeek R1 have launched competitive models.Full Video PodcastLike and subscribe!Timestamps* 00:00:00 Introductions* 00:02:08 Pre-history of Fireworks and PyTorch at Meta* 00:09:49 Product Strategy: From Framework to Model Library* 00:13:01 Compound AI Concept and Industry Dynamics* 00:20:07 Fireworks' Distributed Inference Engine* 00:22:58 OSS Model Support and Competitive Strategy* 00:29:46 Declarative System Approach in AI* 00:31:00 Can OSS replicate o1?* 00:36:51 Fireworks f1* 00:41:03 Collaboration with Cursor and Speculative Decoding* 00:46:44 Fireworks quantization (and drama around it)* 00:49:38 Pricing Strategy* 00:51:51 Underrated Features of Fireworks Platform* 00:55:17 HiringTranscriptAlessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol.ai.Swyx [00:00:11]: Hey, and today we're in a very special studio inside the Fireworks office with Lin Qiao, CEO of Fireworks. Welcome. Yeah.Lin [00:00:20]: Oh, you should welcome us.Swyx [00:00:21]: Yeah, welcome. Yeah, thanks for having us. It's unusual to be in the home of a startup, but it's also, I think our relationship is a bit unusual compared to all our normal guests. Definitely.Lin [00:00:34]: Yeah. I'm super excited to talk about very interesting topics in that space with both of you.Swyx [00:00:41]: You just celebrated your two-year anniversary yesterday.Lin [00:00:43]: Yeah, it's quite a crazy journey. We circled around and shared all the crazy stories across these two years, and it has been super fun. All the way from when we experienced the Silicon Valley Bank run, to when we deleted some data that shouldn't be deleted, operationally. We went through massive scale, where we actually were busy getting capacity. Yeah, we learned to kind of work through it as a team, with a lot of brilliant people across different places joining the company.
It has really been a fun journey.Alessio [00:01:24]: When you started, did you think the technical stuff would be harder, or the bank run and then the people side? I think there are a lot of amazing researchers that want to do companies, and it's like the hardest thing is going to be building the product, and then you have all these different other things. So, what surprised you the most in your experience?Lin [00:01:42]: Yeah, to be honest with you, my focus has always been on the product side and then, after that, the product going to market. And I didn't realize the rest would be so complicated, operating a company and so on. But because I don't think about it, I just kind of manage it. So it's done. I think I just somehow don't think about it too much and solve whatever problems come our way, and it worked.Swyx [00:02:08]: So let's, I guess, let's start at the pre-history, the initial history of Fireworks. You ran the PyTorch team at Meta for a number of years, and we previously had Soumith Chintala on, and I think we were just all very interested in the history of GenAI. Maybe not that many people know how deeply involved FAIR and Meta were prior to the current GenAI revolution.Lin [00:02:35]: My background is deep in distributed systems, database management systems. And I joined Meta from the data side, and I saw this tremendous amount of data growth, which cost a lot of money, and we were analyzing what's going on. And it's clear that AI is driving all this data generation. So it's a very interesting time, because when I joined Meta, Meta was going through ramping down mobile-first, finishing the mobile-first transition and then starting AI-first. And there's a fundamental reason for that sequence, because mobile-first gave a full range of user engagement that had never existed before. And all this user engagement generated a lot of data, and this data powers AI. So then the whole entire industry is also going through, following this same transition. When I see, oh, okay, this AI is powering all this data generation, and look at where's our AI stack — there's no software, there's no hardware, there's no people, there's no team — I want to dive in there and help this movement. So when I started, it was a very interesting industry landscape. There were a lot of AI frameworks. It's a kind of proliferation of AI frameworks happening in the industry. But all the AI frameworks focused on production, and they used a very certain way of defining the graph of the neural network, and then used that to drive the model iteration and productionization. And PyTorch is completely different. Its creator basically assumed that he was the user of his own product. And he basically said, researchers face so much pain using existing AI frameworks, this is really hard to use, and I'm going to do something different for myself. And that's the origin story of PyTorch. PyTorch actually started as the framework for researchers. They didn't care about production at all. And as they grew in terms of adoption, so the interesting part of AI is that research is the top of the funnel for production. There are so many researchers across academia, across industry; they innovate and they put their results out there in open source, and that powers the downstream productionization. So it's brilliant for Meta to establish PyTorch as a strategy to drive massive adoption in open source, because Meta internally is a PyTorch shop. So it creates a flywheel effect. So that's kind of the strategy behind PyTorch.
But when I took on PyTorch, it was kind of at the cusp: Meta established PyTorch as the framework for both research and production. No one had done that before. And we had to kind of rethink how to architect PyTorch so we could really sustain production workloads; the stability, reliability, low latency, all these production concerns were never a concern before. Now they were a concern. And we actually had to adjust its design and make it work for both sides. And that took us five years, because Meta has so many AI use cases, all the way from ranking and recommendation powering the business top line, to ranking newsfeed, video ranking, to site integrity detecting bad content automatically using AI, to all kinds of effects, translation, image classification, object detection, all this. And also across AI running on the server side, on mobile phones, on AR/VR devices, the wide spectrum. So by that time, we actually basically managed to support AI ubiquitously, everywhere across Meta. But interestingly, through open source engagement, we worked with a lot of companies. It was clear to us that this industry is starting to take on the AI-first transition. And of course, Meta's hyperscale always goes ahead of the industry. And it feels like when we started this AI journey at Meta, there was no software, no hardware, no team. For many companies we engaged with through PyTorch, we felt the pain. That's the genesis of why we felt like, hey, if we create Fireworks and support the industry going through this transition, it will be a huge amount of impact. Of course, the problems that the industry is facing will not be the same as Meta's. Meta is so big, right? So it's kind of skewed towards extreme scale and extreme optimization, and the industry will be different. But we felt like we have the technical chops and we've seen a lot. We'll look to kind of drive that. So yeah, so that's how we started.Swyx [00:06:58]: When you and I chatted about the origins of Fireworks, it was originally envisioned more as a PyTorch platform, and then later became much more focused on generative AI. Is that fair to say? What was the customer discovery here?Lin [00:07:13]: Right. So I would say our initial blueprint was that we should build a PyTorch cloud, because PyTorch is a library and there was no SaaS platform to enable AI workloads.Swyx [00:07:26]: Even in 2022, it's interesting.Lin [00:07:28]: I would not say absolutely none, cloud providers had some of those, but it was not a first-class citizen, right? In 2022, there's still like TensorFlow massively in production. And this is all pre-GenAI, and PyTorch is kind of getting more and more adoption. But there's no PyTorch-first SaaS platform existing. At the same time, we are also a very pragmatic set of people. We really want to make sure, from the get-go, we get really, really close to customers. We understand their use case, we understand their pain points, we understand the value we deliver to them. So we wanted to take a different approach: instead of building a horizontal PyTorch cloud, we wanted to build a verticalized platform first. And then we talked with many customers. And interestingly, we started the company in September 2022, and in October, November, OpenAI announced ChatGPT. And then boom, when we talked with many customers, they were like, can you help us work on the GenAI aspect? So of course, there were some open source models. They were not as good at that time, but people were already putting a lot of attention there. Then we decided that if we're going to pick a vertical, we're going to pick GenAI.
The other reason is all GenAI models are PyTorch models. So that's another reason. We believe that because of the nature of GenAI, it's going to generate a lot of human-consumable content. It will drive a lot of consumer-, customer-, developer-facing application and product innovation. Guaranteed. We're just at the beginning of this. Our prediction is, for those kinds of applications, the inference is much more important than training, because inference scale is proportional to the user population, upper-bounded by the world population, and training scale is proportional to the number of researchers. Of course, each training round could be very expensive. Although PyTorch supports both inference and training, we decided to laser focus on inference. So yeah, so that's how we got started. And we launched our public platform in August last year. When we launched, it was a single product. It's a distributed inference engine with a simple, OpenAI-compatible API, with many models. We started with LLMs and then we added a lot of models. Fast forward to now, we are a full platform with multiple product lines. So we'd love to kind of dive deep into what we offer. But that's been a very fun journey in the past two years.Alessio [00:09:49]: What was the transition from, you started focused on PyTorch and people wanting to understand the framework, get it live, and now say maybe most people that use you don't even really know much about PyTorch at all. You know, they're just trying to consume a model. From a product perspective, like what were some of the decisions early on? Like right in October, November, were you just like, hey, most people just care about the model, not about the framework, we're going to make it super easy? Or was it more a gradual transition to the model librarySwyx [00:10:16]: you have today?Lin [00:10:17]: Yeah. So our product decisions are all based on who is our ICP. And one thing I want to acknowledge here is that the GenAI technology is disruptive. It's very different from AI before GenAI. So it's a clear leap forward. Because before GenAI, the companies that wanted to invest in AI had to train from scratch. There was no other way. There was no foundation model. It didn't exist. So that means, to start, they first hire a team who is capable of crunching data. There's a lot of data to crunch, right? Because training from scratch, you have to prepare a lot of data. And then they need to have GPUs to train, and then you start to manage GPUs. So then it becomes a very complex project. It takes a long time and not many companies can afford it, actually. And GenAI is a very different game right now, because there are foundation models. So you don't have to train anymore. That makes AI much more accessible as a technology. As an app developer or product manager, even, not a developer, they can interact with GenAI models directly. So our goal is to make AI accessible to all app developers and product engineers. That's our goal. So then getting them into building models doesn't make any sense anymore with this new technology. And then building easy, accessible APIs is the most important thing. Early on, when we got started, we decided we're going to be OpenAI-compatible. It's just very easy for developers to adopt this new technology, and we will manage the underlying complexity of serving all these models.Swyx [00:11:56]: Yeah, OpenAI has become the standard. Even as we're recording today, Gemini announced that they have OpenAI-compatible APIs. Interesting.
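What "OpenAI-compatible" buys developers in practice is a one-line switch: the same client library, pointed at a different base URL. A minimal sketch using the `openai` Python client against Fireworks' public endpoint; the endpoint and model identifier follow Fireworks' published docs at the time of writing, but treat the exact names as illustrative:

```python
from openai import OpenAI

# Same client, different base URL: this is the whole "OpenAI-compatible" story.
client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # per Fireworks' docs
    api_key="YOUR_FIREWORKS_API_KEY",
)

resp = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative
    messages=[{"role": "user", "content": "Summarize why PyTorch won."}],
)
print(resp.choices[0].message.content)
```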
So we just need to be a drop-in, and then we have everyone falling in line.Lin [00:12:09]: That's interesting, because we are working very closely with Meta as one of the partners. Meta, of course, is very generous to donate many very, very strong open source models, expecting more to come. But also they have announced Llama Stack, which basically standardizes the upper-level stack built on top of Llama models. So they don't just want to give out models and have you figure out what the upper stack is. They instead want to build a community around the stack and build a new standard. I think there are interesting dynamics in play in the industry right now, whether it's more standardized across OpenAI, because they are kind of creating the top of the funnel, or standardized across Llama, because this is the most used open source model. So I think it's a lot of fun working at this time.Swyx [00:13:01]: I've been a little bit more doubtful on Llama Stack; I think you've been more positive. Basically it's just like the Meta version of whatever Hugging Face offers, you know, or TensorRT, or vLLM, or whatever the open source opportunity is. But to me, it's not clear that just because Meta open sources Llama, the rest of Llama Stack will be adopted. And it's not clear why I should adopt it. So I don't know if you agree.Lin [00:13:27]: It's very early right now. That's why I kind of work very closely with them and give them feedback. The feedback to the Meta team is very important. So then they can use that to continue to improve the model and also improve the higher-level stack. I think the success of Llama Stack heavily depends on the community adoption. And there's no way around it. And I know the Meta team would like to work with a broader community. But it's very early.Swyx [00:13:52]: One thing that, after your Series B, so you raised from Benchmark, and then Sequoia. I remember being close to you for at least your Series B announcements; you started betting heavily on this term of Compound AI. It's not a term that we've covered very much in the podcast, but I think it's definitely getting a lot of adoption from Databricks and Berkeley people and all that. What's your take on Compound AI? Why is it resonating with people?Lin [00:14:16]: Right. So let me give a little bit of context on why we even considered that space.Swyx [00:14:22]: Because like pre-Series B, there was no message, and now it's like on your landing page.Lin [00:14:27]: So it's a kind of very organic evolution from when we first launched our public platform. We were a single product. We are a distributed inference engine, where we do a lot of innovation: customized CUDA kernels, raw kernels, running on different kinds of hardware, and building distributed, disaggregated inference execution, building all kinds of caching. So that is one. So that's kind of one product line: the fast, most cost-efficient inference platform. Because we wrote PyTorch code, we know, we basically have a special PyTorch build for that, together with custom kernels we wrote. And then as we worked with many more customers, we realized, oh, the distributed inference engine, our design is one size fits all. We wanted to have this inference endpoint, then everyone comes in, and no matter what kind of form and shape or workload they have, it will just work for them. So that's great. But the reality is, we realized all customers have different kinds of use cases. The use cases come in all different forms and shapes.
And the end result is the data distribution in their inference workload doesn't align with the data distribution in the training data for the model. It's a given, actually. If you think about it, researchers have to guesstimate what is important and what's not important in preparing data for training. So because of that misalignment, we leave a lot of quality, latency, and cost improvement on the table. So then we said, OK, we want to heavily invest in a customization engine. And we actually announced it, called Fire Optimizer. So Fire Optimizer basically helps users navigate a three-dimensional optimization space across quality, latency, and cost. So it's a three-dimensional curve. And even for one company, for different use cases, they want to land in different spots. So we automate that process for our customers. It's very simple. You have your inference workload. You inject it into the optimizer along with the objective function. And then we spit out an inference deployment config and the model setup. So it's your customized setup. So that is a completely different product. That product thinking is one size fits one. And now on top of that, we provide a huge variety of state-of-the-art models, hundreds of them, starting from text, the large state-of-the-art language models. That's where we started. And as we talked with many customers, we realized, oh, audio and text are very, very close. Many of our customers started to build assistants, all kinds of assistants using text. And they immediately want to add audio, audio in, audio out. So we support transcription, translation, speech synthesis, text-audio alignment, all different kinds of audio features. It's a big announcement. You should have heard it by the time this is out. And the other areas of vision and text are very close with each other. Because a lot of information doesn't live in plain text. A lot of information lives in multimedia formats: images, PDFs, screenshots, and many other different formats. So oftentimes to solve a problem, we need to put the vision model first to extract information, and then use a language model to process it and then send out results. So vision is important. We also support vision models, various different kinds of vision models specialized in processing different kinds of sources and extraction. And we're also going to have another announcement of a new API endpoint we'll support, for people to upload various different kinds of multimedia content and then get very accurate information extracted out and feed that into LLMs. And of course, we support embedding, because embedding is very important for semantic search, for RAG, and all this. And in addition to that, we also support image generation models, text-to-image, image-to-image, and we're adding text-to-video as well to our portfolio. So it's a very comprehensive model catalog that's built on top of Fire Optimizer and the distributed inference engine. But then we talked with more customers, they solve business use cases, and then we realized one model is not sufficient to solve their problem. And it's very clear, because one is, the model hallucinates. Many customers, when they onboard this GenAI journey, they thought this is magical. GenAI is going to solve all my problems magically. But then they realize, oh, this model hallucinates. It hallucinates because it's not deterministic, it's probabilistic. So it's designed to always give you an answer, but based on probabilities, so it hallucinates.
And that's actually sometimes a feature, for creative writing, for example. Sometimes it's a bug, because, hey, you don't want to give misinformation. And different models also have different specialties. To solve a problem, you want to ask different specialized models to kind of decompose your task into multiple small tasks, narrow tasks, and then have an expert model solve that task really well. And of course, the model doesn't have all the information. It has limited knowledge, because the training data is finite, not infinite. So the model oftentimes doesn't have real-time information. It doesn't know any proprietary information within the enterprise. It's clear that in order to really build a compelling application on top of GenAI, we need a compound AI system. A compound AI system basically is going to have multiple models across modalities, along with APIs, whether it's public APIs, internal proprietary APIs, storage systems, database systems, knowledge, to work together to deliver the best answer.Swyx [00:20:07]: Are you going to offer a vector database?Lin [00:20:09]: We actually heavily partner with several big vector database providers. Which is your favorite? They are all great in different ways. But it's public information, like MongoDB is our investor. And we have been working closely with them for a while.Alessio [00:20:26]: When you say distributed inference engine, what do you mean exactly? Because when I hear your explanation, it's almost like you're centralizing a lot of the decisions through the Fireworks platform on the quality and whatnot. What do you mean distributed? Is it like you have GPUs in a lot of different clusters, so you're sharding the inference across the same model?Lin [00:20:45]: So first of all, we run across multiple GPUs. But the way we distribute across multiple GPUs is unique. We don't distribute the whole model monolithically across multiple GPUs. We chop them into pieces and scale them completely differently based on what's the bottleneck. We are also distributed across regions. We have been running in North America, EMEA, and Asia. We have regional affinity to applications, because latency is extremely important. We are also doing global load balancing, because a lot of applications there, they quickly scale to a global population. And then at that scale, different continents wake up at different times. And you want to kind of load-balance across. And we also manage various different kinds of hardware SKUs from different hardware vendors. And different hardware designs are best for different types of workloads, whether it's long context, short context, long generation. So all these different types of workloads are best fitted for different kinds of hardware SKUs. And then we can even distribute across different hardware for a workload. So the distribution actually is all around, in the full stack.Swyx [00:22:02]: At some point, we'll show on the YouTube the image that Ray, I think, has been working on, with all the different modalities that you offer. To me, it's basically you offer the open source version of everything that OpenAI typically offers. I don't think there is... Actually, if you do text-to-video, you will be a superset of what OpenAI offers, because they don't have Sora. Is that Mochi, by the way? Mochi. Mochi, right?Lin [00:22:27]: Mochi. And there are a few others. I will say, the interesting thing is, I think we're betting that the open source community is going to proliferate. This is literally what we're seeing.
And there are amazing video generation companies. There are amazing audio companies. Like across the board, the innovation is off the charts, and we are building on top of that. I think that's the advantage we have compared with a closed source company.Swyx [00:22:58]: I think I want to restate the value proposition of Fireworks for people who are comparing you versus a raw GPU provider like a RunPod or Lambda or anything like those, which is: you create the developer experience layer and you also make it easily scalable, or serverless, or as an endpoint. And then, I think for some models, you have custom kernels, but not all models.Lin [00:23:25]: Almost for all models. For all large language models, all our models, and the VLMs. Almost for all models we serve.Swyx [00:23:35]: And so that is called Fire Attention. I don't remember the speed numbers, but apparently much better than vLLM, especially on a concurrency basis.Lin [00:23:44]: So Fire Attention is specific mostly to language models, but for other modalities, we'll also have customized kernels.Swyx [00:23:51]: And I think the typical challenge for people is understanding that that has value, and then there are other people who are also offering open-source models. Your moat is your ability to offer a good experience for all these customers. But if your existence is entirely reliant on people releasing nice open-source models, other people can also do the same thing.Lin [00:24:14]: So I would say we build on top of the open-source model foundation. So that's the kind of foundation we build on top of. But we look at the value prop from the lens of application developers and product engineers. So they want to create new UX. So what's happening in the industry right now is people are thinking about a completely new way of designing products. And I'm talking to so many founders, it's just mind-blowing. They help me understand that the existing way of doing PowerPoint, the existing way of coding, the existing way of managing customer service, is actually putting a box in our head. For example, PowerPoint. So PowerPoint generation is, we always need to think about how to fit my storytelling into this format of one slide after another. And I'm going to juggle through design together with what story to tell. But the most important thing is, what's our storyline, right? And why don't we create a space that is not limited to any format? And those kinds of new product UX designs, combined with automated content generation through GenAI, is the new thing that many founders are doing. What are the challenges they're facing? Let's go from there. One is, again, because a lot of products built on top of GenAI are consumer-, prosumer-, and developer-facing, they require an interactive experience. It's just the kind of product experience we all got used to. And our desire is to actually get faster and faster interaction. Otherwise, nobody wants to spend time, right? And then that requires low latency. And the other thing is, the nature of consumer-, prosumer-, developer-facing products is that your audience is very big. You want to scale up to product market fit quickly. But if you lose money at a small scale, you're going to go bankrupt quickly. So it's actually a big contrast: I actually have product market fit, but when I scale, I scale out of my business. So that's kind of a very funny way to think about it. So then having low latency and low cost is essential for those new applications and products to survive and really become a generational company.
So that's the design point for our distributed inference engine and the Fire Optimizer. Fire Optimizer, you can think about it as a feedback loop. The more you feed your inference workload to our inference engine, the more we help you improve quality, lower latency further, lower your cost. It basically becomes better. And we automate that, because we don't want you as an app developer or product engineer to think about how to figure out all these low-level details. It's impossible, because you're not trained to do that at all. You should kind of keep your focus on the product innovation. And then the compound AI, we actually feel a lot of pain as the app developers and engineers: there are so many models. Every week, there's at least a new model coming out.Swyx [00:27:09]: Tencent had a giant model this week. Yeah, yeah.Lin [00:27:13]: I saw that. I saw that.Swyx [00:27:15]: It's like 500 billion parameters.Lin [00:27:18]: So they're like, should I keep chasing this or should I forget about it? And which model should I pick to solve what kind of sub-problem? How do I even decompose my problem into those smaller problems and fit the model into it? I have no idea. And then there are two ways to think about this design. I think I talked about that in the past. One is imperative, as in you figure out how to do it. You give developers tools to dictate how to do it. Or you build a declarative system, where a developer tells it what they want to do, not how. So these are two completely different designs. So the analogy I want to draw is, in the data world, the database management system is a declarative system, because people use databases through SQL. SQL is a way you say what you want to extract out of a database, what kind of result you want. But you don't figure out how many nodes you're going to run on top of, how you lay out your disk, which index you use, which projection. You don't need to worry about any of those. And the database management system will figure it out, generate the best plan, and execute on that. So the database is declarative. And it makes it super easy. You just learn SQL, which is, learn the semantic meaning of SQL, and you can use it. The imperative side is, there are a lot of ETL pipelines. And people design these DAG systems with triggers, with actions, and you dictate exactly what to do. And if it fails, then how to recover. So that's an imperative system. We have seen a range of systems in the ecosystem go different ways. I think there's value in both. There's value in both. I don't think one is going to subsume the other. But we are leaning more into the philosophy of the declarative system, because from the lens of the app developer and product engineer, that would be easiest for them to integrate.Swyx [00:29:07]: I understand that's also why PyTorch won as well, right? This is one of the reasons. Ease of use.Lin [00:29:14]: Focus on ease of use, and then let the system take on the hard challenges and complexities. So we follow, we extend that thinking into current system design. So another announcement is, we will also announce our next declarative system, which is going to appear as a model that has extremely high quality. And this model is inspired by o1's announcement from OpenAI. You should see that by the time we announce this, or soon.Alessio [00:29:46]: Trained by you.Lin [00:29:47]: Yes.Alessio [00:29:48]: Is this the first model that you trained? It's not the first.Lin [00:29:52]: We actually have trained a model called FireFunction. It's a function calling model.
It's our first step into compound AI systems, because a function calling model can dispatch a request into multiple APIs. We have a pre-baked set of APIs the model has learned. You can also add additional APIs through the configuration to let the model dispatch accordingly. So we have a very high quality function calling model that's already released. We have actually three versions. The latest version is very high quality. But now we take a further step: you don't even need to use a function calling model. You use our new model we're going to release. It will solve a lot of problems, approaching very high OpenAI quality. So I'm very excited about that.Swyx [00:30:41]: Do you have any benchmarks yet?Lin [00:30:43]: We have a benchmark. We're going to release it hopefully next week. We just put our model on LMSYS and people are guessing. Is this the next Gemini model or a MADIS model? People are guessing. That's very interesting. We're watching the Reddit discussion right now.Swyx [00:31:00]: I have to ask more questions about this. When OpenAI released o1, a lot of people asked about whether or not it's a single model or whether it's a chain of models. Noam and basically everyone on the Strawberry team was very insistent that what they did for reinforcement learning, chain of thought, cannot be replicated by a whole bunch of open source model calls. Do you think that that is wrong? Have you done the same amount of work on RL as they have, or was it a different direction?Lin [00:31:29]: I think they take a very specific approach, where the caliber of the team is very high. So I do think they are the domain experts in doing the things they are doing. I don't think there's only one way to achieve the same goal. We're in the same direction in the sense that the quality scaling law is shifting from training to inference. For that, I fully agree with them. But we're taking a completely different approach to the problem. All of that is because, of course, we didn't train the model from scratch. All of that is because we built on the shoulders of giants. The current models available, that we have access to, are getting better and better. The future trend is that the gap between the open source models and the closed source models is just going to shrink to the point where there's not much difference. And then we're on the same level playing field. That's why I think our early investment in inference, and all the work we do around balancing across quality, latency, and cost, pays off, because we have accumulated a lot of experience, and that empowers us to release this new model that is approaching o1 quality.Alessio [00:32:39]: I guess the question is, what do you think the gap to catch up will be? Because I think everybody agrees that open source models eventually will catch up. And I think with Llama 3, then with Llama 3.2, 3.1 405B, we closed the gap. And then o1 just reopened the gap so much, and it's unclear. Obviously, you're saying your model will have...Swyx [00:32:57]: We're closing that gap.Alessio [00:32:58]: But you think in the future, it's going to be months?Lin [00:33:02]: So here's the thing that's happened. There's the public benchmark. It is what it is. But in reality, open source models in certain dimensions are already on par with, or beat, closed source models. So for example, in the coding space, open source models are really, really good. And in function calling, FireFunction is also really, really good.
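The generic shape of function calling as Lin describes it: the model is handed a schema of available APIs and "dispatches" by returning a structured call rather than prose. A minimal sketch using the OpenAI-style tools convention that many providers mirror; the weather tool is a made-up example, and the model identifier, while taken from Fireworks' public catalog, should be treated as illustrative:

```python
import json

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="YOUR_FIREWORKS_API_KEY",
)

# Schema of one available API; the model learns to dispatch to it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # made-up tool, not a FireFunction built-in
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="accounts/fireworks/models/firefunction-v2",  # illustrative name
    messages=[{"role": "user", "content": "What's the weather in Vancouver?"}],
    tools=tools,
)

# Instead of text, the model returns a structured call to execute.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```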
So it's all a matter of whether you build one model to solve all the problems and you want to be the best at solving all the problems, or, in the open source domain, it's going to specialize. All these different model builders specialize in certain narrow areas. And it's logical that they can be really, really good in that very narrow area. And that's our prediction: with specialization, there will be a lot of expert models that are really, really good, and even better than one-size-fits-all closed source models.Swyx [00:33:55]: I think this is the core debate that I am still not 100% either way on, in terms of compound AI versus normal AI. Because you're basically fighting the bitter lesson.Lin [00:34:09]: Look at human society, right? We specialize. And you feel really good about someone specializing in doing something really well, right? And that's how our society evolved from ancient times. We were all generalists. We did everything. Now we heavily specialize in different domains. So my prediction is, in the AI model space, it will happen also. Except for the bitter lesson.Swyx [00:34:30]: You get short-term gains by having specialists, domain specialists, and then someone just needs to train like a 10x bigger model on 10x more inference, 10x more data, 10x more model perhaps, whatever the current scaling law is. And then it supersedes all the individual models because of some generalized intelligence slash world knowledge. I think that is the core insight of the GPTs, the GPT-1, 2, 3 models. Right.Lin [00:34:56]: But the training scaling law is because you have an increasing amount of data to train from. And you can do a lot of compute. So I think on the data side, we're approaching the limit. And the only way to increase that is synthetically generated data. And then there's like, what is the secret sauce there, right? Because if you have a very good large model, you can generate very good synthetic data and then continue to improve quality. So that's why I think at OpenAI, they are shifting from the training scaling law intoSwyx [00:35:25]: the inference scaling law.Lin [00:35:25]: And it's the test time and all this. So I definitely believe that's the future direction. And that's where we are really good at: doing inference.Swyx [00:35:34]: A couple of questions on that. Are you planning to share your reasoning traces?Lin [00:35:39]: That's a very good question. We are still debating.Swyx [00:35:43]: Yeah.Lin [00:35:45]: We're still debating.Swyx [00:35:46]: I would say, for example, it's interesting that, for example, SWE-Bench, if you want to be considered for ranking, you have to submit your reasoning traces. And that has actually disqualified some of our past guests. Cosine was doing well on SWE-Bench, but they didn't want to leak those results. So that's why you don't see o1-preview on SWE-Bench, because they don't submit their reasoning traces. And obviously, it's IP. But also, if you're going to be more open, then that's one way to be more open. So your model is not going to be open source, right? It's going to be an endpoint that you provide. Okay, cool. And then pricing, also the same as OpenAI, just kind of based on...Lin [00:36:25]: Yeah, this is... I don't have, actually, information. Everything is going so fast, we haven't even thought about that yet. Yeah, I should be more prepared.Swyx [00:36:33]: I mean, this is live. You know, it's nice to just talk about it as it goes live. Any other things that you want feedback on or you're thinking through?
It's kind of nice to just talk about something when it's not decided yet. About this new model. It's going to be exciting. It's going to generate a lot of buzz. Right.Lin [00:36:51]: I'm very excited to see how people are going to use this model. So there's already a Reddit discussion about it. And people are asking very deep, mathematical questions. And the model got them right, which is surprising. And internally, we're also asking the model to generate what is AGI. And it generates a very complicated DAG thinking process. So we're having a lot of fun testing this internally. But I'm more curious, how will people use it? What kind of applications are they going to try and test on it? And that's where we really like to hear feedback from the community. And also feedback to us: what works out well? What doesn't work out well? What works out well but surprises them? And what kind of things do they think we should improve on? Those kinds of feedback will be tremendously helpful.Swyx [00:37:44]: Yeah. So I've been a production user of Preview and Mini since launch. I would say they're very, very obvious jumps in quality. So much so that they made Claude look bad. And they made the previous state-of-the-art look bad. It's really that stark, that difference. The number one thing, just feedback or feature requests, is people want control on the budget. Because right now, in o1, it kind of decides its own thinking budget. But sometimes you know how hard the problem is. And you want to actually tell the model, spend two minutes on this. Or spend some dollar amount. Maybe it's time, maybe it's dollars. I don't know what the budget is. That makes a lot of sense.Lin [00:38:27]: So we actually thought about that requirement. And it should be, at some point, we need to support that. Not initially. But that makes a lot of sense.Swyx [00:38:38]: Okay. So that was a fascinating overview of just the things that you're working on. First of all, I realized that... I don't know if I've ever given you this feedback. But I think you guys are one of the reasons I agreed to advise you. Because I think when you first met me, I was kind of dubious. I was like... Who are you? There's Replicate. There's Together. There's Lepton. There's a whole bunch of other players. You're in very, very competitive fields. Like, why will you win? And the reason I actually changed my mind was I saw you guys shipping. I think your surface area is very big. The team is not that big. No. We're only 40 people. Yeah. And now here you are trying to compete with OpenAI and everyone else. What is the secret?Lin [00:39:21]: I think the team. The team is the secret.Swyx [00:39:23]: Oh boy. So there's no thing I can just copy. You just... No.Lin [00:39:30]: I think we all come from a very aligned culture. Because most of our team came from Meta.Swyx [00:39:38]: Yeah.Lin [00:39:38]: And many startups. So we really believe in results. One is results. And second is customers. We're very customer obsessed. And we don't want to drive adoption for the sake of adoption. We really want to make sure we understand we are delivering a lot of business value to the customer. And we really value their feedback. So we would wake up at midnight and deploy some model for them, shuffle some capacity for them. And yeah, over the weekend, no brainer.Swyx [00:40:15]: So yeah.Lin [00:40:15]: So that's just how we work as a team. And the caliber of the team is really, really high as well. So, as a plug, we're hiring. We're expanding very, very fast.
So if you are passionate about working on the most cutting-edge technology in the GenAI space, come talk with us. Yeah.Swyx [00:40:38]: Let's talk a little bit about that customer journey. I think one of your more famous customers is Cursor. We were the first podcast to have Cursor on. And then obviously since then, they have blown up. Cause and effect are not related. But you guys especially worked on a fast apply model, where you were one of the first people to work on speculative decoding in a production setting. Maybe just talk about what was the behind the scenes of working with Cursor?Lin [00:41:03]: I will say Cursor is a very, very unique team. I think the unique part is the team has very high technical caliber. There's no question about it. But while many companies building coding copilots will say, I'm going to build the whole entire stack because I can, they are unique in the sense that they seek partnership. Not because they cannot. They're fully capable, but they know where to focus. That to me is amazing. And of course, they want to find the best partner. So we spent some time working together. They are pushing us very aggressively, because for them to deliver a high caliber product experience, they need the latency. They need the interactivity, but also high quality at the same time. So actually, we expanded our product features quite a lot as we supported Cursor. And they are growing so fast. And we massively scaled quickly across multiple regions. And we developed a pretty intense inference stack, almost similar to what we did for Meta. I think that's a very, very interesting engagement. And through that, there's a lot of trust being built. They realized, hey, this is a team they can really partner with, and they can go big with. That comes back to, hey, we're really customer obsessed. And all the engineers working with them, there's just an enormous amount of time syncing together with them and discussing. And we're not big on meetings, but we are like Slack channels always on. Yeah, so you almost feel like working as one team. So I think that's a real highlight.Swyx [00:42:38]: Yeah. For those who don't know, so basically Cursor is a VS Code fork. But most of the time, people will be using closed models. Like I actually use a lot of Sonnet. So you're not involved there, right? It's not like you host Sonnet or you have any partnership with it. You're involved where Cursor's small models, or like their house brand models, are concerned, right?Lin [00:42:58]: I don't know what I can say about the things they haven't said.Swyx [00:43:04]: Very obviously, the drop-down is 4o, but in Cursor, right? So I assume that the Cursor side is the Fireworks side. And then the other side, they're calling out to the others. Just kind of curious. And then, do you see any more opportunity on the... You know, I think you made a big splash with 1,000 tokens per second. That was because of speculative decoding. Is there more to push there?Lin [00:43:25]: We push a lot. Actually, when I mentioned Fire Optimizer, right? So as in, we have a unique automation stack that is one size fits one. We actually deployed it to Cursor earlier on. Basically optimized for their specific workload. And there's a lot of juice to extract out of there. And we see success in that product. It actually can be widely adopted. So that's why we started a separate product line called Fire Optimizer. So speculative decoding is just one approach. And speculative decoding here is not static.
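Since speculative decoding keeps coming up, a toy greedy-decoding version of the draft-and-verify idea may help: a cheap model proposes a few tokens, the large model checks them all in one pass, and the longest agreeing prefix is kept. `draft` and `verify` are stand-ins, not Fireworks APIs, and real systems (Eagle or Medusa heads, etc., discussed next) are considerably more sophisticated:

```python
# Toy draft-and-verify speculative decoding under greedy decoding.
# draft(prefix, k) -> k proposed tokens from the small model.
# verify(prefix, proposed) -> the large model's own greedy token at each
#   position i, i.e. what it would emit after prefix + proposed[:i].
def speculative_step(prefix: list[int], draft, verify, k: int = 4) -> list[int]:
    proposed = draft(prefix, k)
    target = verify(prefix, proposed)  # one large-model pass scores all k
    n = 0
    while n < k and target[n] == proposed[n]:
        n += 1                          # keep the agreeing prefix
    if n < k:
        # On the first disagreement, keep the large model's token instead,
        # so we still make progress even when the draft is wrong.
        return prefix + proposed[:n] + [target[n]]
    return prefix + proposed            # all k draft tokens accepted
```

The trade-off Lin describes next is in choosing the draft: the better the small model matches the large one on a given workload, the more tokens get accepted per pass, which is why a one-size-fits-one alignment to the customer's traffic pays off.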
We actually wrote a blog post about it. There are so many different ways to do speculative decoding. You can pair a small model with a large model in the same model family, or you can have Eagle heads, and so on. There are different trade-offs in which approach you take; it really depends on your workload. And given your workload, we can align the Eagle heads or Medusa heads or the small-big model pair much better to extract the best latency reduction. All of that is part of the Fire Optimizer offering.

Alessio [00:44:23]: You mentioned some of the other inference providers. The other question people always have is around benchmarks, because you get different performance on different platforms. How should people think about that? People are like, hey, Llama 3.2 is X on MMLU. But maybe using speculative decoding, you go down a different path, or maybe some providers run a quantized model. How much should people care about the delta between all the magic that you do and what a raw model...

Lin [00:44:57]: Okay, so there are two big development cycles. One is experimentation, where they need fast iteration. They don't want to think about quality; they just want to experiment with the product experience and so on. So that's one. And then it looks good, and they move past product-market fit into scaling, where quality becomes really important, and latency and all the other things become important too. During the experimentation phase, just pick a good model. Don't worry about anything else. Make sure you can even generate the right solution for your product. That's the focus. Then, post-product-market fit, the three-dimensional optimization across quality, latency, and cost starts to kick in, and the question becomes where you should land. To me, it's purely a product decision. For many products, if choosing lower quality but better speed and lower cost makes no difference to the product experience, then you should do it. That's why I think inference is part of the validation. Validation doesn't stop at offline evals; it goes through A/B testing, through inference. And that's where we offer various different configurations for you to test which setting is best. So this is traditional product evaluation: product evaluation should also take new model versions and different model setups into consideration.

Swyx [00:46:22]: I want to specifically talk about what happened a few months ago with some of your major competitors. I mean, all of this is public. What is your take on what happened? And maybe you want to set the record straight on how Fireworks does quantization, because I think a lot of people may have outdated perceptions, or they didn't read the clarification post on your approach to quantization.

Lin [00:46:44]: First of all, it was a complete surprise to us that, without any notice, we got called out.

Swyx [00:46:51]: Specifically by name, which is normally not what...

Lin [00:46:54]: Yeah, in a public post, with a certain interpretation of our quality. So I was really surprised. And it's not a good way to compete, right? We want to compete fairly. And oftentimes, when one vendor gives out results, its interpretation of another vendor is extremely biased. So we actually refrain from doing any of that, and we happily partner with third parties to do the most fair evaluation. So we were very surprised.
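To make the speculative decoding discussion above concrete, here is a minimal Python sketch of the small/large draft-model variant. It is an illustration under stated assumptions, not Fireworks' implementation: it uses greedy proposal and verification, assumes HuggingFace-style models whose outputs expose `.logits`, and assumes batch size 1. The Eagle and Medusa variants mentioned above differ in that the draft tokens come from extra prediction heads attached to the target model rather than from a separate small model.

```python
import torch

@torch.no_grad()
def speculative_decode(target_model, draft_model, input_ids, k=4, max_new_tokens=64):
    """Greedy draft-and-verify loop: propose k cheap tokens with the small
    model, then check them all with a single pass of the large model."""
    ids = input_ids
    while ids.shape[1] - input_ids.shape[1] < max_new_tokens:
        base_len = ids.shape[1]

        # 1. Draft: the small model proposes k tokens autoregressively (cheap).
        draft_ids = ids
        for _ in range(k):
            logits = draft_model(draft_ids).logits[:, -1, :]
            draft_ids = torch.cat([draft_ids, logits.argmax(-1, keepdim=True)], dim=1)
        proposed = draft_ids[:, base_len:]                       # shape (1, k)

        # 2. Verify: one target-model pass scores every proposal position at once.
        tgt_logits = target_model(draft_ids).logits
        verify = tgt_logits[:, base_len - 1 : -1, :].argmax(-1)  # shape (1, k)

        # 3. Accept the longest prefix where draft and target agree, then take
        #    the target's own token at the first mismatch, so every iteration
        #    is guaranteed to emit at least one token.
        matches = (verify == proposed).long().cumprod(dim=1)
        n_accept = int(matches.sum())
        ids = torch.cat([ids, proposed[:, :n_accept]], dim=1)
        if n_accept < k:
            ids = torch.cat([ids, verify[:, n_accept : n_accept + 1]], dim=1)
    return ids
```

The win comes from step 2: when the draft agrees with the target most of the time, the expensive model runs roughly once per k-plus-one emitted tokens instead of once per token, which is the kind of effect behind headline throughput numbers like the 1,000 tokens per second mentioned above.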
And we don't think that's a good way to figure out the competitive landscape. So then we reacted. When it comes to quantization and its interpretation, we actually wrote a very thorough blog post. Because, again, there is no one-size-fits-all. We have various different quantization schemes. We can quantize very different parts of the model, from weights to activations to cross-GPU communication, and they can use different quantization schemes or be consistent across the board. And again, it's a trade-off: a trade-off across the three dimensions of quality, latency, and cost. For our customers, we actually let them find the best optimized point, and we have a very thorough evaluation process to pick that point. But for self-serve, there's only one point to pick, and there's no customization available. So of course, based on what we hear from many customers, we have to pick one point. And in the end, AA (Artificial Analysis) later published a quality measure, and we actually looked really good. So what I mean is: I will leave the evaluation of quality or performance to third parties, and work with them to find the most fair benchmark. I think that's a good approach and a good methodology. But I'm not a fan of the approach of calling out specific names

Swyx [00:48:55]: and critiquing other competitors in a very biased way. This happens with databases as well. I think you're the more politically correct one, and then Dima is the more... something like that. It's you on Twitter.

Lin [00:49:11]: It's like the Russian... We partner. We play different roles.

Swyx [00:49:20]: Another one that I wanted to... this is the last one on the competition side. There's a perception of price wars in hosting open-source models. And we talked about the competitiveness in the market. Do you aim to make margin on open-source models?

Lin [00:49:38]: Oh, absolutely, yes. But when we think about pricing, it really needs to correspond to the value we're delivering. If the value is limited, or there are a lot of people delivering the same value with no differentiation, there's only one way for price to go: down, through competition. If I take a big step back, we're more often compared with the closed-model providers' APIs, right? The closed-model providers' cost structure is even more interesting, because we don't bear any training costs; we focus on inference optimization, and that's where we continue to add a lot of product value. So that's how we think about product. But the closed-source model providers bear a lot of training costs, and they need to amortize those training costs into inference. So that creates a very interesting dynamic: if we match pricing there, how they are going to make money is very, very interesting.

Swyx [00:50:37]: So for listeners: OpenAI in 2024, roughly $4 billion in revenue, $3 billion in training compute, $2 billion in inference compute, $1 billion in research compute amortization, and $700 million in salaries. So that is like...

Swyx [00:50:59]: I mean, a lot of R&D.

Lin [00:51:01]: Yeah, and Meta basically makes that zero. So that's a very, very interesting dynamic we're operating within. But coming back to inference: as I mentioned, our product is a platform. We're not just a single-model-as-a-service provider, like many other inference providers serving a single model.
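To give a feel for the trade-off being described, here is a toy, generic sketch of one quantization scheme: per-tensor symmetric INT8 weight quantization. This is not Fireworks' stack; the point is only that where you quantize (weights, activations, communication) and how (per-tensor vs. per-channel scales, 8-bit vs. lower) moves you along the quality/latency/cost curve, which is exactly why two providers serving "the same model" can score differently.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Per-tensor symmetric quantization: w is approximated by scale * q,
    with q an int8 tensor in [-127, 127]."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
# A fake fp32 weight matrix standing in for one transformer layer.
w = rng.normal(0.0, 0.02, size=(4096, 4096)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)

# Memory drops 4x versus fp32; the question is how much error that buys.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative weight error: {rel_err:.4%}")
```

Per-channel scales, activation quantization, and quantized inter-GPU traffic each shift the error and the speedup differently, which is why a single self-reported benchmark number rarely tells the whole story.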
We have our optimizer to highly customize for your inference workload, and we have a compound AI system that significantly simplifies your path to high quality, low latency, and low cost. Those are all very different from other providers.

Alessio [00:51:38]: What do people not know about the work that you do? People are like, okay, Fireworks, you run models very quickly, you have the function-calling model. Is there any underrated part of Fireworks that more people should try?

Lin [00:51:51]: Yeah, actually, one user posted on x.com that Fireworks let him upload a LoRA adapter to the serving model and use it at the same cost. Nobody else provides that. That's because we have something special: we rolled out multi-LoRA last year, actually, and we've had this function for a long time. Many people have been using it, but it's not well known that if you fine-tune your model, you don't need an on-demand deployment. If your fine-tune is a LoRA, you can upload your LoRA adapter and we deploy it as if it were a new model. You get your endpoint and can use it directly, but at the same cost as the base model. So I'm happy that user is marketing it for us. He discovered that feature, but we've had it since last year. So the feedback to me is that we have a lot of very, very good features, as Sean just mentioned.

Swyx [00:52:57]: I'm the advisor to the company, and I didn't even know that you had speculative decoding released.

Lin [00:53:02]: We've also had prompt caching since way back last year. We have many, yeah. So I think those are some of the underrated features, and if you're a developer using our self-serve platform, please try them out.

Swyx [00:53:16]: The LoRA thing is interesting, because the reason people add additional costs to it is not that they feel like charging people. Normally, in LoRA serving setups, there is a cost to loading those weights and dedicating a machine to that inference. How come you can avoid it?

Lin [00:53:36]: Yeah, so this is our technique called multi-LoRA. We basically have many LoRA adapters share the same base model, and that significantly reduces the memory footprint of serving. One base model can sustain a hundred to a thousand LoRA adapters, and all these different LoRA adapters can direct their traffic to the same base model, where the base model dominates the cost. That's how we can keep the per-million-token pricing, the tokens per dollar, the same as the base model.

Swyx [00:54:13]: Awesome. Is there anything you want to request from the community, or that you're looking for model-wise or tooling-wise, that you think someone should be working on?

Lin [00:54:23]: Yeah, we really want to get a lot of feedback from application developers who are starting to build on gen AI, whether they have already adopted it or are starting to think about new use cases: try out Fireworks first, and let us know what works out really well for you, what your wishlist is, and what sucks, right? What is not working out for you, so we can continue to improve. And for our new product launches, we typically launch to a small group of people. Usually we launch on our Discord first, to have a set of people use it first. So please join our Discord channel. We have a lot of communication going on there.
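Here is a minimal sketch of the multi-LoRA serving idea explained above: many low-rank adapters share one copy of the base weights, so the marginal cost of each extra fine-tune is two small matrices rather than a whole model. All class and method names are hypothetical, not Fireworks' API, and real serving stacks also batch requests for different adapters together; this only shows the memory and compute sharing.

```python
import numpy as np

class MultiLoRALinear:
    """One shared base weight; each adapter contributes a low-rank delta B @ A."""

    def __init__(self, base_weight: np.ndarray):
        self.W = base_weight        # shared across all adapters, dominates memory
        self.adapters = {}          # adapter_id -> (A, B, alpha)

    def add_adapter(self, adapter_id: str, rank: int = 8, alpha: float = 16.0):
        d_out, d_in = self.W.shape
        rng = np.random.default_rng(abs(hash(adapter_id)) % 2**32)
        A = rng.normal(0.0, 0.01, size=(rank, d_in))   # tiny next to W
        B = np.zeros((d_out, rank))                    # standard LoRA init: delta starts at 0
        self.adapters[adapter_id] = (A, B, alpha)

    def forward(self, x: np.ndarray, adapter_id: str | None = None) -> np.ndarray:
        y = x @ self.W.T                               # base compute, shared by everyone
        if adapter_id is not None:
            A, B, alpha = self.adapters[adapter_id]
            y = y + (x @ A.T) @ B.T * (alpha / A.shape[0])
        return y

# One base layer serving three "fine-tunes", routed per request by adapter id:
layer = MultiLoRALinear(np.random.default_rng(1).normal(0.0, 0.02, (512, 512)))
for tenant in ("tenant-a", "tenant-b", "tenant-c"):
    layer.add_adapter(tenant)
x = np.ones((1, 512), dtype=np.float32)
outputs = {t: layer.forward(x, t) for t in ("tenant-a", "tenant-b", "tenant-c")}
```

Because the base matmul dominates and each (A, B) pair is tiny compared with the base weights, one deployment can plausibly host very many adapters at once, which is consistent with the hundred-to-a-thousand figure quoted above.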
Again, you can also give us feedback there. We'll be starting office hours for you to talk directly with our DevRel and engineers and exchange notes in more depth.

Alessio [00:55:17]: And you're hiring across the board?

Lin [00:55:18]: We're hiring across the board. We're hiring front-end engineers, cloud infrastructure engineers, back-end system optimization engineers, and applied researchers: researchers who have done post-training, who have done a lot of fine-tuning, and so on.

Swyx [00:55:34]: That's it. Thank you.

Lin: Thanks for having us.

Get full access to Latent Space at www.latent.space/subscribe
Order and disorder, a freeform haze of garbage guitars, shorted electronics, found detritus, collage, linear songs, sounds from strange lands. Contact me at btradio85@gmail.com. ALRUNE ROD - Du Taler Og Sir - Sonet Årene 1969-72 (Sonet, 1998)FOREVER ON HOLDSPARKS - Your Call Is Very Important To Us - Li'l Beethoven (Palm Pictures, 2002)SON OF DRIBBLE - Sick In the Hills - Poking a Hole in a Bag of Tears (Minimum Table Stacks, 2024)TERRY STAMP - Roadcrew Blues - Blue Redondo (1978, re: Just Add Water, 2024)THE GORIES - Cry Girl (BC, 2024)THE EX - Great! - 7" (Ex Records, 2024)RUDIMENTARY PENI - Il Papus Puss - Pope Adrian 37th Psychristiatric (Outer Himalayan, 1995)THE PROLETARIAT - Embraced - Soma Holiday (1983, re: SS, 2016)STEREO JOY - Mind Imperfection - 10 Minutes With Stereo Joy (Dirtbag, 2022)MARIA BONITA - Rezo El Rosario - V/A: Back Up Dos: Mexican Tecno Pop 1982-1989 (Dark Entries, 2024)LES DOUBLE SIX - Sweets - Les Double Six (Open, 1962)ALUK TODOLO - ●●• - Lux (The Ajna Offensive, 2024)RAPIDE ET FURIEUX - Rouille - Premières Démos (BC, 2024)CHICO MELLO / HELINHO BRANDÃO - Dança - Chico Mello, Helinho Brandão (1984, re: Black Truffle, 2024)CUMBIA MACUCHA - Cumbia de Los Bee Gees - V/A: Super Disco Pirata - De Tepito Para El Mundo 1965-1980 (Analog Africa, 2024)JOE GOLDMARK - The Way - All Hat, No Cattle (Hightone, 1999)KEITH HUDSON - Playing It Right Dub - Playing It Cool & Playing It Right (1981, re: Basic Replay, 2003)MARSHALL APPLEWHITE - Just For You Baby - Just For You Baby (Clan Destine, 2024)AUTECHRE - Gnit - Tri Repetae (Warp, 1995)SANDY BULL - No Deposit, No Return Blues - Still Valentine's Day 1969 (No Quarter, 2024)DJBLACKMETA - Landing Fog - Blackout (BC, 2024)J. GUY LAUGHLIN / BHOB RAINEY - Curiouser 04 - Curiouser (BC, 2024)CORSANO BAIZA WATT TRIO - Metamorphosis - s/t (Yucca Alta, 2024)CIRCLE VS. CIRCLE - Sick Child - Supermassive (Ektro, 2024)
Send Everyday AI and Jordan a text message. Win a free year of ChatGPT or other prizes! Find out how. Perplexity is a game-changer for scaling your business. However, you might have some misconceptions about this AI powerhouse that are preventing you from maximizing its potential. We're here to share the essential 101 on Perplexity and highlight five key things you should know about it. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan questions on Perplexity. Related Episodes: Ep 271: OpenAI Releases GPT-4o: 12 things you need to know. Ep 301: Anthropic Claude 3.5 Sonnet: How it compares to ChatGPT's GPT-4o. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: 1. Overview of Perplexity 2. Differences Between Free and Paid Perplexity 3. Use Cases of Perplexity 4. View and Edit Sources Using Perplexity 5. Future Implications of Perplexity. Timestamps: 02:30 Perplexity overview 08:52 Many hours spent doing work on the open Internet. 13:56 Using Perplexity to create custom instructions, responses. 15:16 Save time by curating and publishing content. 19:43 Decent options, limited internet connection, paid plans. 23:57 Live explanation and request for audience input. 25:02 Analyze company's and 3 competitors' insights. 30:07 Check sneaker dunk, remove old or irrelevant sources. 32:36 Nike Air Jordans: SWOT analysis. 39:08 Limit search to 5 websites at a time. 42:09 GPT-4o update competes with Perplexity. 44:11 GPT-4o from OpenAI provides valuable customization options. 48:07 Website blocks scrapers to protect information. 50:16 Fewer clicks on websites, future in AI. Keywords: Jordan Wilson, Perplexity, paid version, free version, advanced models, GPT-4, OpenAI, Claude 3.5 Sonnet, AI images, SWOT analysis, Nike, Adidas Yeezy, Under Armour Curry, New Balance Kawhi, athletic shoe brands, fertility clinics, SEO, ChatGPT-4, Perplexity features, large language models, research tools, online news scraping, controversy, search engine, Microsoft Copilot, Everyday AI, FBI, Russian bot farm, antitrust issues, Perplexity as answers engine. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Written by Majda Travnik Vode, read by Aleksander Golja and Eva Longyka Marušič, translated by Alojz Gradnik, Brane Senegačnik and Matej Venier. The lavish bilingual edition of the sonnets of the great Italian Renaissance poet Vittoria Colonna, published only two years after the integral translation of Petrarch's Canzoniere, confirms in its own way that the historical dialogue with great poetry is never concluded, and indirectly also that neither the sonnet nor Petrarchism is dead even today. If the Provençal troubadours, Petrarch and his imitators, the Petrarchists, decisively shaped the way we experience and put into words the emotion of love, exalting the beloved, her beauty and virtue, and describing the lover's wavering between suffering and bliss, it seems no less evident that the European way of feeling has always flowed most organically through the sonnet. In Slovenia, the scholar Boris A. Novak, himself a master of the sonnet, has most persistently drawn attention to how fundamentally the sonnet form marks European lyric poetry, and to its imperishability. In his 2004 monograph Sonet he set out the theory and history of this poetic form and concluded that "from its emergence in the 13th century to the present day, the sonnet has undoubtedly been the crown of European poetry." In his words, the sonnet's primacy among all poetic forms rests on its "condensed form, in which every word must be weighed, while the mutual relations between words must establish a complex web of meaning." Although this is far from the only possibility, in Slovenia, following Prešeren's example, it is above all the most canonical and demanding form of the sonnet that has taken hold: two quatrains, two tercets, the iambic hendecasyllable and enclosing feminine rhymes. The key property of the sonnet is thus a condensed sonority, which arises as a consequence of its extremely constrained form and which thereby, paradoxically, also intensifies the emotional energy and the weight of the existential message placed in the poem. On the other hand, the sonnet has never known any restrictions of motif or theme, and can hold the energy of religious feeling just as effectively as humour, satire and so on. While other medieval forms died out, the sonnet survived every historical upheaval of worldview and aesthetics and became a distinctive extension and catalyst of modern European subjectivity, as if that subjectivity had found in it its natural environment and a fitting frame through which to pour out its sensibility, from the sublime to the banal and the detached, in a heightened, dramatic and highly aestheticised way. A similar awareness of the power of the sonnet seems to have pervaded the age in which Vittoria Colonna worked, for in the 16th century alone hundreds of thousands of Petrarchist sonnets were written. Although her sonnets, in motif and in form, did not escape Petrarch's influence either, what separates Vittoria Colonna from the vast crowd of Petrarch's epigones is an unusually strong authorial stamp and the authenticity of the existential experience impressed into the sonnet form. The pure and sincere voice of the poet's lyric self strikes and enchants from the very first lines: "On that day when the beloved image entered my heart, as though it would make its dear home here for many years, it glowed. 'Is it human? Is it divine?' I doubted.
// And in an instant the soul gave it sweet freedom; forgetting itself, it blazed up joyfully then, and bowed forever to its will." In these verses we can clearly observe how the effect of direct confession, with which the poet instantly draws near to the modern reader, is aided by the strict sonnet form, which bridles her inspiration and ensures that the strong emotions generating her verses, at a time when the poetic childhood of emotion had long since passed, never slip into pathos, cliché, pose or mere artistry, but remain utterly in earnest throughout. The central emotional motor around which the sonnets of Vittoria Colonna, a poet who wrote for roughly twenty-five years, began to take shape was not so much existential dread as an almost inconsolable grief at the loss of her beloved husband. The poet in fact began writing love lyric, which later sublimates into elevated spiritual poetry, only after her husband's death. Vittoria Colonna was born into a noble family near Rome, and her marriage to Ferrante d'Avalos, Marquis of Pescara, was arranged already in her childhood. They married when Vittoria was 19. Her husband served as a senior officer in the Habsburg army within the Holy Roman Empire and died in 1525 of wounds sustained in the otherwise victorious Battle of Pavia, in which the Habsburgs gained control of northern Italy. As a widow (her poems are also known as "widow poems") Vittoria then lived mostly in convents, though she never took vows. She was sympathetic to the reform movements spreading rapidly across Europe, attending, for example, the circle of the spirituali. She died in 1547, when the Counter-Reformation persecutions had already flared up; in that very year Primož Trubar had to withdraw into exile. As an artist, Vittoria Colonna was not ambitious like her great predecessor Petrarch, and she shared her work with the public only reluctantly. One of the rare exceptions is a manuscript collection of 103 spiritual sonnets that she personally presented to Michelangelo Buonarroti. She was bound to him by a close friendship, and he too dedicated several sonnets to her. Despite this stance and her withdrawn life, the poet achieved wide fame in her lifetime, and her correspondence shows that she exchanged letters with many of the most influential figures of her time: preachers, men of letters and painters. For a time her mentor was Pietro Bembo, the highest linguistic and literary authority of the age, and Ludovico Ariosto wrote of her poetry in his epic Orlando Furioso: "A gentle style that I cannot surpass." Vittoria Colonna's oeuvre is preserved in 60 manuscripts, and 22 printed editions containing her poems, published before 1547, have so far been discovered. Behind the poet's back, dozens of unauthorised manuscript collections circulated at courts and in poetic and learned circles from Italy to France. In Slovenia we first encountered the great poet in 1940, when Alojz Gradnik translated six of her sonnets for the anthology Italijanska lirika; as a tribute to Gradnik's imperishable translator's craft, these are included in the present selection as well. The book brings together 112 sonnets in all, roughly a quarter of the poet's oeuvre. The sonnets are divided into three thematic sections: Love Verses, Spiritual Verses and Epistolary Verses, and the selection is also accompanied by an in-depth biographical and literary-theoretical study by Professor Patrizia Farinelli and an interesting longer reflection by Brane Senegačnik on the fundamental challenges of translating poetic texts.
Vittoria Colonna, who took shape as a poet in the lively milieu of Neapolitan Petrarchism, came to be regarded, after the thematic turn from love lyric to spiritual poetry that occurred around 1536, as the founder of devotional poetry written in the Petrarchist tradition. Patrizia Farinelli notes that the poet passed from exalting her husband's person and death to exalting Christ and the redemptive meaning of his death. The two thematic arcs touch movingly in the motif of the sun: in the love poems the sun, written with a lowercase initial, is the most frequent metaphor for the beloved, while in the spiritual poetry the poet names Christ, in his role of consoler and redeemer, the Sun with a capital initial. Yet from the inner dramaturgy of the spiritual poems we can see that the path from the one "sun" to the other was by no means easy, and even later her otherwise sincere and living faith oscillates between feelings of complete trust, grace, faith in the afterlife and reunion with the beloved, and feelings of the deepest sorrow, total resignation and even thoughts of suicide: "My own hand, driven by grief, would have done it, had not the wish that we might soon meet there restrained it." The pain she endures is movingly attested throughout the collection by numerous verses: "Since my sun hid its face, the order of nature has shifted from its place, unless pain hides the truth from my senses." Because of their deep contemplative tone and persistent invocation of God's presence, Vittoria Colonna's spiritual poems often resemble prayer, and many of them can in fact be read as marvellous prayers in the form of sonnets. Here, even more than in the love poems, where she often imitates Petrarch's vocabulary and syntax, her authorial authenticity comes to the fore, as she brings an original, solemn evangelical tone into the poems and measures out her rhetorical effects with corresponding care; very frequent, for example, is her use of enjambment. We may conclude that, as a whole, Vittoria Colonna's oeuvre imprints itself lastingly on the reader's consciousness through its confessional sincerity, and even more through the elevation and unusual dignity of its style, which justly places her poetry among the peaks of European Renaissance lyric.
Send Everyday AI and Jordan a text message. Wanna know a lil secret? Perplexity's a cheat code for growing your biz. Chances are, though, you've got a few things wrong about this AI powerhouse and you're not using it to the fullest. We dish the real 101 on Perplexity and the 5 things you need to know about it. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan questions on Perplexity. Related Episodes: Ep 271: OpenAI Releases GPT-4o: 12 things you need to know. Ep 301: Anthropic Claude 3.5 Sonnet: How it compares to ChatGPT's GPT-4o. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: 1. Overview of Perplexity 2. Differences Between Free and Paid Perplexity 3. Use Cases of Perplexity 4. View and Edit Sources Using Perplexity 5. Future Implications of Perplexity. Timestamps: 01:20 Daily AI news 06:30 Perplexity overview 08:52 Many hours spent doing work on the open Internet. 13:56 Using Perplexity to create custom instructions, responses. 15:16 Save time by curating and publishing content. 19:43 Decent options, limited internet connection, paid plans. 23:57 Live explanation and request for audience input. 25:02 Analyze company's and 3 competitors' insights. 30:07 Check sneaker dunk, remove old or irrelevant sources. 32:36 Nike Air Jordans: SWOT analysis. 39:08 Limit search to 5 websites at a time. 42:09 GPT-4o update competes with Perplexity. 44:11 GPT-4o from OpenAI provides valuable customization options. 48:07 Website blocks scrapers to protect information. 52:16 Fewer clicks on websites, future in AI. Keywords: Jordan Wilson, Perplexity, paid version, free version, advanced models, GPT-4, OpenAI, Claude 3.5 Sonnet, AI images, SWOT analysis, Nike, Adidas Yeezy, Under Armour Curry, New Balance Kawhi, athletic shoe brands, fertility clinics, SEO, ChatGPT-4, Perplexity features, large language models, research tools, online news scraping, controversy, search engine, Microsoft Copilot, Everyday AI, FBI, Russian bot farm, antitrust issues, Perplexity as answers engine. Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/
Wiele2Wiele went driving in the refreshed Kia Sonet, which stands head and shoulders above the rest in this market segment. They also spent time with the BMW X2 sDrive 18i M Sport. There is a car-value tip, advice on motorcycle tyres, and they will make you keen on an 'Isle of Man TT' tour for 2025. Wiele2Wiele on Facebook · Wiele2Wiele on Maroela Media
Lindi Strydom, founder of GROOTfm 90.5's #HOOPstoot, talks to Venita Engelbrecht and Sonet Stofberg about HOPE! She phoned in all the way from Somerset West, and inspires us with His Eternal truth, in which her sustainable hope, and ours, lies.
Head of the legal department of FOR SA, an organisation that advocates for freedom of religion in South Africa. Liesl shares her knowledge and wisdom on active citizenship.
Trauma doctor Dr Jaco van Niekerk and his wife, Esmaralda, together with renowned paramedic Xander Loubser, join Venita Engelbrecht and Sonet Stofberg to talk about healthcare as part of the GROOTfm 90.5 GROOT Day of Prayer.
The KIA Sonet starts at $372,900 pesos for the LX version. From there it climbs to $402,900 for the EX version, and finally the top-of-the-range version can be yours for $452,900 pesos.
In this edition of Autos y más, we start by talking about the ABT CUPRA team and the fourth round of this season's Formula E World Championship in São Paulo, Brazil, the home city of the team's drivers, Lucas di Grassi and Nico Müller; Lucas will be fighting to score ABT CUPRA's first points of the year. We also talked about the arrival of the Kia Sonet in Mexico: it will be available at the brand's dealerships from 19 March at a price of $372,900 pesos. In addition, we gave a summary of the arrival of the Chinese brands Bestune and JIM, which, hand in hand with Shanghai Auto Assembly Group (SAAG), the luxury arm of the FAW Group, aim to sell five to eight thousand units of their models in Mexico. Autos y más is now everywhere: on radio, on TV, as a podcast and on all social networks. Don't miss the live broadcast, because we will have lots of giveaways; remember to tune in Monday to Friday from 8 to 9 pm and Saturdays from 10 am to 12 pm on your favourite station, MVS Noticias, at 102.5 FM. See omnystudio.com/listener for privacy information.
The Kia Sonet, "the indulgent small SUV" as Kartikeya Singhee likes to call it, has been updated with small design changes, improved drivability, better suspension tuning and a more comfortable interior. On MotorInc First, Kartikeya has the full details on why the Sonet is even more impressive now. ~ MotorInc First is a discussion between experts Kartikeya Singhee and Shumi (Shubhabrata Marmar), co-founders of MotorInc, about new vehicles. Each episode discusses one vehicle in great detail, covering the experience of driving or riding it, as well as what it means for the industry. ~ CHAPTERS 00:00 Bhopal & Kia Sonet 01:55 What's Changed 04:37 What Needed To Be Fixed 07:34 Cabin Updates 09:34 More Equipment 10:30 Styling Updates 11:36 The Kia App 15:15 Better Back Seat 18:45 Driving The Petrol Sonet 22:51 Sonet Off-Road 23:59 More Driveable 25:53 Driving The Diesel Sonet 31:00 Kartikeya's Picks 32:53 Small Details 34:52 New ADAS Systems 39:08 Bose Sound System 39:31 vs Tata Nexon 42:11 Driving Modes 43:11 Kartik's Caveats 44:06 Quick Summary 48:33 Crash Test Ratings 55:52 Closing Comments ~ #MotorInc #MotorIncFirst #FirstDrive #FirstImpressions #FirstLook #KiaSonet #Sonet #SUV #compactSUV #TataNexon
Welcome to The SaaS CFO Podcast! In today's episode, our host Ben sits down with special guest Dharmendra Mohan, co-founder and CEO of Sonet.IO. Dharmendra shares his background in R&D, including his experience at Symantec and Bluecode Systems, before embarking on the journey of starting Sonet. He explains the tipping point that led him to realize the potential of the cloud and the need for a better solution to connect remote workers to applications securely and hassle-free. Dharmendra dives deep into the products and services offered by Sonet, highlighting their remote work platform and how it benefits various industries, particularly the tech and services sectors. He also discusses the company's go-to-market strategy, recent funding round, and plans for expansion. Tune in to this insightful episode to learn more about the innovative work being done at Sonet.IO. Show Notes: [00:02:18] Sonet: Remote work platform, secure connection. [00:04:07] Securing and monitoring data for maximum protection. [00:07:28] Dual strategy: content creation and outreach methods. [00:10:11] Critical to have the right product-market fit and a cash-efficient, profitable, sustainable business. Prototype, customer feedback, and execution are important. [00:13:38] Constantly upgrading the platform with secure, mobile accessibility. Links: SaaS Fundraising Stories: https://www.thesaasnews.com/news/sonet-io-raises-6-million-in-seed-round Dharmendra Mohan's LinkedIn: https://www.linkedin.com/in/dharmendra-mohan/ Sonet.IO's LinkedIn: https://www.linkedin.com/company/sonet-io/ To know more about Ben, check out the links below: Subscribe to Ben's daily metrics newsletter: https://saasmetricsschool.beehiiv.com/subscribe Subscribe to Ben's SaaS newsletter: https://mailchi.mp/df1db6bf8bca/the-saas-cfo-sign-up-landing-page SaaS Metrics courses here: https://www.thesaasacademy.com/ Join Ben's SaaS community here: https://www.thesaasacademy.com/offers/ivNjwYDx/checkout Follow Ben on LinkedIn: https://www.linkedin.com/in/benrmurray
We have fantastic news today! All the sin of the world has been atoned for and carried away! But why did Jesus have to die for that to happen? What is this about sin having to be atoned for? And what about the sin we have not yet committed? Three chairs illustrate it well.
FILMBRANSCHPODDEN - A PART OF ACASTING. WHO: Jakob Abrahamsson OCCUPATION: CEO of three companies in the film industry SEASON: 1 EPISODE: 7 A PODCAST BY: Simon Kölle www.linktr.ee/simonkolle About the episode: Jakob Abrahamsson is CEO of the distribution company NonStop Entertainment, CEO of Bio & Bistro Capitol and CEO of the production company Mylla Films. With a great love of film and an entrepreneurial mind, Jakob talks about his multifaceted career and what it is like to run his three companies. When the episode was recorded, Anette Novak was CEO of the Swedish Film Institute. SPONSORED BY: Ritualen - www.ritualen.com MENTIONS, AMONG OTHER THINGS: Bio Capitol, NonStop Entertainment, Mylla Films, unthinkable to work in film, Hitchcock, renting films, Casablanca, Västertorp, Bodils Video, Silverscreen, Fruängen, Video Nord, Hornstull, Walter Video, Video Invest, KTH, film studies, Stockholm Film School, directing, Stockholm Film Festival, the magazine Cinema, midnight screenings, Hong Kong film, Ringu 1 and 2, the Popcorn festival, Abbe, Patrik Andersson, Anna Lindström at Lucky Dogs, Nicola at Njuta, head of programming, teen comedies, thrillers, drama, B-action, a mix of films, Tarantino, anti-censorship, the Skärholmen flea market, LaserDisc, Carl Göran Andersson, Ignas Scheynius, Turner Broadcasting, Warner, cable channels, ComHem, subscriptions, Discovery, HBO, Searching for Sugar Man, Malik Bendjelloul, meetings in London, HR, finance, deputy managers, buying NonStop, moving out the same day, Jag är Ingrid, a fine start, TVOD, EST, VOD, SVOD, rental and sell-through video, streaming, Netflix, AVOD, FVOD, selling to TV, buying rights, time-limited rights, valuation, discounted cash flow, buying and selling companies, film history, wholesaler of film rights, Göteborg Film Festival, Sundance, Nordic Film Market, American Film Market, Paris Rendez-vous, Berlin Film Festival, European Film Market, MIPTV, International Television Market, Cannes, London Screenings, Venice Film Festival, Toronto Film Festival, MIPCOM, the ecosystem of independent film, the red carpet, pre-sales, more about film festivals, bidding on projects, hard to make films in Sweden without many people behind you, Nordisk Film & TV Fond, film financing, pitches, EPKs, Digibeta cassettes, empowering creators, Mylla and NonStop, Scanbox, Aurora, wearing several hats, NonStop last in line for Mylla, MGs, mostly fun, CEO of Bio Capitol, furloughs, B-Reel, Lisa Langseth, Pernilla August, TV series, Midsommar, Ari Aster, data points, estimates, very secretive in Sweden about data and viewing figures, FilmWeb, Dungeons and Dragons, film budgets, box office, streaming platforms keeping numbers secret, the state of the Swedish film industry, the industry recalibrating, a general wave of cost control, VIAPLAY, commissioners will commission less, the pendulum slowly swinging back towards feature film, the Barbie movie, self-confidence, a downward spiral, Sonet, Josef Fares, Johan Falk, Göta Kanal, Sällskapsresan, Joker, Strul, En runda till, Solsidan, Ove, Felix Herngren, Lasse Åberg, Feed, Bränn alla mina brev, Scandinavian Content Group, Swedish genre film, Avgrunden, disaster films, Vinterberg, Ruben Östlund, Anna Croneman, direction, initiative and courage, Top Gun 2, genre storytelling, Let the Right One In, Efti, Carl Molinder, Gräns, art house, Nordic noir, FLX, digging where you stand, keeping the international market in the back of your mind, Dalarna, Norberg, grandma, Harold and Maude, The Graduate, Cinemateket, If You Want to Sing Out, lowering the price of membership cards, Mission: Impossible, classics and special screenings, Aftersun, Kärlek och Avund, War Pony, Disco Boy, Bruce Willis, art house, Snabba Cash, streaming revenues,
leadership, confidence, trust, security, sustainability, This Is England, Mark Herbert, Shane Meadows, Warp X, Inside Pictures, David Cronenberg, Crimes of the Future, Naked Lunch, The Name of the Game, Julia Short, UIP, PolyGram, 12 apornas armé, mentors, having your own cinema, the Swedish Film Institute, Anna Serner, Anette Novak, a deafening silence, a force for continuity and reactivity, SVT Drama, Mikael Marcimain, silence from cultural politics, Werner Herzog, Egghead Republic, crazy in the best way, Per Faxneld, occultism, public intellectual, It Follows, Den där Mary
In today's episode, we have a special guest joining us: Dharmendra Mohan, the founder of Sonet.io. Dharmendra shares with us his insights and experiences in the world of cybersecurity and remote work solutions. He highlights the challenges faced by both large enterprises and end users when it comes to network security. As remote work and distributed applications become the norm, Dharmendra explains the need for a new approach to architecture and problem-solving. He dives deep into Sonet.io's agentless architecture, which allows for the seamless onboarding of users and access to servers through browsers. Dharmendra also delves into the evolving landscape of sales strategies, discussing the concept of "nearbound" and the effectiveness of content-based engagement. Join us as we explore Dharmendra's journey in building Sonet.io, the importance of customer feedback, and the company's mission to solve remote work challenges. This episode is packed with valuable insights for CISOs, CIOs, and IT leaders looking to create a secure and hassle-free environment for their remote workforce. Let's dive in! Sonet.io Dharmendra Mohan on LinkedIn Hey, it's Andrew here… before we get going: if you are a sales leader, you are probably under pressure right now to use your headcount on quota-carrying positions, BUT you intuitively want the capabilities of a world-class enablement team without having to use precious headcount, AND with a pricing model which makes sense for startups. If this is intriguing, get in touch with me at andrew@unstoppable.do… now let's get going with the episode. Support the show. Follow me on LinkedIn for regular posts about growing your cybersecurity startup. Want to grow your revenue faster? Check out my consulting and training. Need ideas about how to grow your pipeline? Sign up for my newsletter.
Viktor Fischl was born on 30 June 1912: a Czech, Jewish and Israeli poet, prose writer, translator and journalist. The poems are from the collection Anglické sonety (English Sonnets), published by Melantrich in 1946. Listen to the podcast "Báseň na každý den" (A Poem for Every Day) on Spotify, Apple, Google, YouRadio, České Podcasty or Audiolibrix. The podcast's home page is at https://www.poetickyklub.cz. Subscribe to the Poetický klub newsletter by e-mail; you can sign up here.
ADAM BAPTISTE: is a singer, songwriter, producer, musician, writer, entrepreneur and natural healer whose father is from Trinidad and whose mother is from Sweden, and he was just the right age to ride the first waves of hip hop that flooded the streets of Sweden in the 80s. Adam Baptiste has a diverse educational background, having studied business and entrepreneurship at Stockholm International Trade School, personal training at SAFE Sweden, and life coaching in Dubai. Adam is also a certified Co-Active Coach and is currently studying web development. In addition to his musical pursuits, Adam has leveraged his skills and knowledge to build an NFT smart contract platform on the Ripple ledger, showcasing his innovative spirit and entrepreneurial drive. As the Swedish rapper ADL, aka Absent Minded, Adam Baptiste's solo hip-hop career took off when he was paired with producer Vladi C. Both the Absent Minded project and ADL are considered pioneers of hip-hop and urban music in Sweden, and the 1996 Absent Minded album Extreme Paranoia in Stocktown is considered a classic of the era. Prior to Absent Minded, ADL had already had success as frontman for the band Stonefunkers, one of the earliest urban-style Swedish groups. The band toured Europe while signed to Warner Music Group. ADL left the band in 1996 to become a solo rapper. As Absent Minded, with producer Vladi C, the group released three singles and one full-length album, Extreme Paranoia in Stocktown, through the record labels Polydor, Breakin' Bread, and Sonet. The single Alright gained international attention when used in a Hugo Boss ad campaign, and Absent Minded toured Europe with Fugees in 1996. In 2012 he announced that he was going to quit the music industry, partly due to his religion but also because he felt disconnected from it as it had become more and more fake and less and less authentic. However, he has continued to make music now and then, on his own terms. He has also been writing lyrics for others. Songs written by Adam Baptiste include I Like How It Feels, Walking on Air, Crash & Burn, Ready For The Good Life and Boys Will Be Boys. "I Like How It Feels" is a song by Spanish recording artist Enrique Iglesias, and "Walking on Air" is a song recorded by American singer Katy Perry for her fourth studio album, Prism (2013), where it is included as the fourth track. His genres: Hip-Hop/Rap, Electronic, R&B Soul, Dance, Pop. His music featured in the important sustainability video "We Don't Have Time" with Greta Thunberg, as A SONG THAT GOES OUT TO PLANET EARTH, which made a major impact in its day. Follow him: @adam_baptiste_ and https://www.linkedin.com/in/adam-baptiste-82341a69/ Episode sponsored by: The Emancipation Support Committee @tt_esc Art on flyer by: @voodofe Art, clothing, footwear +++ @thespotforart Music by: JLC Media @jacylamarcampbell --- Support this podcast: https://podcasters.spotify.com/pod/show/ozzie-stewart/support
The Shred is a weekly roundup of who's raised funds, who's been acquired and who's on the move in the world of recruitment. The Shred is brought to you today by Jobcase.
How can it happen that a man's life is destroyed by a wrongful conviction? Listen to the episode in the NRK Radio app.
Sara-Jayne King speaks to Daily Maverick motoring journalist Melinda Ferguson every Saturday morning to review some of the latest cars on the road and keep us up to date with motor industry news. This week's car is the Kia Sonet Turbo. See omnystudio.com/listener for privacy information.
In GROOTpoot we talk to Dr Yolandi Rautenbach, senior lecturer in pathology at the Onderstepoort Faculty of Veterinary Science and head of the Onderstepoort Blood Bank. Listen here to find out more about the different blood groups in animals, and how your pet can qualify to become a donor.
Phillip van Heerden, animal behaviourist at 'Wagging Success', talks about a few common misconceptions about behavioural problems in dogs.
Adam Mickiewicz's lyrical diary from his journey through Crimea will be heard this Sunday (29 May) on Polish Radio's Programme 2 as a radio drama. "The journey will be exotic and oriental; it will certainly be a poetic feast," promised director Dariusz Błaszczyk during the programme "O wszystkim z kulturą".
In this episode we are going to look at Wide Area Network (WAN) Operations. We will be discussing WAN Standards, WANs in the OSI Model, Common WAN Terminology, WAN Devices, Serial Communication, Circuit-Switched Communication, Packet-Switched Communications, and finally SDH, SONET, and DWDM. Thank you so much for listening to this episode of my series on Enterprise Networking, Security, and Automation for the Cisco Certified Network Associate (CCNA). Once again, I'm Kevin and this is KevTechify. Let's get this adventure started. All my details and contact information can be found on my website, https://KevTechify.com ------------------------------------------------------- Cisco Certified Network Associate (CCNA) Enterprise Networking, Security, and Automation v3 Episode 7 - WAN Concepts Part B - WAN Operations Podcast Number: 36 ------------------------------------------------------- Equipment I like. Home Lab ►► https://kit.co/KevTechify/home-lab Networking Tools ►► https://kit.co/KevTechify/networking-tools Studio Equipment ►► https://kit.co/KevTechify/studio-equipment
Sonet Stofberg talks about the Net creation 'Tendensmens' on Mother Language Day.
Sonet Stofberg talks to Ronel Rademeyer of the Funanani Trust.
Sales of sport-utility vehicles or SUVs have risen worldwide in the last few years. In 2019, 47.4% of vehicles sold were SUVs while sedans stood at 22.1%. And between 2010 and 2019, China saw the share of SUVs in car sales jumping from 14% to 44%. Indians are also buying SUVs like never before, and this is not going to stop anytime soon. In 2015, the share of hatchbacks in India's total passenger vehicle sales was at 49%, compared to SUVs' 14%. Compare that to 2021: SUVs contributed around 38%, growing from 29% in 2020. This is almost equal to the hatchback segment, which now commands a 40% share in the country's passenger vehicle sales. Hyundai Motor India has maintained its pole position in the segment. The company has five models in its SUV portfolio and gets half its volumes from them. Its sister unit Kia, relying on the popularity of the Seltos and the Sonet, said it would exclusively focus on the SUV segment. With their focus on SUVs, the Korean carmakers have gained a foothold in the space. Led by strong SUV sales, Tata Motors overtook Hyundai Motor India last month to become the second-largest seller of passenger vehicles in the domestic market. Tata Motors has raised the bar in this space, with the mix of SUVs in its portfolio increasing to 52% in 2021, against 37% in 2020. One of the major reasons that explains the shift to SUVs is their commanding road presence as well as their elevated driving position, which gives better control to the drivers. The perception that SUVs are a status symbol is also drawing customers to this segment. Their high ground clearance lets the driver negotiate Indian roads better, and the smaller SUVs that are seeing robust demand make parking less of an inconvenience. And with enough options emerging in the compact SUV segment, with prices starting as low as Rs 5.5 lakh, customers looking to buy a premium hatchback or an entry-level sedan see better value for money in a sub-4-metre SUV. This could be a reason why compact and full-size SUVs are now seeing roughly equal sales, when a decade ago all of the SUV sales in India were in the full-size variety. (chart above, data outdated but pertinent) This segment has seen more than 50 launches in the last three years, more than sedans and hatchbacks combined. India's largest carmaker Maruti Suzuki has been too slow to take advantage of this opportunity. It only has the Vitara Brezza in the sub-4-metre segment and the S-Cross, which is perceived as a crossover by most buyers. As a result of successful launches by its competitors riding the SUV trend, Maruti's overall PV market share has fallen from about 50% to 40% in a year. Carmakers have already lined up new launches for the year. And with Maruti confirming that SUVs would be the company's focus segment in 2022, a battle of epic proportions is on the cards in India's fastest-growing passenger vehicle segment.
Christmas-song making is underway, and there's an update on how it's coming along. During the recording, the boys are surprised with a Christmas present from Acast, which among other things contains aquavit, which they choose to drink during the recording. Morten has to eat humble pie over something he said in the previous podcast, which this past week led to Safari eating a very bad Norwegian dish. In Emil's "truth or nonsense" he has to answer whether he has served time because he refused to snitch on a friend. Hosted on Acast. See acast.com/privacy for more information.
This episode is also available as a blog post: https://uandiautomobiles.com/kia-sonet-vs-hyundai-venue-which-is-better/
After an early career in media consulting and digital advertising, Benjamin Sonet wanted to return to Bordeaux, his home town, and put his digital marketing skills at the service of wine estates. In 2015 he observed that the wine world was almost absent from the digital sphere and social networks, so in 2016, with his partner Bernard Camus, he created My Balthazar, a platform that collects data and measures the digital activity of châteaux on social networks. In this 20th episode of
Podcasts of Radio Wnet / Warsaw 87.8 FM | Kraków 95.2 FM | Wrocław 96.8 FM / Białystok 103.9 FM
Robert Czyżewski talks about Adam Mickiewicz's Crimean Sonnets translated into Ukrainian. The edition contains a considerable number of photographs, and copies will be supplied to libraries. The main intended readers of the Sonnets are the Crimean Tatars, for whom the Polish poet's work has great significance. --- Send in a voice message: https://anchor.fm/radiownet/message
They cut their teeth at the legendary Club 7, where they hung out with Terje Rypdal and Fleetwood Mac. They got a record deal from Arne Bendiksen and released Norway's first double album, a record that in 2007 was voted the best Norwegian rock album of all time. In 2015 Junipher Greene celebrated their 50th anniversary, and Jernverket had them visit the studio. Junipher Greene talk, among other things, about drum kits on sledges and hard-of-hearing old folks, a performance at Norway's first open-air concert at St. Hanshaugen, the transition from 4/4 to 7/8, the alleged theft of guitars and late-night studio recordings, sulking members of Deep Purple, flowers thrown on stage at a concert in Poland, touring and television appearances in Ivory Coast, a drunk Christine Perfect at Rondo, and 130 measured decibels at Studentersamfundet in Trondheim. In November 2021 it is 50 years since "Friendship" was released. An exclusive box set is in the works for the occasion, but the band depends on enough orders being placed for it to be worthwhile. Read more in Junipher Greene's Facebook group, or email Sten Olav Helgeland with questions. Hear Junipher Greene's favourite songs by following Jernverket on Spotify. Playlist: Jethro Tull - A Song for Jeffrey John Mayall & the Bluesbreakers w/Eric Clapton - Have You Heard Frank Zappa - Peaches en regalia Junipher Greene - Take the Road Across the Bridge The Paul Butterfield Blues Band - Born in Chicago The Animals - We've Gotta Get Out of This Place Leif Ove Andsnes - V. Kjempeviseslåtten Op. 22 (Harald Sæverud) Jimi Hendrix - I Don't Live Today Traffic - Paper Sun Them - Gloria Fleetwood Mac - Oh Well Support Jernverket financially via Patreon or Vipps number 567438. It would also be nice if you left a review wherever you listen.
The bros discuss how owning a car has suddenly become a necessity during the pandemic. We discuss renting vs owning a car, and finally we shift our focus to the new Kia Sonet, which comes loaded with some exciting new features inside the cabin and under the hood as well.