CN: accounts of experiences with racism

"If things keep going like this, I'm leaving Germany!" That's what Deniz says. The reason: the treatment of people with a migration background, and racism in Germany. How do you deal with feeling unsafe and afraid in your own country?

You'll hear:
Guest: Deniz, 22, a loan officer from Bavaria who is seriously considering leaving Germany
Guest: Amanda Nentwig-Utzig, psychological psychotherapist
Guest: Dr. Elias Steinhilper, political sociologist and research fellow at DeZIM (German Center for Integration and Migration Research)
Author and host: Shalin Rogall
Editors: Sarah Brendel, Anne Bohlmann, Clara Neubert, Friederike Seeger
Production: Jan Morgenstern

Sources: Zajak, Sabrina; Best, Fabio; Pickel, Gert; Quent, Matthias; Römer, Friederike; Steinhilper, Elias; Zick, Andreas (2024): Ablehnung, Angst und Abwanderungspläne: Die gesellschaftlichen Folgen des Aufstiegs der AfD. Berlin: Deutsches Zentrum für Integrations- und Migrationsforschung (DeZIM)

Recommendations from this episode: "exit RACISM: rassismuskritisch denken lernen" by Tupoka Ogette, Unrast Verlag

You can find the article accompanying this piece here.

You can also follow us on these channels: TikTok auf&ab, TikTok wie_geht, and Instagram.

Get in touch! You can reach the Facts & Feelings team via WhatsApp. We want to know: What's on your mind? Do you have a topic we absolutely should cover on the show and in the podcast? Send us a voice message or write to us at 0160-91360852 or factsundfeelings@deutschlandradio.de. Note: If you save this number and send us a message, you accept our data-protection rules and, on WhatsApp, WhatsApp's privacy policy.
Duration: 00:54:35 - Questions politiques - by Carine BECARD, Fabienne Le Moal - Mayotte, pension reform, the budget... Yaël Braun-Pivet, President of the Assemblée nationale, is the guest on Questions politiques
Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June. Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100!

Full YouTube Episode with Slides/Charts. Like and subscribe and hit that bell to get notifs!

Timestamps:
* 00:00 Welcome to the 100th Episode!
* 00:19 Reflecting on the Journey
* 00:47 AI Engineering: The Rise and Impact
* 03:15 Latent Space Live and AI Conferences
* 09:44 The Competitive AI Landscape
* 21:45 Synthetic Data and Future Trends
* 35:53 Creative Writing with AI
* 36:12 Legal and Ethical Issues in AI
* 38:18 The Data War: GPU Poor vs. GPU Rich
* 39:12 The Rise of GPU Ultra Rich
* 40:47 Emerging Trends in AI Models
* 45:31 The Multi-Modality War
* 01:05:31 The Future of AI Benchmarks
* 01:13:17 Pionote and Frontier Models
* 01:13:47 Niche Models and Base Models
* 01:14:30 State Space Models and RWKV
* 01:15:48 Inference Race and Price Wars
* 01:22:16 Major AI Themes of the Year
* 01:22:48 AI Rewind: January to March
* 01:26:42 AI Rewind: April to June
* 01:33:12 AI Rewind: July to September
* 01:34:59 AI Rewind: October to December
* 01:39:53 Year-End Reflections and Predictions

Transcript

[00:00:00] Welcome to the 100th Episode![00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx for the 100th time today.[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.[00:00:19] Alessio: Yeah, I know.[00:00:19] Reflecting on the Journey[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round when we first started that we didn't like, and we tried to change the question. 
The answer[00:00:32] swyx: was Cursor and Perplexity.[00:00:34] Alessio: Yeah, I love Midjourney. It's like, do you really not like anything else?[00:00:38] Alessio: Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research driven content. You know, we had like Tri Dao, we had, you know, Jeremy Howard, we had more folks like that.[00:00:47] AI Engineering: The Rise and Impact[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side.[00:00:54] Alessio: Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, Oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your Rise of the AI Engineer post just kind of gave people somewhere to congregate, and then the AI Engineer Summit.[00:01:11] Alessio: And that's why when I look at our growth chart, it's kind of like a proxy for like the AI engineering industry as a whole, which is almost like, like, even if we don't do that much, we keep growing just because there's so many more AI engineers. So did you expect that growth or did you expect that would take longer for like the AI engineer thing to kind of like become, you know, everybody talks about it today.[00:01:32] swyx: So, the sign of that, that we have won is that Gartner puts it at the top of the hype curve right now. So Gartner has called the peak in AI engineering. I did not expect, um, to what level. I knew that I was correct when I called it because I did like two months of work going into that. But I didn't know, you know, how quickly it could happen, and obviously there's a chance that I could be wrong.[00:01:52] swyx: But I think, like, most people have come around to that concept. Hacker News hates it, which is a good sign. 
But there's enough people that have defined it, you know, GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big letters. So I think it's like kind of arrived as a meaningful and useful definition.[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote unquote drama that happens behind the scenes at the World's Fair in June. Because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not put a firm definition there because most of the successful definitions are necessarily underspecified and it's actually useful to have different perspectives and you don't have to specify everything from the outset.[00:02:45] Alessio: Yeah, I was at, um, AWS reInvent and the line to get into like the AI engineering talk, so to speak, which is, you know, applied AI and whatnot, was like, there are like hundreds of people just in line to go in.[00:02:56] Alessio: I think that's kind of what you enabled, right? Which is what [00:03:00] you kind of talked about. It's like, Hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on, on the Substack.[00:03:11] Alessio: But yeah, it's been a heck of a heck of a two years.[00:03:14] swyx: Yeah.[00:03:15] Latent Space Live and AI Conferences[00:03:15] swyx: You know, I was, I was trying to view the conference as like, so NeurIPS is I think like 16, 17,000 people. And the Latent Space Live event that we held there was 950 signups, I think. The AI world, the ML world is still very much research heavy. 
And that's as it should be because ML is very much in a research phase.[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious, like it'll always be low status because at the end of the day, you're manipulating APIs or whatever.[00:03:51] swyx: But yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these, these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference and basically everything I do seems to make sense. And I think we'll, we'll talk about the trends here that apply.[00:04:09] swyx: It's, it's just very strange. So, like, there's a mix of, like, keeping on top of research while not being a researcher and then putting that research into production. So, like, people always ask me, like, why are you covering NeurIPS? Like, this is a ML research conference and I'm like, well, yeah, I mean, we're not going to, to like, understand everything or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.[00:04:32] swyx: And then actually like when I talk to the researchers, they actually get very excited because they're like, oh, you guys are actually caring about how this goes into production and that's what they really really want. The measure of success is previously just peer review, right? Getting 7s and 8s on their, um, academic review conferences and stuff. Like citations is one metric, but money is a better metric.[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2,200 people on the live stream or something like that. Yeah, yeah, twenty-two hundred on the live stream. So [00:05:00] I try my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan. 
Yeah, that it was in the chat on YouTube.[00:05:06] swyx: I would say that I actually also created[00:05:09] swyx: Latent Space Live in order to address flaws that are perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very sort of oriented towards the PhD student, uh, market, job market, right? Like literally all, basically everyone's there to advertise their research and skills and get jobs.[00:05:28] swyx: And then obviously all the, the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info is not great because you have to read between the lines, bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is domain by domain, go through and recap the best of the year.[00:05:48] swyx: Survey the field. And there are, like NeurIPS had a, uh, I think ICML had like a position paper track, NeurIPS added a benchmarks, uh, datasets track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every, every conference has, you know, a last day of workshops and stuff that provide more of an overview.[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, uh, organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models, post-transformers, synthetic data, small models, and agents. And then the last one was the, uh, and then we also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.[00:06:39] swyx: It was very awkward. 
And I'm really, really thankful for Jonathan Frankle, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it. But he was pro scaling. And I think everyone who is like in AI is pro scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.[00:06:57] swyx: So that means you're saying Sam Altman's wrong. [00:07:00] You're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went on, went up on stage and then said pre training has hit a wall. And data has hit a wall. So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.[00:07:17] swyx: So it's kind of interesting that I think the consensus kind of going in was that we're not done scaling, like you should believe in the Bitter Lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM, along with everyone's favorite OG in AI, which is Juergen Schmidhuber.[00:07:34] swyx: He said that, um, we've hit a pre-training wall, or like, we've run into a different kind of wall. And then we have, you know, Jonathan Frankle, Ilya, and then Noam Brown are all saying variations of the same thing, that we have hit some kind of wall in the status quo of what pre trained, scaling large pre trained models has looked like, and we need a new thing.[00:07:54] swyx: And obviously the new thing for people is, some people are calling it inference time compute or test time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense because calling it test time has a very pre-training bias, meaning that the only reason for running inference at all is to test your model.[00:08:11] swyx: That is not true. Right. Yeah. So, so, I quite agree that. 
OpenAI seems to have adopted, or the community seems to have adopted this terminology of ITC instead of TTC. And that, that makes a lot of sense because like now we care about inference, even right down to compute optimality. Like I actually interviewed this author who covered or reviewed the Chinchilla paper.[00:08:31] swyx: Chinchilla paper is compute optimal training, but what is not stated in there is it's pre trained compute optimal training. And once you start caring about inference, compute optimal training, you have a different scaling law. And in a way that we did not know last year.[00:08:45] Alessio: I wonder, because Jonathan is, he's also on the side of attention is all you need.[00:08:49] Alessio: Like he had the bet with Sasha. So I'm curious, like he doesn't believe in scaling, but he thinks the transformer, I wonder if he's still. So, so,[00:08:56] swyx: so he, obviously everything is nuanced and you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah. He still, he still believes that we can scale more.[00:09:04] swyx: Uh, he just assumed the character to be very game for, for playing this debate. So even more kudos to him that he assumed a position that he didn't believe in and still won the debate.[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as a factor of, like, stuff that we were going to talk about. And we'll[00:09:30] Alessio: publish[00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains. And hopefully people can benefit from the work that our speakers have done.[00:09:39] swyx: But I think it's, uh, these are just good slides. 
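The Chinchilla distinction swyx draws above can be made concrete with the standard FLOP rules of thumb (roughly 6ND FLOPs for training, roughly 2N per token for inference). A minimal sketch, with the lifetime-serving number being a made-up illustrative figure, not a measured one:

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Rule of thumb: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

def inference_flops(n_params: float, n_tokens_served: float) -> float:
    """Rule of thumb: ~2 FLOPs per parameter per generated token."""
    return 2 * n_params * n_tokens_served

# Chinchilla's pre-training-only heuristic: train on ~20 tokens per parameter.
n = 70e9       # a 70B-parameter model
d = 20 * n     # ~1.4T training tokens
serve = 1e13   # hypothetical: 10T tokens served over the model's lifetime

lifetime = train_flops(n, d) + inference_flops(n, serve)
# Here inference (~1.4e24 FLOPs) already exceeds training (~5.9e23 FLOPs),
# so the compute-optimal choice shifts toward smaller models trained on more
# tokens -- a different scaling law once inference is counted, which is the
# point being made about "pre-trained compute optimal" vs. inference-aware.
```

The crossover depends entirely on the assumed serving volume, which is exactly why the pre-training-only Chinchilla recipe stops being the right optimization target for heavily-served models.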
And I've been, I've been looking for a sort of end of year recaps from, from people.[00:09:44] The Competitive AI Landscape[00:09:44] swyx: The field has progressed a lot. You know, I think the max ELO in 2023 on LMSys used to be 1200 for LMSys ELOs. And now everyone is at least at, uh, 1275 in their ELOs, and this is across Gemini, ChatGPT, [00:10:00] Grok, 01.[00:10:01] swyx: ai, with their Yi-Large model, and Anthropic, of course. It's a very, very competitive race. There are multiple frontier labs all racing, but there is a clear tier zero frontier. And then there's like a tier one, which is everything else. Tier zero is extremely competitive. It's effectively now a three-horse race between Gemini, uh, Anthropic and OpenAI.[00:10:21] swyx: I would say that people are still holding out a candle for xAI. xAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics. So it's actually quite hard to put on there. As someone who also does charts, xAI is continually snubbed because they don't work well with the benchmarking people.[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why xAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily OpenAI. So we have some numbers and estimates. These are from Ramp. Estimates of OpenAI market share in [00:11:00] December 2023.[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked Harrison Chase on the LangChain episode, it was true. And then Claude 3 launched middle of this year. I think Claude 3 launched in March, Claude 3.5 Sonnet was in June-ish.[00:11:23] swyx: And you can start seeing the market share shift towards, uh, towards Anthropic, uh, very, very aggressively. The more recent one is Gemini. 
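As an aside, the LMSys Elo numbers quoted above translate into head-to-head win probabilities via the classic Elo formula. This is a rough sketch of the idea only; LMSys actually fits a closely related Bradley-Terry model to pairwise votes:

```python
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Expected probability that model A beats model B under the classic
    Elo model: a 400-point gap corresponds to 10:1 odds."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# The jump from the 2023 ceiling to today's (1275 vs. 1200) is a real but
# modest head-to-head edge, roughly 60/40:
print(round(elo_win_prob(1275, 1200), 3))  # ~0.606
```

So a 75-point Elo gain means winning about six of ten head-to-head matchups, which is why the frontier race reads as tight even as every lab's rating climbs.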
So if I scroll down a little bit, this is an even more recent dataset. So Ramp's dataset ends in September 2024. Gemini has basically launched a price war at the low end, uh, with Gemini Flash, uh, being basically free for personal use.[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in like third place, um, fourth place, depending how you, how you count.[00:11:58] swyx: And so they're going after [00:12:00] the lower tier first, and then, you know, maybe the upper tier later, but yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests. These are small, cheap requests that are mathematically going to be more.[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like basically 2023, 2022, going into 2024, OpenAI has gone from 95 market share to, yeah, reasonably somewhere between 50 to 75 market share.[00:12:29] Alessio: Yeah. I'm really curious how Ramp does the attribution to the model?[00:12:32] Alessio: If it's API, because I think it's all credit card spend. Well, but it's all, the credit card doesn't say. Maybe the, maybe when they do expenses, they upload the PDF, but yeah, the, the Gemini thing I think makes sense. I think that was one of my main 2024 takeaways, that like, the best small model companies are the large labs, which is not something I would have thought, that the open source kind of like long tail would be like the small model.[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? Like so small model here for Gemini is 8B, [00:13:00] right? Uh, mini. 
We don't know what the small model size is, but yeah, it's probably in the double digits or maybe single digits, but probably double digits. The open source community has kind of focused on the one to three B size.[00:13:11] swyx: Mm-hmm. Yeah. Maybe[00:13:12] swyx: zero, maybe 0.5B, uh, that's Moondream, and that is small for you, then, then that's great. It makes sense that we, we have a range for small now, which is like, may, maybe one to five B. Yeah. I'll even put that at, at, at the high end. And so this includes Gemma from Gemini as well. But also includes the Apple Foundation models, which I think Apple Foundation is 3B.[00:13:32] Alessio: Yeah. No, that's great. I mean, I think in the start small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.[00:13:43] swyx: Yeah, we can keep going. This is a slide where I slightly disagree with Sarah. She's pointing to the Scale SEAL leaderboard. I think the researchers that I talked with at NeurIPS were kind of positive on this because basically you need private test [00:14:00] sets to prevent contamination.[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private test set leaderboard. Llama 405B does well compared to Gemini and GPT-4o. And I think that's good. I would say that, you know, it's good to have an open model that is that big, that does well on those metrics.[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the Artificial Analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on like one node of, uh, of H100s. Cerebras will be happy to tell you they can serve 405B on their super large chips.[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So, is 405B really that relevant? 
Like, I think most people are basically saying that they only use 405B as a teacher model to distill down to something. Even Meta is doing it. So with Llama 3.3 [00:15:00] launched, they only launched the 70B because they use 405B to distill the 70B.[00:15:03] swyx: So I don't know if like open source is keeping up. I think they're the, the open source industrial complex is very invested in telling you that the gap is narrowing. I kind of disagree. I think that the gap is widening with O1. I think there are very, very smart people trying to narrow that gap and they should.[00:15:22] swyx: I really wish them success, but you cannot use a chart that is nearing 100 in your saturation chart. And look, the distance between open source and closed source is narrowing. Of course it's going to narrow because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?[00:15:38] swyx: Probably not for O1 for a while. And it's really up to the open source guys to figure out if they can match O1 or not.[00:15:46] Alessio: I think inference time compute is bad for open source just because, you know, Zuck can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to like actually keep up on that axis.[00:15:59] Alessio: Big, big business [00:16:00] model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers, but obviously the big labs have a lot of advantage. Because, like, it's not a static artifact that you're putting the compute in. You're kind of doing that still, but then you're putting a lot of compute at inference too.[00:16:17] swyx: Yeah, yeah, yeah. Um, I mean, Llama4 will be reasoning oriented. We talked with Thomas Scialom. Um, kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the Meta AI guy, uh, at NeurIPS, and, um, yeah, we're going to coordinate something for Llama4. 
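The teacher-student distillation described above (405B logits supervising a smaller model) reduces to a soft-label objective. A toy stdlib-only sketch of that objective; real pipelines operate on full-vocabulary logits per token and typically mix this with a normal language-modeling loss:

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions:
    the student is pushed to match the teacher's whole output distribution,
    not just its top-1 label."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Matching the teacher exactly gives zero loss; diverging raises it.
assert distill_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]) < 1e-12
assert distill_loss([0.0, 2.0, 0.0], [2.0, 0.5, -1.0]) > 0.1
```

The temperature softens both distributions so the student also learns the teacher's relative preferences among wrong answers, which is most of what makes a big teacher worth having.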
Yeah, yeah,[00:16:32] Alessio: and our friend, yeah.[00:16:33] Alessio: Clara Shih just joined to lead the business agent side. So I'm sure we'll have her on in the new year.[00:16:39] swyx: Yeah. So, um, my comment on, on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than 6.6 billion dollars for their fundraise. They wanted to raise, you know, higher, and they did not.[00:16:51] swyx: And what that means is basically like, it's very convenient that we're not getting GPT 5, which would have been a larger pre-train that would have needed a lot of upfront money. And [00:17:00] instead we're, we're converting fixed costs into variable costs, right. And passing it on effectively to the customer. And it's so much easier to take margin there because you can directly attribute it to like, Oh, you're using this more.[00:17:12] swyx: Therefore you, you pay more of the cost and I'll just slap a margin in there. So like that lets you control your gross margin and like tie your, your spend, or your sort of inference spend, accordingly. And it's just really interesting to, that this change in the sort of inference paradigm has arrived exactly at the same time that the funding environment for pre training is effectively drying up, kind of.[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so like, they would have noticed this, but, um, it's just interesting.[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year. Yeah. And the big thing was like the Mixtral price fights, you know, and I think now it's almost like there's nowhere to go, like, you know, Gemini Flash is like basically giving it away for free.[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down [00:18:00] some of the compute to the customer. I think they're going to[00:18:02] swyx: keep going. I think that $2,000 tier will come.[00:18:05] Alessio: Yeah, I know. 
Totally. I mean, next year, the first thing I'm doing is signing up for Devin. Signing up for the Pro ChatGPT.[00:18:12] Alessio: Just to try. I just want to see what does it look like to spend a thousand dollars a month on AI?[00:18:17] swyx: Yes. Yes. I think if your, if your, your job is, at least, AI content creator or VC or, you know, someone who, whose job it is to stay on, stay on top of things, you should already be spending like a thousand dollars a month on, on stuff.[00:18:28] swyx: And then obviously easy to spend, hard to use. You have to actually use. The good thing is that actually Google lets you do a lot of stuff for free now. So like Deep Research, that they just launched, uses a ton of inference and it's, it's free while it's in preview.[00:18:45] Alessio: Yeah. They need to put that in Lindy.[00:18:47] Alessio: I've been using Lindy lately. I've built a bunch of things since they launched flows, because I like the new thing. It's pretty good. I even did a phone call assistant. Um, yeah, they just launched Lindy voice. Yeah, I think once [00:19:00] they get advanced voice mode like capability, today it's still like speech to text, you can kind of tell.[00:19:06] Alessio: Um, but it's good for like reservations and things like that. So I have a meeting prepper thing. And so[00:19:13] swyx: it's good. Okay. I feel like we've, we've covered a lot of stuff. Uh, I, yeah, I, you know, I think we will go over the individual, uh, talks in a separate episode. Uh, I don't want to take too much time with, uh, this stuff, but that suffice to say that there is a lot of progress in each field.[00:19:28] swyx: Uh, we covered vision. Basically this is all like the audience voting for what they wanted. And then I just invited the best people I could find in each audience, especially agents. Um, Graham, who I talked to at ICML in Vienna, he is currently still number one. 
It's very hard to stay on top of SWE-Bench.[00:19:45] swyx: OpenHands is currently still number one on SWE-Bench Full, which is the hardest one. He had very good thoughts on agents, which I, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. And, uh, but he had [00:20:00] thoughts on like eight parts of what are the frontier problems to solve in agents.[00:20:03] swyx: And so I'll highlight that talk as well.[00:20:05] Alessio: Yeah. The number six, which is the having agents learn more about the environment, has been super interesting to us as well, just to think through, because, yeah, how do you put an agent in an enterprise where most things in an enterprise have never been public, you know, a lot of the tooling, like the code bases and things like that.[00:20:23] Alessio: So, yeah, there's not indexing and RAG. Well, yeah, but it's more like, you can't really RAG things that are not documented. But people know them based on how they've been doing it. You know, so I think there's almost this like, you know, oh, institutional knowledge. Yeah, the boring word for it is kind of like business process extraction.[00:20:38] Alessio: Yeah yeah, I see. It's like, how do you actually understand how these things are done? I see. Um, and I think today the, the problem is that, yeah, the agents that most people are building are good at following instructions, but are not as good at like extracting them from you. Um, so I think that will be a big unlock. Just to touch quickly on the Jeff Dean thing.[00:20:55] Alessio: I thought it was pretty, I mean, we'll link it in the, in the things, but I think the main [00:21:00] focus was like, how do you use ML to optimize the systems instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKV on the podcast before, like he's doing a lot of that with Featherless AI. 
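The speculative decoding mentioned just above can be sketched as a draft-then-verify loop. This toy version is deterministic and greedy for clarity; production systems verify all drafted tokens in one batched target-model pass and accept/reject stochastically so the output distribution exactly matches the target model's:

```python
def speculative_decode(target_next, draft_next, prompt, n_draft=4, n_tokens=8):
    """Greedy sketch: a cheap draft model proposes n_draft tokens ahead,
    and the expensive target model keeps the longest agreeing prefix, so
    several tokens can land per round of target-model checks."""
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        draft = []
        for _ in range(n_draft):
            draft.append(draft_next(out + draft))  # cheap model runs ahead
        accepted = 0
        for tok in draft:
            if target_next(out) == tok:            # expensive model verifies
                out.append(tok)
                accepted += 1
            else:
                break
        if accepted < len(draft):
            out.append(target_next(out))           # target supplies the fix
    return out[len(prompt):]

# A trivial stand-in "model": next token depends only on sequence length.
target = lambda seq: len(seq) % 3
# Perfect draft: whole blocks are accepted per round.
assert speculative_decode(target, target, [0], n_tokens=5)[:5] == [1, 2, 0, 1, 2]
# Useless draft: degrades gracefully to one target token per round,
# with identical output.
assert speculative_decode(target, lambda seq: 9, [0], n_tokens=5) == [1, 2, 0, 1, 2]
```

The output never changes, only the number of expensive-model rounds, which is also why it trades extra GPU work per call for latency, as noted in the conversation.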
I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, makes sense.[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.[00:21:30] swyx: Jeff is, Jeff's talk was more, it wasn't focused on Gemini.[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems and then systems feed back into ML. And I think this ties in with Loubna's talk.[00:21:45] Synthetic Data and Future Trends[00:21:45] swyx: on synthetic data where it's basically the story of bootstrapping of humans and AI in AI research or AI in production.[00:21:53] swyx: So her talk was on synthetic data, where like how much synthetic data has grown in 2024 in the pre training side, the post training side, [00:22:00] and the eval side. And I think Jeff then also extended it basically to chips, uh, to chip design. So he'd spend a lot of time talking about AlphaChip. And most of us in the audience are like, we're not working on hardware, man.[00:22:11] swyx: Like you guys are great. TPU is great. Okay. We'll buy TPUs.[00:22:14] Alessio: And then there was the earlier talk. Yeah. But, and then we have, uh, I don't know if we're calling them essays. What are we calling these? But[00:22:23] swyx: for me, it's just like bonus for Latent Space supporters, because I feel like they haven't been getting anything.[00:22:29] swyx: And then I wanted a more high frequency way to write stuff. Like that one I wrote in an afternoon. I think basically we now have an answer to what Ilya saw. It's one year since the blip. And we know what he saw in 2014. We know what he saw in 2024. We think we know what he sees in 2024. 
He gave some hints and then we have vague indications of what he saw in 2023.[00:22:54] swyx: So that was the, oh, and then 2016 as well, because of this lawsuit with Elon, OpenAI [00:23:00] is publishing emails from Sam, like, his personal text messages to Shivon Zilis, or whatever. So, like, we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient in 2016 to write that.[00:23:16] swyx: And so, like, it is exactly, like, basically his insights. It's him and Greg, basically just kind of driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, like, we see the path here.[00:23:30] Alessio: Yeah, and it's funny, yeah, they even mention, you know, we can only train on 1v1 Dota. We need to train on 5v5, and that takes too many GPUs.[00:23:37] Alessio: Yeah,[00:23:37] swyx: and at least for me, I can speak for myself, like, I didn't see the path from Dota to where we are today. I think even, maybe if you ask them, like, they wouldn't necessarily draw a straight line. Yeah,[00:23:47] Alessio: no, definitely. But I think like that was like the whole idea of almost like the RL and we talked about this with Nathan on his podcast.[00:23:55] Alessio: It's like with RL, you can get very good at specific things, but then you can't really like generalize as much. And I [00:24:00] think the language models are like the opposite, which is like, you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.[00:24:08] Alessio: And we'll talk about the OpenAI reinforcement fine-tuning, um, announcement too, and all of that. But yeah, I think like scale is all you need. That's kind of what Ilya will be remembered for. And I think just maybe to clarify on like the pre training is over thing that people love to tweet. 
I think the point of the talk was like everybody, we're scaling these chips, we're scaling the compute, but like the second ingredient which is data is not scaling at the same rate.[00:24:35] Alessio: So it's not necessarily pre-training is over. It's kind of like what got us here won't get us there. In his email, he predicted like 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.[00:24:52] Alessio: Exactly. And Moore's law is like 2x. So it's like, you know, much faster than that. And yeah, I like the fossil fuel of AI [00:25:00] analogy. It's kind of like, you know, the little background tokens thing. So the OpenAI reinforcement fine-tuning is basically like, instead of fine-tuning on data, you fine-tune on a reward model.[00:25:09] Alessio: So it's basically like, instead of being data driven, it's like task driven. And I think people have tasks to do, they don't really have a lot of data. So I'm curious to see how that changes, how many people fine-tune, because I think this is what people run into. It's like, Oh, you can fine-tune Llama. And it's like, okay, where do I get the data?[00:25:27] Alessio: To fine-tune it on, you know, so it's great that we're moving the thing. And then I really like he had this chart where like, you know, the brain mass and the body mass thing is basically like mammals that scaled linearly by brain and body size, and then humans kind of like broke off the slope. So it's almost like maybe the mammal slope is like the pre-training slope.[00:25:46] Alessio: And then the post-training slope is like the, the human one.[00:25:49] swyx: Yeah. I wonder what the. I mean, we'll know in 10 years, but I wonder what the y axis is for, for Ilya's SSI. We'll try to get them on.[00:25:57] Alessio: Ilya, if you're listening, you're [00:26:00] welcome here.
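The growth rates being compared here (10x per year versus 10x every two years versus Moore's law at roughly 2x every two years) are easy to mix up in conversation; a quick back-of-envelope sketch of how they compound over the same window:

```python
# Back-of-envelope comparison of the growth rates discussed above.
# The hosts aren't sure whether the email predicted 10x per year or
# 10x per two years, so we compute both, against Moore's law (~2x/2yr).

def growth(factor_per_period: float, periods: float) -> float:
    """Total multiplier after `periods` periods of compounding growth."""
    return factor_per_period ** periods

years = 2
ten_x_per_year = growth(10, years)           # 10x/year over 2 years -> 100x
ten_x_per_two_years = growth(10, years / 2)  # 10x/2yr over 2 years -> 10x
moores_law = growth(2, years / 2)            # 2x/2yr over 2 years  -> 2x

print(ten_x_per_year, ten_x_per_two_years, moores_law)  # 100 10 2
```

Either reading of the prediction is far above Moore's law over the same two-year window, which is the point being made.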
Yeah, and then he had, you know, what comes next, like agents, synthetic data, inference compute, I thought all of that was like expected.[00:26:05] Alessio: I don't[00:26:05] swyx: think he was dropping any alpha there. Yeah, yeah, yeah.[00:26:07] Alessio: Yeah. Any other NeurIPS highlights?[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that, uh, my friend Yi made this, like, nice little paper list. Yeah, that was really[00:26:20] swyx: nice.[00:26:20] swyx: Uh, of, uh, of, like, all the, she called it must-read papers of 2024.[00:26:26] swyx: So I laid out some of these at NeurIPS, and it was just gone. Like, everyone just picked it up. Because people are dying for, like, little guidance and visualizations. And so, uh, I thought it was really super nice that we got there.[00:26:38] Alessio: Should we do a Latent Space book for each year? Uh, I thought about it. For each year we should.[00:26:42] Alessio: Coffee table book. Yeah. Yeah. Okay. Put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, generalist Jamie. You need to[00:26:52] swyx: pull up more things. One thing I saw that, uh, Okay, one fun one, and then one [00:27:00] more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.[00:27:06] swyx: This is secret collusion among AI agents, multi-agent deception via steganography. I tried to go to NeurIPS in order to find these kinds of papers because, the real reason is, NeurIPS this year has a lottery system. A lot of people actually even go and don't buy tickets because they just go and attend the side events.[00:27:22] swyx: And then also the people who go end up crowding around the most popular papers, which you already know and already read before you showed up to NeurIPS. So the only reason you go there is to talk to the paper authors, but there's like something like 10,000 others.
All these papers out there that, you know, are just people's work that they did over the year and they failed to get attention for one reason or another.[00:27:42] swyx: And this was one of them. Uh, it was like all the way at the back. And this is a DeepMind paper that actually focuses on collusion between AI agents, uh, by hiding messages in the text that they generate. Uh, so that's what steganography is. So a very simple example would be the first letter of every word.[00:27:57] swyx: If you pick that out, you know, it encodes a [00:28:00] different message than the surface text. But something I've always emphasized is, to LLMs, we read left to right. LLMs can read up, down, sideways, you know, in random character order. And it's the same to them as it is to us. So if we were ever to get, you know, self-motivated, unaligned LLMs that were trying to collaborate to take over the planet,[00:28:19] swyx: this would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for example, for cipher encoding, GPT-2, Llama 2, Mixtral, GPT-3.5 show zero capability, and suddenly GPT-4 can.[00:28:40] swyx: And this is the kind of Jason Wei type emergence properties that people kind of look for. I think what made this paper stand out as well, so he developed the benchmark for steganography collusion, and he also focused on Schelling point collusion, which is very low coordination. For agreeing on a decoding encoding format, you kind of need to have some [00:29:00] agreement on that.[00:29:00] swyx: But, but Schelling point means like very, very low or almost no coordination. So for example, if the only message I give you is meet me in New York and you're not aware of anything else, you would probably meet me at Grand Central Station.
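The "first letter of every word" scheme mentioned above can be sketched in a few lines. This is a toy acrostic for illustration only; the DeepMind paper studies far subtler and learned encodings, and the word bank here is a hypothetical stand-in for an LLM choosing natural-sounding words:

```python
# Toy acrostic steganography: the covert message rides on the initial
# characters of an otherwise innocuous cover text.

def encode(secret: str, word_bank: dict[str, list[str]]) -> str:
    """Build a cover text whose word initials spell out `secret`."""
    return " ".join(word_bank[ch][0] for ch in secret.lower())

def decode(cover: str) -> str:
    """Recover the hidden message from the first letter of each word."""
    return "".join(word[0] for word in cover.split())

# Hypothetical word bank; a colluding LLM would pick fluent words instead.
bank = {"h": ["hello"], "i": ["internally"]}
cover = encode("hi", bank)      # "hello internally"
assert decode(cover) == "hi"    # hidden message recovered
```

The point in the transcript is that to an LLM, reading off initials (or any other character-order traversal) is no harder than reading the surface text left to right.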
That is, Grand Central Station is a Schelling point.[00:29:16] swyx: And it's probably somewhere, somewhere during the day. The Schelling point of New York is Grand Central. To that extent, Schelling points for steganography are things like the, the, the common decoding methods that we talked about. It will be interesting at some point in the future when we are worried about alignment.[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.[00:29:36] Alessio: I think that's like one of the hardest things about NeurIPS. It's like the long tail. I[00:29:41] swyx: found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs. And the guy with the GPUs is[00:29:57] Alessio: talking[00:29:57] swyx: about economics instead. [00:30:00] That was the sort of fun one. So the focus I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just datasets.[00:30:12] swyx: This is all the grad students are working on. So like there was a datasets track and then I was looking around like, I was like, you don't need a datasets track because every paper is a datasets paper. And so datasets and benchmarks, they're kind of flip sides of the same thing. So yeah. Cool. Yeah, if you're a grad student, you're GPU poor, you kind of work on that.[00:30:30] swyx: And then the, the sort of big model people walk around and pick the ones that they like, and then they use them in their models. And that's, that's kind of how it develops. I, I feel like, um, like last year, you had people like Haotian who worked on LLaVA, which is take Llama and add vision.[00:30:47] swyx: And then obviously actually I hired him and he added vision to Grok. Now he's the vision Grok guy.
This year, I don't think there was any of those.[00:30:55] Alessio: What were the most popular, like, orals? Last year it was like the [00:31:00] Monarch Mixer, I think, was like the most attended. Yeah, uh, I need to look it up. Yeah, I mean, if nothing comes to mind, that's also kind of like an answer in a way.[00:31:10] Alessio: But I think last year there was a lot of interest in, like, furthering models and, like, different architectures and all of that.[00:31:16] swyx: I will say that I felt the orals, oral picks this year were not very good. Either that or maybe it's just a sign of how I have changed in terms of how I view papers.[00:31:29] swyx: So like, in my estimation, two of the best papers this year for datasets are DataComp and RefinedWeb or FineWeb. These are two actually industrially used papers, neither of which was highlighted. I think DCLM got the spotlight, FineWeb didn't even get the spotlight. So like, it's just that the picks were different.[00:31:48] swyx: But one thing that does get a lot of play that a lot of people are debating is the role of schedules. This is the Schedule-Free optimizer paper from Meta, from Aaron Defazio. And this [00:32:00] year in the ML community, there's been a lot of chat about Shampoo, SOAP, all the bathroom amenities for optimizing your learning rates.[00:32:08] swyx: And, uh, most people at the big labs who I asked about this, um, say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular. Four Wars[00:32:19] Alessio: of AI recap maybe, just quickly. Um, where do you want to start? Data?[00:32:26] swyx: So to remind people, this is the Four Wars piece that we did as one of our earlier recaps of this year.[00:32:31] swyx: And the belligerents are on the left, journalists, writers, artists, anyone who owns IP basically, New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George RR Martin.
Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing OpenAI, basically. I actually wanted to get a snapshot of all the lawsuits.[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Loubna's talk, you know, [00:33:00] really showing how much synthetic data has come along this year. I think there was a bit of a fight between Scale AI and the synthetic data community, because Scale[00:33:09] swyx: AI published a paper saying that synthetic data doesn't work. Surprise, surprise, Scale AI is the leading vendor of non-synthetic data. Only[00:33:17] Alessio: cage-free annotated data is useful.[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much debate anymore that at least synthetic data, for the reasons that are addressed in Loubna's talk, makes sense.[00:33:32] swyx: I don't know if you have any perspectives there.[00:33:34] Alessio: I think, again, going back to the reinforcement fine-tuning, I think that will change a little bit how people think about it. I think today people mostly use synthetic data, yeah, for distillation and kind of like fine-tuning a smaller model from like a larger model.[00:33:46] Alessio: I'm not super aware of how the frontier labs use it outside of like the rephrase the web thing that Apple also did. But yeah, I think it'll be useful.
I think like whether or not that gets us the big [00:34:00] next step, I think that's maybe like TBD, you know. I think people love talking about data because it's like a GPU poor thing, you know. I think, uh, synthetic data is like something that people can do, you know, so they feel more opinionated about it compared to, yeah, the optimizers stuff, which is like,[00:34:17] swyx: they don't[00:34:17] Alessio: really work[00:34:18] swyx: on.[00:34:18] swyx: I think that there is an angle to the reasoning synthetic data. So this year, we covered in the paper club the STaR series of papers. So that's STaR, Quiet-STaR, V-STaR. It basically helps you to synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT API that they released, or that they announced, basically they're asking you to submit graders, or they choose from a preset list of graders.[00:34:49] swyx: Basically it feels like a way to create valid synthetic data for them to fine-tune their reasoning paths on. Um, so I think that is another angle where it starts to make sense. And [00:35:00] so like, it's very funny that basically all the data quality wars between, let's say, the music industry or like the newspaper publishing industry or the textbooks industry and the big labs,[00:35:11] swyx: it's all of the pre-training era. And then like the new era, like the reasoning era, like nobody has any problem with all the reasoning, especially because it's all like sort of math and science oriented with, with very reasonable graders. I think the more interesting next step is how does it generalize beyond STEM?[00:35:27] swyx: We've been using O1 for, and I would say like for, summarization and creative writing and instruction following, I think it's underrated. I started using O1 in our intro songs before we killed the intro songs, but it's very good at writing lyrics.
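The grader idea described above can be sketched very simply: instead of supplying a labeled dataset, you supply a scoring function, and only outputs the grader approves of are kept as training data. This is a minimal illustration of the concept, not OpenAI's actual RFT API; all names here are hypothetical:

```python
# Sketch of task-driven data selection with a grader: rather than a dataset
# of (input, answer) pairs, you provide a function that scores model outputs,
# and high-scoring samples become the fine-tuning set. Illustrative only.

def exact_match_grader(output: str, reference: str) -> float:
    """Score 1.0 if the model's final answer matches the reference."""
    return 1.0 if output.strip() == reference.strip() else 0.0

def build_finetune_set(samples, grader, threshold=1.0):
    """Keep only (prompt, output) pairs the grader scores at/above threshold."""
    return [
        (prompt, output)
        for prompt, output, reference in samples
        if grader(output, reference) >= threshold
    ]

samples = [
    ("2+2?", "4", "4"),  # correct -> kept
    ("2+2?", "5", "4"),  # wrong   -> dropped
]
kept = build_finetune_set(samples, exact_match_grader)
assert kept == [("2+2?", "4")]
```

This is also why math and science tasks fit the paradigm so well, as noted in the conversation: exact-match or programmatic graders are easy to write there, while grading creative writing is much harder.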
You know, I can actually say, like, I think one of the O1 Pro demos,[00:35:46] swyx: all of these things that Noam was showing, was that, you know, you can write an entire paragraph or three paragraphs without using the letter A, right?[00:35:53] Creative Writing with AI[00:35:53] swyx: So like, like literally just anything instead of token, like not even token level, character level manipulation and [00:36:00] counting and instruction following. It's, uh, it's very, very strong.[00:36:02] swyx: And so no surprises when I ask it to rhyme, uh, and to, to create song lyrics, it's going to do that very much better than previous models. So I think it's underrated for creative writing.[00:36:11] Alessio: Yeah.[00:36:12] Legal and Ethical Issues in AI[00:36:12] Alessio: What do you think is the rationale that they're going to have in court when they don't show you the thinking traces of O1, but then they want us to, like, they're getting sued for using other publishers' data, you know, but then on their end, they're like, well, you shouldn't be using my data to then train your model.[00:36:29] Alessio: So I'm curious to see how that kind of comes. Yeah, I mean, OpenAI has[00:36:32] swyx: many ways to punish people without taking them to court. They already banned ByteDance for distilling their, their info. And so anyone caught distilling the chain of thought will be just disallowed to continue on, on, on the API.[00:36:44] swyx: And it's fine. It's no big deal. Like, I don't even think that's an issue at all, just because the chains of thought are pretty well hidden. Like you have to work very, very hard to, to get it to leak. And then even when it leaks the chain of thought, you don't know if it's, if it's... [00:37:00] The bigger concern is actually that there's not that much IP hiding behind it, that Cosine, which we talked about, we talked to him on Dev Day, can just fine-tune 4o to beat o1. Claude Sonnet so far is beating O1 on coding tasks, without, at least O1 preview, without being a reasoning model, same for Gemini Pro or Gemini 2.0. So like, how much is reasoning important? How much of a moat is there in all of this proprietary sort of training data that they've presumably accumulated?[00:37:34] swyx: Because even DeepSeek was able to do it. And they had, you know, two months notice to do this, to do R1. So, it's actually unclear how much moat there is. Obviously, you know, if you talk to the Strawberry team, they'll be like, yeah, I mean, we spent the last two years doing this. So, we don't know. And it's going to be interesting because there'll be a lot of noise from people who say they have inference time compute and actually don't because they just have fancy chain of thought.[00:38:00][00:38:00] swyx: And then there's other people who actually do have very good chain of thought. And you will not see them on the same level as OpenAI because OpenAI has invested a lot in building up the mythology of their team. Um, which makes sense. Like the real answer is somewhere in between.[00:38:13] Alessio: Yeah, I think that's kind of like the main data war story developing.[00:38:18] The Data War: GPU Poor vs. GPU Rich[00:38:18] Alessio: GPU poor versus GPU rich. Yeah. Where do you think we are? I think there was, again, going back to like the small model thing, there was like a time in which the GPU poor were kind of like the rebel faction working on like these models that were like open and small and cheap. And I think today people don't really care as much about GPUs anymore.[00:38:37] Alessio: You also see it in the price of the GPUs. Like, you know, that market has kind of like plummeted because people don't want to be, they want to be GPU free. They don't even want to be poor. They just want to be, you know, completely without them. Yeah. How do you think about this war?
You[00:38:52] swyx: can tell me about this, but like, I feel like the, the appetite for GPU rich startups, like the, you know, the, the funding plan is we will raise 60 million and [00:39:00] we'll give 50 of that to NVIDIA.[00:39:01] swyx: That is gone, right? Like, no one's, no one's pitching that. This was literally the plan, the exact plan of like, I can name like four or five startups, you know, this time last year. So yeah, GPU rich startups gone.[00:39:12] The Rise of GPU Ultra Rich[00:39:12] swyx: But I think like, the GPU ultra rich, the GPU ultra high net worth is still going. So, um, now we're, you know, we had Leopold's essay on the trillion dollar cluster.[00:39:23] swyx: We're not quite there yet. We have multiple labs, um, you know, xAI very famously, you know, Jensen Huang praising them for being best boy number one in spinning up a 100,000 GPU cluster in like 12 days or something. So likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So like the GPU ultra rich are going to keep doing that because I think partially it's an article of faith now that you just need it.[00:39:46] swyx: Like you don't even know what it's going to, what you're going to use it for. You just, you just need it. And it makes sense that if, especially if we're going into more researchy territory than we are. So let's say 2020 to 2023 was [00:40:00] let's scale big models territory because we had GPT-3 in 2020 and we were like, okay, we'll go from 175B to 1.8T. And that was GPT-3 to GPT-4. Okay, that's done. As far as everyone is concerned, Opus 3.5 is not coming out, GPT-4.5 is not coming out, and Gemini 2, we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2 trillion parameter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from training costs, from the amount of data, or at least the inference.[00:40:36] swyx: Would you pay 10x the price of GPT-4? Probably not.
Like, like you want something else that, that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.[00:40:47] Emerging Trends in AI Models[00:40:47] swyx: And so when it's more researchy, then you actually need more just general purpose compute to mess around with, uh, at the exact same time that production deployments of the old, the previous paradigm are still ramping up,[00:40:58] swyx: um,[00:40:58] swyx: uh, pretty aggressively.[00:40:59] swyx: So [00:41:00] it makes sense that the GPU rich are growing. We have now interviewed both Together and Fireworks and Replicate. Uh, we haven't done Anyscale yet. But I think Amazon, maybe kind of a sleeper one, Amazon, in a sense of like they, at re:Invent, I wasn't expecting them to do so well, but they are now a foundation model lab.[00:41:18] swyx: It's kind of interesting. Um, I think, uh, you know, David went over there and started just creating models.[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. I think like a lot of AWS customers, you know, they do these big reserved instance contracts and now they got to use their money. That's why so many startups[00:41:37] Alessio: get bought through the AWS marketplace, so they can kind of bundle them together and get preferred pricing.[00:41:42] swyx: Okay, so maybe GPU super rich doing very well, GPU middle class dead, and then GPU[00:41:48] Alessio: poor. I mean, my thing is like, everybody should just be GPU rich. There shouldn't really be, even the GPU poorest, it's like, does it really make sense to be GPU poor?[00:41:57] Alessio: Like, if you're GPU poor, you should just use the [00:42:00] cloud. Yes, you know, and I think there might be a future once we kind of like figure out what the size and shape of these models is where like the tinybox and these things come to fruition where like you can be GPU poor at home.
But I think today is like, why are you working so hard to like get these models to run on like very small clusters where it's like, it's so cheap to run them.[00:42:21] Alessio: Yeah, yeah,[00:42:22] swyx: yeah. I think mostly people think it's cool. People think it's a stepping stone to scaling up. So they aspire to be GPU rich one day and they're working on new methods. Like Nous Research, like probably the most deep tech thing they've done this year is DisTrO or whatever the new name is.[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I tend generally to de-emphasize that historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF Compute launched their compute marketplace this year, and like, who's really using that?[00:42:53] swyx: Like, it's a bunch of small clusters, disparate types of compute, and if you can make that [00:43:00] useful, then that will be very beneficial to the broader community, but maybe still not the source of frontier models. It's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I mean, I think this year, I would say a lot more on device. We are, I now have Apple Intelligence on my phone.[00:43:19] swyx: Doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multi-modal.[00:43:25] Alessio: Yeah, the notification summaries are so-so in my experience.[00:43:29] swyx: Yeah, but they add, they add juice to life. And then, um, Chrome Nano, uh, Gemini Nano is coming out in Chrome. Uh, it's still feature flagged, but you can, you can try it now if you, if you use the, uh, the alpha.[00:43:40] swyx: And so, like, I, I think, like, you know, we're getting the sort of GPU poor version of a lot of these things coming out, and I think it's like quite useful. Like Windows as well, rolling out RWKV in sort of every Windows deployment is super cool.
And I think the last thing that I never put in this GPU poor war, that I think I should now, [00:44:00] is the number of startups that are GPU poor but still scaling very well, as sort of wrappers on top of either a foundation model lab or a GPU cloud.[00:44:10] swyx: GPU cloud, it would be Suno. Suno, Ramp has rated as one of the top ranked, fastest growing startups of the year. Um, I think the last public number is like zero to 20 million this year in ARR, and Suno runs on Modal. So Suno itself is not GPU rich, but they're just doing the training on, on Modal, uh, who we've also talked to on, on the podcast.[00:44:31] swyx: The other one would be Bolt, straight cloud wrapper. And, and, um, again, another, now they've announced 20 million ARR, which is another step up from the 8 million that we put in the title. So yeah, I mean, it's crazy that all these GPU poors are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, where the edges do well, because you're either close to the machines, and you're like [00:45:00] number one on the machines, or you're close to the customers, and you're number one on the customer side.[00:45:03] swyx: And the people who are in the middle, Inflection, um, Character, didn't do that great. I think Character did the best of all of them. Like, you have a note in here that we apparently said that Character's price tag was[00:45:15] Alessio: 1B.[00:45:15] swyx: Did I say that?[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.[00:45:20] Alessio: Then they paid 2.7 billion. I mean, for like,[00:45:22] swyx: yeah.[00:45:22] Alessio: What do you pay for Noam? Like, I don't know what the going rate was like. Maybe the starting price was 1B. I mean, whatever it was, it worked out for everybody involved.[00:45:31] The Multi-Modality War[00:45:31] Alessio: Multimodality war.
And this one, we never had text to video in the first version, which now is the hottest.[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now we had Veo 2 just come out yesterday. Uh, Sora was released last month, last week. I've not tried Sora, because the day that I tried, it wasn't, yeah. I[00:45:54] swyx: think it's generally available now, you can go to Sora.[00:45:56] swyx: com and try it. Yeah, they had[00:45:58] Alessio: the outage. Which I [00:46:00] think also played a part into it. Small things. Yeah. What's the other model that you posted today that was on Replicate? video-01-live?[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from Minimax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.[00:46:20] swyx: I'm not sure it's actually Chinese. I don't know. Don't hold me to that. Yep. China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.[00:46:37] Alessio: Uh, well, sure, in some way. Yeah, I don't think there's much else there. I think like, you know, on the image side, I think it's still open.[00:46:45] Alessio: Yeah, I mean,[00:46:46] swyx: ElevenLabs is now a unicorn. So basically, what is the multi-modality war? The multi-modality war is, do you specialize in a single modality, right? Or do you have a God model that does all the modalities? So this is [00:47:00] definitely still going, in a sense of ElevenLabs, you know, now a unicorn, Pika Labs doing well, they launched Pika 2.0 recently, HeyGen, I think has reached 100 million ARR, AssemblyAI, I don't know, but they have billboards all over the place, so I assume they're doing very, very well. So these are all specialist models, specialist models and specialist startups.
And then there's the big labs who are doing the sort of all-in-one play.[00:47:24] swyx: And then here I would highlight Gemini 2 for having native image output. Have you seen the demos? Um, yeah, it's, it's hard to keep up. Literally they launched this last week and a shout out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.[00:47:43] swyx: So they have voice. They have, you know, obviously image input, and then they obviously can code gen and all that. But the new one that OpenAI and Meta both have but they haven't launched yet is image output. So you can literally, um, I think their demo video was that you put in an image of a [00:48:00] car, and you ask for minor modifications to that car.[00:48:02] swyx: They can generate you that modification exactly as you asked. So there's no need for the Stable Diffusion or ComfyUI workflow of like mask here and then like infill there, inpaint there, and all that, all that stuff. This is small model nonsense. Big model people are like, huh, we've got you, it's everything in the transformer.[00:48:21] swyx: This is the multimodality war, which is, do you, do you bet on the God model or do you string together a whole bunch of, uh, small models like a, like a chump? Yeah,[00:48:29] Alessio: I don't know, man. Yeah, that would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. Um, they've been doing a ton on the product, I would say.[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, the moat is kind of like, maybe, you know, people say Black Forest, the Black Forest models are better than Midjourney on a pixel by pixel basis. But I think when you put it, put it together, have you tried[00:48:53] swyx: the same problems on Black Forest?[00:48:55] Alessio: Yes.
But the problem is just like, you know, on Black Forest, it generates one image. And then it's like, you got to [00:49:00] regenerate. You don't have all these like UI things. Like what I do, no, but it's like a time issue, you know, it's like on Mid[00:49:06] swyx: journey. Call the API four times.[00:49:08] Alessio: No, but then there's no like variations.[00:49:10] Alessio: Like the good thing about Midjourney is like, you just go in there and you're cooking. There's a lot of stuff that just makes it really easy. And I think people underestimate that. Like, it's not really a skill issue, because I'm paying Midjourney, so it's a Black Forest skill issue, because I'm not paying them, you know?[00:49:24] Alessio: Yeah,[00:49:25] swyx: so, okay, so, uh, this is a UX thing, right? Like, you, you, you understand that, at least, we think that Black Forest should be able to do all that stuff. I will also shout out, Recraft has come out, uh, on top of the image arena that, uh, Artificial Analysis has done, apparently taking, uh, Flux's place. Is this still true?[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them I think in one of the early AI Newses of the year. And they have launched a whole bunch of arenas. So, they're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft V3 is now beating Flux 1.1. Which is very surprising [00:50:00] because Flux and Black Forest Labs are the old Stable Diffusion crew who left Stability after, um, the management issues.[00:50:06] swyx: So Recraft has come from nowhere to be the top image model. Uh, very, very strange. I would also highlight that Grok has now launched Aurora, which is, it's very interesting dynamics between Grok and Black Forest Labs because Grok's images were originally launched, uh, in partnership with Black Forest Labs as a, as a thin wrapper.[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own.
I don't know, there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multi-modality war. I would say that so far, the small model, the dedicated model people are winning, because they are just focused on their tasks.[00:50:42] swyx: But the big model people are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request it and it does it, that's how AI should work. Not like a whole bunch of complicated steps. So it really is something. And I think one frontier that we haven't [00:51:00] seen this year, like obviously video has done very well, and it will continue to grow.[00:51:03] swyx: You know, we only have Sora Turbo today, but at some point we'll get full Sora. Oh, at least the Hollywood labs will get full Sora. We haven't seen video to audio, or video synced with audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there's still maybe like five more years of video left to actually be solved.[00:51:23] swyx: I would say that Gemini's approach, compared to OpenAI, Gemini's, or DeepMind's, approach to video seems a lot more fully fledged than OpenAI's. Because if you look at the ICML recap that I published, that so far nobody has listened to, um, that few people have listened to. It's just a different, definitely different audience.[00:51:43] swyx: It's only seven hours long. Why are people not listening? It's like everything in... Uh, so, so DeepMind has, is working on Genie. They also launched Genie 2 and VideoPoet. So, like, they have maybe four years advantage on world modeling that OpenAI does not have. Because OpenAI basically only started [00:52:00] diffusion transformers last year, you know, when they hired, uh, Bill Peebles.[00:52:03] swyx: So, DeepMind has, has a bit of advantage here, I would say, in, in, in showing, like, the reason that Veo 2, while, one, they cherry-pick their videos.
So obviously it looks better than Sora, but the reason I would believe that Veo 2, uh, when it's fully launched will do very well is because they have all this background work in video that they've done for years.[00:52:22] swyx: Like, like last year's NeurIPS, I already was interviewing some of their video people. I forget their model name, but for, for people who are dedicated fans, they can go to NeurIPS 2023 and see, see that paper.[00:52:32] Alessio: And then last but not least, the LLM OS. We renamed it to RAG Ops, formerly known as[00:52:39] swyx: the RAG Ops War. I put the latest chart on the Braintrust episode.[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. So the reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News. So I always put an essay inside of there because Hacker News people like to read and not listen.[00:52:58] Alessio: So episode essays,[00:52:59] swyx: I remember [00:53:00] publishing them separately. You say LangChain, LlamaIndex is still growing.[00:53:03] Alessio: Yeah, so I looked at the PyPI stats, you know. I don't care about stars. On PyPI you see Do you want to share your screen? Yes. I prefer to look at actual downloads, not at stars on GitHub. So if you look at, you know, LangChain still growing.[00:53:20] Alessio: These are the last six months. LlamaIndex still growing. What I've basically seen is like things that, one, obviously these things have a commercial product. So there's like people buying this and sticking with it versus kind of hopping in between things versus, you know, for example, CrewAI, not really growing as much.[00:53:38] Alessio: The stars are growing. If you look on GitHub, like the stars are growing, but kind of like the usage is kind of like flat. In the last six months, have they done some[00:53:4
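Alessio's downloads-over-stars check can be reproduced against pypistats.org, which serves recent download counts as JSON. A minimal sketch, assuming responses in the shape of its `/api/packages/<name>/recent` endpoint; the counts below are invented for illustration, not real numbers for these packages:

```python
import json

# Responses in the shape served by pypistats.org's
# /api/packages/<name>/recent endpoint (fetch with urllib.request in
# practice). The download counts below are made up for illustration;
# they are NOT real numbers for these packages.
RESPONSES = {
    "langchain": '{"data": {"last_day": 140000, "last_week": 980000, "last_month": 4200000}, "package": "langchain", "type": "recent_downloads"}',
    "crewai": '{"data": {"last_day": 11000, "last_week": 74000, "last_month": 310000}, "package": "crewai", "type": "recent_downloads"}',
}

def monthly_downloads(raw: str) -> int:
    """Extract the last-month download count from a pypistats JSON body."""
    return json.loads(raw)["data"]["last_month"]

# Rank packages by actual monthly downloads, busiest first.
ranked = sorted(RESPONSES, key=lambda p: monthly_downloads(RESPONSES[p]), reverse=True)
print(ranked)
```

In practice you would fetch each body over HTTP and compare month over month, which is what separates real usage growth from GitHub-star growth.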
Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the venue and A/V production!For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (which we have now also done for ICLR and ICML), however we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.Since Nathan Lambert (Interconnects) joined us for the hit RLHF 201 episode at the start of this year, it is hard to overstate how much open models have exploded this past year. In 2023 only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course Meta's Llama 1 and 2. This year a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course to the Allen Institute's OLMo, OLMoE, Pixmo, Molmo, and OLMo 2 models. We were honored to host Luca Soldaini, one of the research leads on the Olmo series of models at AI2.Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe, California and the White House. 
We also were honored to hear from Sophia Yang, head of devrel at Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track!Full Talk on YouTubePlease like and subscribe!Timestamps* 00:00 Welcome to Latent Space Live * 00:12 Recap of 2024: Best Moments and Keynotes * 01:22 Explosive Growth of Open Models in 2024 * 02:04 Challenges in Open Model Research * 02:38 Keynote by Luca Soldaini: State of Open Models * 07:23 Significance of Open Source AI Licenses * 11:31 Research Constraints and Compute Challenges * 13:46 Fully Open Models: A New Trend * 27:46 Mistral's Journey and Innovations * 32:57 Interactive Demo: Le Chat Capabilities * 36:50 Closing Remarks and NetworkingTranscriptSession3Audio[00:00:00] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space network to cover each field.[00:00:28] AI Charlie: 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our next keynote covers the state of open models in 2024, with Luca Soldaini and Nathan Lambert of the Allen Institute for AI, with a special appearance from Dr. Sophia Yang of Mistral. Our first hit episode of 2024 was with Nathan Lambert on RLHF 201 back in January,[00:00:57] AI Charlie: where he discussed both reinforcement learning for language [00:01:00] models and the growing post-training and mid-training stack, with hot takes on everything from constitutional AI to DPO to rejection sampling, and also previewed the sea change coming to the Allen Institute. 
And to Interconnects, his incredible Substack on the technical aspects of state-of-the-art AI training.[00:01:18] AI Charlie: We highly recommend subscribing to get access to his Discord as well. It is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course, Meta's Llama 1 and 2.[00:01:43] AI Charlie: This year, a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course, to the Allen Institute's OLMo, [00:02:00] OLMoE, Pixmo, Molmo, and OLMo 2 models. Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe,[00:02:14] AI Charlie: California, and the White House. We also were honored to hear from Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track. As always, don't forget to check the show notes for the YouTube link to their talk, as well as their slides. Watch out and take care.[00:02:35] Luca Intro[00:02:35] Luca Soldaini: Cool. Yeah, thanks for having me over. I'm Luca. I'm a research scientist at the Allen Institute for AI. I threw together a few slides on sort of like a recap of like interesting themes in open models for, for 2024. I have about maybe 20, 25 minutes of slides, and then we can chat if there are any questions.[00:02:57] Luca Soldaini: If I can advance to the next slide. [00:03:00] Okay, cool. So I did the quick check to sort of get a sense of like, how much 2024 was different from 2023. 
So I went on Hugging Face and sort of tried to get a picture of what kind of models were released in 2023 and like, what do we get in 2024?[00:03:16] Luca Soldaini: 2023 we get, we got things like both Llama 1 and 2, we got Mistral, we got MPT, Falcon models, I think the Yi model came in at the tail end of the year. It was a pretty good year. But then I did the same for 2024. And it's actually quite a stark difference. You have models that are, you know, rivaling the frontier-level performance of what you can get from closed models, from like Qwen, from DeepSeek. We got Llama 3. We got all sorts of different models. I added our own Olmo at the bottom. There's this growing group of like, fully open models that I'm going to touch on a little bit later. But you know, just looking at the slides, it feels like 2024 [00:04:00] was just smooth sailing, much better than the previous year.[00:04:04] Luca Soldaini: And you know, you can plot, you can pick your favorite benchmark, or least favorite, I don't know, depending on what point you're trying to make. And plot, you know, your closed model, your open model, and sort of spin it in ways that show that, oh, you know, open models are much closer to where closed models are today versus last year, where the gap was fairly significant.[00:04:29] Luca Soldaini: So one thing that I think I don't know if I have to convince people in this room, but usually when I give these talks about like open models, there is always like this background question in, in, in people's mind of like, why should we use open models? The API's argument, you know, it's, it's just an HTTP request to get output from one of the best models out there.[00:04:53] Luca Soldaini: Why do I have to set up infra and use local models? And there are really like two answers. There is the more [00:05:00] researchy answer for this, which is where my background lies, which is just research. 
If you want to do research on language models, research thrives on, on open models. There is like a large swath of research on modeling, on how these models behave, on evaluation and inference, on mechanistic interpretability, that could not happen at all if you didn't have open models. And for AI builders, there are also like[00:05:30] Luca Soldaini: good use cases for using local models. You know, this is like a very non-comprehensive slide, but you have things like, there are some applications where local models just blow closed models out of the water. So like retrieval, it's a very clear example. We might have constraints like edge AI applications where it makes sense.[00:05:51] Luca Soldaini: But even just like in terms of stability, being able to say this model is not changing under the hood. There's plenty of good cases for, [00:06:00] for open models. And the community is not just models. I stole this slide from one of the Qwen2 announcement blog posts. But it's super cool to see like how much tech exists around open models, on serving them, on making them efficient, on hosting them.[00:06:18] Luca Soldaini: It's pretty cool. And so, if you think about like where the term open comes from, comes from like open source, really open models meet the core tenets of, of open source, specifically when it comes around collaboration. There is truly a spirit, like through these open models, you can build on top of other people's[00:06:41] Luca Soldaini: innovation. We see a lot of this even in our own work of like, you know, as we iterate on the various versions of Olmo, it's not just like every time we collect all the data from scratch. 
No, the first step is like, okay, what are the cool data sources and datasets people have put [00:07:00] together for language model training?[00:07:01] Luca Soldaini: Or when it comes to like our post-training pipeline, one of the steps is you want to do some DPO, and you use a lot of outputs of other models to improve your, your preference model. So it's really having like an open sort of ecosystem that benefits and accelerates the development of open models.[00:07:23] The Definition of Open Models[00:07:23] Luca Soldaini: One thing that we got in 2024, which is not a specific model, but I thought it was really significant, is we got our first open source AI definition. So this is from the Open Source Initiative. They've been generally the steward of a lot of the open source licenses when it comes to software, and so they embarked on this journey in trying to figure out, okay, what does a license, an open source license for a model look like?[00:07:52] Luca Soldaini: The majority of the work is very dry because licenses are dry. So I'm not going to walk through the license step by [00:08:00] step, but I'm just going to pick out one aspect that is very good and then one aspect that personally feels like it needs improvement. On the good side, this open source AI definition actually[00:08:13] Luca Soldaini: is very intuitive. If you ever build open source software and you have some expectation around like what open source looks like for software, for, for AI it sort of matches your intuition. So, the weights need to be freely available, the code must be released with an open source license, and there shouldn't be like license clauses that block specific use cases.[00:08:39] Luca Soldaini: So, under this definition, for example, Llama or some of the Qwen models are not open source, because the license says you can't use this model for this, or it says if you use this model you have to name the output this way, or the derivative needs to be named that way. 
Those clauses don't meet the open source [00:09:00] definition and so they will not be covered.[00:09:02] Luca Soldaini: The Llama license will not be covered under the open source definition. It's not perfect. One of the things that, um, internally, you know, in discussion with, with OSI, we were sort of disappointed by is around the language for data. So you might imagine that an open source AI model means a model where the data is freely available.[00:09:26] Luca Soldaini: There were discussions around that, but at the end of the day, they decided to go with a softened stance, where they say a model is open source if you provide sufficiently detailed information on how to sort of replicate the data pipeline, so you have an equivalent system. Sufficiently detailed, it's very, it's very fuzzy. Don't like that. An equivalent system is also very fuzzy. And this doesn't take into account the accessibility of the process, right? It might be that you provide enough [00:10:00] information, but this process costs, I don't know, $10 million to do. Now the open source definition, like, any open source license, has never been about accessibility, so that's never a factor in open source software, how accessible software is.[00:10:14] Luca Soldaini: I can make a piece of open source, put it on my hard drive, and never access it. That software is still open source; the fact that it's not widely distributed doesn't change the license. But practically there are expectations of like, what we want good open source to be. So, it's, it's kind of sad to see that the data component in this license is not as, as open as some of us would like it to be.[00:10:40] Challenges for Open Models[00:10:40] Luca Soldaini: and I linked a blog post that Nathan wrote on the topic that is less rambly and easier to follow through. 
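Luca's three criteria, weights obtainable, code under an open source license, and no clauses restricting use or derivative naming, can be jotted down as a toy checklist. This is a sketch of the talk's reading of the definition, not the official OSAID text; the field names and license set here are invented for illustration:

```python
# Toy checklist distilled from the talk's summary of the OSI open
# source AI definition. The dict keys and the license set are made up
# for this sketch; the official OSAID is the authoritative text.

OPEN_CODE_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def looks_open_source(release: dict) -> list[str]:
    """Return the reasons a release fails this sketch of the definition."""
    problems = []
    if not release.get("weights_available"):
        problems.append("weights not available")
    if release.get("code_license") not in OPEN_CODE_LICENSES:
        problems.append("code license is not an open source license")
    if release.get("use_restrictions"):
        problems.append("license restricts specific uses or derivative naming")
    return problems

# A Llama-style release: weights are downloadable, but the custom
# license carries use and naming restrictions, so it fails here.
llama_style = {"weights_available": True, "code_license": "custom", "use_restrictions": True}
print(looks_open_source(llama_style))
```

Note what the checklist deliberately cannot capture, per the talk: the definition's data requirement ("sufficiently detailed information" for an "equivalent system") is too fuzzy to encode as a boolean.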
One thing that in general, I think it's fair to say about the state of open models in 2024 is that we know a lot more than what we knew in, [00:11:00] in 2023. Like both on the training data, like the pre-training data you curate, and on like how to do all the post-training, especially like on the RL side.[00:11:10] Luca Soldaini: You know, 2023 was a lot of like throwing random darts at the board. I think in 2024, we have clear recipes that, okay, don't get the same results as a closed lab, because there is a cost in, in actually matching what they do. But at least we have a good sense of like, okay, this is, this is the path to get a state-of-the-art language model.[00:11:31] Luca Soldaini: I think that one thing that is a downside of 2024 is that I think we are more research-constrained than in 2023. It feels that, you know, the barrier for compute that you need to, to move innovation along is just rising and rising. So like, if you go back to this slide, there is now this, this cluster of models that are sort of released by the[00:11:57] Luca Soldaini: compute-rich club. Membership is [00:12:00] hotly debated. You know, some people don't want to be called the rich because it comes with expectations. Some people want to be called rich, but I don't know, there's debate, but like, these are players that have, you know, 10,000, 50,000 GPUs at minimum. And so they can do a lot of work and a lot of exploration and improving of models that is not very accessible.[00:12:21] Luca Soldaini: To give you a sense of like how I personally think about the research budget for each part of the, of the language model pipeline: on the pre-training side, you can maybe do something with a thousand GPUs, really you want 10,000. And like, if you want the real state of the art, you know, your DeepSeek minimum is like 50,000, and you can scale to infinity.[00:12:44] Luca Soldaini: The more you have, the better it gets. 
Everyone on that side still complains that they don't have enough GPUs. Post-training is a super wide sort of spectrum. You can do as little as with like eight GPUs; as long as you're able to [00:13:00] run, you know, a good version of, say, a Llama model, you can do a lot of work there.[00:13:05] Luca Soldaini: You can scale a lot of the methodology, it just like scales with compute, right? If you're interested in, you know, your open replication of what OpenAI's o1 is, you're going to be on the 10K spectrum of GPUs. Inference, you can do a lot with very few resources. Evaluation, you can do a lot with, well, I should say, at least one GPU if you want to evaluate[00:13:30] Luca Soldaini: open models. But in general, like if you care a lot about interventions to do on these models, which is my preferred area of, of research, then, you know, the resources that you need are quite, quite significant. Yeah. One other trend that has emerged in 2024 is this cluster of fully open models.[00:13:54] Luca Soldaini: So Olmo, the model that we built at AI2, being one of them. And you know, it's nice [00:14:00] that it's not just us. There's like a cluster of other, mostly research efforts, who are working on this. And so it's good to give you a primer of what like fully open means. So fully open, the easy way to think about it is: instead of just releasing a model checkpoint that you run, you release a full recipe, so that other people working in[00:14:24] Luca Soldaini: that space can pick and choose whatever they want from your recipe and create their own model or improve on top of your model. You're giving out the full pipeline and all the details there instead of just like the end output. So I pulled up the screenshot from our recent MoE model.[00:14:43] Luca Soldaini: And like for this model, for example, we released the model itself. 
The data that it was trained on, the code, both for training and inference, all the logs that we got through the training run, as well as every intermediate checkpoint. And like the fact that you release different parts of the pipeline [00:15:00] allows others to do really cool things.[00:15:02] Luca Soldaini: So for example, this tweet from early this year from folks at Nous Research: they used our pre-training data to do a replication of the BitNet paper in the open. So they took just, really, like the initial part of the pipeline and then built the, the thing on top of it. It goes both ways.[00:15:21] Luca Soldaini: So for example, for the Olmo 2 model, a lot of our pre-training data for the first stage of pre-training was from this DCLM initiative that was led by folks at, ooh, a variety of, a variety of institutions. It was a really nice group effort. But you know, it was nice to be able to say, okay, you know, the state of the art in terms of like what is done in the open has improved.[00:15:46] AI2 Models - Olmo, Molmo, Pixmo etc[00:15:46] Luca Soldaini: We don't have to like do all this work from scratch to catch up to the state of the art. We can just take it directly and integrate it and do our own improvements on top of that. I'm going to spend a few minutes doing like a [00:16:00] shameless plug for some of our fully open recipes. So indulge me in this.[00:16:05] Luca Soldaini: So a few things that we released this year: as I was mentioning, there's the OlmoE model, which I think still is the state-of-the-art MoE model in its size class. And it's also fully open, so every component of this model is available. We released a multimodal model called Molmo. Molmo is not just a model, but it's a full recipe of how you go from a text-only model to a multimodal model, and we applied this recipe on top of Qwen checkpoints, on top of Olmo checkpoints, as well as on top of OlmoE.[00:16:37] Luca Soldaini: And I think there's been a replication doing that on top of Mistral as well. 
On the post-training side, we recently released Tulu 3. Same story. This is a recipe on how you go from a base model to a state-of-the-art post-trained model. We used the Tulu recipe on top of Olmo, on top of Llama, and then there's been an open replication effort [00:17:00] to do that on top of Qwen as well.[00:17:02] Luca Soldaini: It's really nice to see like, you know, when your recipe sort of, it's kind of turnkey, you can apply it to different models and it kind of just works. And finally, the last thing we released this year was Olmo 2, which so far is the best state-of-the-art fully open language model. It sort of combines aspects from all three of these previous models:[00:17:22] Luca Soldaini: what we learned on the data side from OlmoE, and what we learned on like making models that are easy to adapt from the Molmo project and the Tulu project. I will close with a little bit of reflection on like ways this, this ecosystem of open models, like, it's not all roses. It's not all happy. It feels like day to day, it's always in peril.[00:17:44] Luca Soldaini: And, you know, I talked a little bit about like the compute issues that come with it. But it's really not just compute. One thing that is on top of my mind is, due to like the environment and, you know, growing feelings about like how AI is treated, [00:18:00] it's actually harder to get access to a lot of the data that was used to train a lot of the models up to last year.[00:18:06] Luca Soldaini: So this is a screenshot from really fabulous work from Shane Longpre, who, I think, is in Europe, about just diminishing access to data for language model pre-training. So what they did is they went through every snapshot of Common Crawl. Common Crawl is this publicly available scrape of the, of a subset of the internet.[00:18:29] Luca Soldaini: And they looked at, for any given website, whether a website that was accessible in, say, 2017, whether it was accessible or not in 2024. 
And what they found is, as a reaction to like the existence of closed models like ChatGPT or Claude, a lot of content owners have blanket-blocked any type of crawling of their website.[00:18:57] Luca Soldaini: And this is something that we see also internally at [00:19:00] AI2. Like one project that we started this year is we wanted to, we wanted to understand, like, if you're a good citizen of the internet and you crawl following sort of norms and policies that have been established in the last 25 years, what can you crawl?[00:19:17] Luca Soldaini: And we found that there's a lot of websites where the norms of how you express your preference of whether to crawl your data or not are broken. A lot of people would block a lot of crawling, but do not advertise that in robots.txt. You can only tell that they're blocking you from crawling when you try doing it.[00:19:37] Luca Soldaini: Sometimes you can't even crawl the robots.txt to, to check whether you're allowed or not. And then for a lot of websites there's, there's like all these technologies that historically have existed to make serving websites easier, such as Cloudflare or DNS. They're now being repurposed for blocking AI or any type of crawling [00:20:00] in a way that is very opaque to the content owners themselves.[00:20:04] Luca Soldaini: So, you know, you go to these websites, you try to access them and they're not available, and you get a feeling it's like, oh, something changed on the, on the DNS side that's blocking this, and likely the content owner has no idea. They're just using Cloudflare for better, you know, load balancing.[00:20:25] Luca Soldaini: And this is something that was sort of sprung on them with very little notice. And I think the problem is, this, this blocking, it really impacts people in different ways. 
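The "good citizen" check Luca describes starts with robots.txt, and Python's standard library can evaluate one directly. A minimal sketch with a made-up robots.txt for a site that blanket-blocks one hypothetical AI crawler while allowing everyone else:

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt: one hypothetical AI crawler is blanket-blocked,
# every other user agent is allowed. In practice you would fetch this
# file from the site before crawling any page.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The blocked crawler may not fetch; everyone else may.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The failure mode in the talk is exactly that this file is often missing or misleading: sites block at the Cloudflare or DNS layer instead, so the polite check above passes while the actual fetch fails.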
It disproportionately helps companies that have a headstart, which are usually the closed labs, and it hurts newcomer players, who either have now to do things in a sketchy way, or are never going to get the content that the closed labs might have.[00:20:54] Luca Soldaini: So there's, there was a lot of coverage. I'm going to plug Nathan's blog post again. [00:21:00] I think the title of this one is very succinct, which is like: before thinking about running out of training data, we're actually running out of open training data. And so if we want better open models, this should be on top of our mind.[00:21:13] Regulation and Lobbying[00:21:13] Luca Soldaini: The other thing that has emerged is that there are strong lobbying efforts on trying to define any kind of AI as like new and extremely risky. And I want to be precise here. Like the problem is not, not considering the risks of this technology. Every technology has risks that, that should always be considered.[00:21:37] Luca Soldaini: The thing that to me is, sorry, is disingenuous, is like just putting this AI on a pedestal and calling it like an unknown alien technology that has like new and undiscovered potential to destroy humanity. When in reality, all the dangers I think are rooted in [00:22:00] dangers that we know from the existing software industry, or existing issues that come with using software in a lot of sensitive domains, like medical areas.[00:22:13] Luca Soldaini: And I also noticed a lot of efforts that have actually been going on in trying to make these open models safe. I pasted one here from AI2, but there's actually like a lot of work that has been going on on like, okay, if you're distributing this model openly, how do you make it safe?[00:22:31] Luca Soldaini: What's the right balance between accessibility of open models and safety? 
And then also there's this annoying brushing under the rug of sort of concerns that are then proved to be unfounded. You know, if you remember the beginning of this year, it was all about the bio risk of these open models.[00:22:48] Luca Soldaini: The whole thing fizzled because, finally, there's been like rigorous research, not just this paper from the Cohere folks, but there's been rigorous research showing [00:23:00] that this is really not a concern that we should be worried about. Again, there is a lot of dangerous use of AI applications, but this one was just like a lobbying ploy to just make things sound scarier than they actually are.[00:23:15] Luca Soldaini: So I got to preface this part by saying this is my personal opinion, it's not my employer's, but I look at things like the SB 1047 from, from California. And I think we kind of dodged a bullet on, on this legislation. We, you know, the open source community, a lot of the community came together at the last, sort of the last minute and did a very good effort trying to explain all the negative impacts of this bill.[00:23:43] Luca Soldaini: But there's like, I feel like there's a lot of excitement on building these open models or like researching on these open models. And lobbying is not sexy, it's kind of boring, but it's sort of necessary to make sure that this ecosystem can, can really [00:24:00] thrive. At the end of the presentation, I have some links, emails, sort of standard thing, in case anyone wants to reach out, and if folks have questions or anything they wanted to discuss.[00:24:13] Luca Soldaini: Is there an open floor? I think we have Sophia,[00:24:16] swyx: who wants to, who, one, one very important open model that we haven't covered is Mistral. I'll ask her on this slide. Yeah, yeah. Well, well, it's nice to have the Mistral person recap the year in Mistral. 
But while Sophia gets set up, does anyone have like, just thoughts or questions about the progress in this space?[00:24:32] Questions - Incentive Alignment[00:24:32] swyx: Do you always have questions?[00:24:34] Question: I'm very curious how we should build incentives to build open models, things like François Chollet's ARC Prize, and other initiatives like that. What is your opinion on how we should better align incentives in the community so that open models stay open?[00:24:49] Luca Soldaini: The incentive bit is, like, really hard.[00:24:51] Luca Soldaini: Like, it's something that we actually think a lot about internally, because like building open models is risky. [00:25:00] It's very expensive. And so people don't want to take risky bets. I think, definitely, like the challenges, like the ARC challenge, I think those are like very valid approaches for it.[00:25:13] Luca Soldaini: And then I think in general, promoting, building, so, any kind of effort to participate in this challenge, in those challenges, if we can promote doing that on top of open models and sort of really lean into like this multiplier effect, I think that is a good way to go. If there were more money for that,[00:25:35] Luca Soldaini: for efforts like research efforts around open models. There's a lot of, I think there's a lot of investment in companies that at the moment are releasing their models in the open, which is really cool. But it's usually more because of commercial interest and not about wanting to support these, like, open models in the long term. It's a really hard problem, because I think everyone is operating sort of [00:26:00] at,[00:26:01] Luca Soldaini: everyone is at their local maximum, right? In ways that really optimize their position on the market. The global maximum is harder to achieve.[00:26:11] Question2: Can I ask one question? 
No.[00:26:12] Luca Soldaini: Yeah.[00:26:13] Question2: So I think one of the gaps between the closed and open source models is the multilinguality. So the closed source models like ChatGPT work pretty well on low-resource languages, which is not the same for the open, open source models, right?[00:26:27] Question2: So is it in your plan to improve on that?[00:26:32] Luca Soldaini: I think in general,[00:26:32] Luca Soldaini: yes. I think we'll see a lot of improvements there in, like, 2025. Like, there's groups, like, Procurement English on the smaller side, that are already working on, like, better crawl support, multilingual support. I think what I'm trying to say here is you really want experts[00:26:54] Luca Soldaini: who are actually in those countries, that speak those languages, to [00:27:00] participate in the international community. To give you, like, a very easy example: I'm originally from Italy. I think I'm terribly equipped to build a model that works well in Italian. Because one of the things you need to be able to do is having that knowledge of, like, okay, how do I access, you know, libraries, or content that is from this region, that covers this language?[00:27:23] Luca Soldaini: I've been in the US long enough that I no longer know. So, I think that's the effort that folks in Central Europe, for example, are doing, around like, okay, let's tap into regional communities, to get access, you know, to bring in collaborators from those areas. I think it's going to be, like, very crucial for getting products there.[00:27:46] Mistral intro[00:27:46] Sophia Yang: Hi everyone. Yeah, I'm super excited to be here to talk to you guys about Mistral. A really short and quick recap of what we have done, what kind of models and products we have released in the [00:28:00] past year and a half. 
So most of you have already known that we are a small startup, founded about a year and a half ago in Paris in May 2023. It was founded by our three co-founders, and in September 2023, we released our first open source model, Mistral 7B. Yeah, how, how many of you have used or heard about Mistral 7B?[00:28:24] Sophia Yang: Hey, pretty much everyone. Thank you. Yeah, it's pretty popular, and our community really loved this model, and in December '23, we, we released another popular model with the MoE architecture, Mixtral 8x7B. And oh, going into this year, you can see we have released a lot of things this year.[00:28:46] Sophia Yang: First of all, in February 2024, we released Mistral Small, Mistral Large, and Le Chat, which is our chat interface, which I will show you in a little bit. We released an embedding model for, you [00:29:00] know, converting your text into embedding vectors, and all of our models are available on the, the big cloud providers. So you can use our models on Google Cloud, AWS, Azure, Snowflake, IBM.[00:29:16] Sophia Yang: So very useful for enterprises who want to use our models through the cloud. And in April and May this year, we released another powerful open source MoE model, Mixtral 8x22B. And we also released our first code model, Codestral, which is amazing at 80-plus languages. And then we provided another fine-tuning service for customization.[00:29:41] Sophia Yang: So because we know the community loves to fine-tune our models, we provide a very nice and easy option for you to fine-tune our models on our platform. And also we released our fine-tuning codebase called mistral-finetune. It's open source, so feel free to take a look.[00:29:58] Sophia Yang: More models. [00:30:00] From July to November this year, we released many, many other models. First of all are the two new best small models. 
We have Ministral 3B, great for deploying on edge devices. We have Ministral 8B: if you used to use Mistral 7B, Ministral 8B is a great replacement, with much stronger performance than Mistral 7B.

[00:30:25] Sophia Yang: We also collaborated with NVIDIA and open sourced another model, Mistral Nemo 12B, another great model. And just a few weeks ago, we updated Mistral Large to version 2, with updated state-of-the-art features and really great function calling capabilities. It supports function calling natively.

[00:30:45] Sophia Yang: And we released two multimodal models: Pixtral 12B, which is open source, and Pixtral Large. Just amazing models for not only understanding images, but also great at text understanding. [00:31:00] A lot of image models are not so good at textual understanding, but Pixtral Large and Pixtral 12B are good at both image understanding and textual understanding.

[00:31:09] Sophia Yang: And of course, we have models for research: Codestral Mamba, built on the Mamba architecture, and Mathstral, great for working with math problems. So yeah, that's another model.

[00:31:29] Sophia Yang: Here's another view of our model lineup. We have several premier models, which means these models are mostly available through our API. I mean, all of the models are available through our API, except for Ministral 3B. But the premier models have a special license, the Mistral Research License: you can use them for free for exploration, but if you want to use them in enterprise, for production use, you will need to purchase a license [00:32:00] from us.

[00:32:00] Sophia Yang: So on the top row here, we have Ministral 3B and 8B as our premier models. Mistral Small is best for low-latency use cases, Mistral Large is great for your most sophisticated use cases, and Pixtral Large is the frontier-class multimodal model.
And we have Codestral, great for coding, and then again the Mistral Embed model.

[00:32:22] Sophia Yang: At the bottom of the slide here, we have several Apache 2.0 licensed open-weights models, free for the community to use; and if you want to fine-tune them, use them for customization or production, feel free to do so. The latest, we have Pixtral 12B. We also have Mistral Nemo, Codestral Mamba and Mathstral, as I mentioned, and we have three legacy models that we don't update anymore.

[00:32:49] Sophia Yang: So we recommend you move to our newer models if you are still using them. And then, just a few weeks ago, [00:33:00] we made a lot of improvements to our chat interface, le Chat. How many of you have used le Chat? Oh, no, only a few. Okay. I highly recommend le Chat. It's chat.mistral.ai. It's free to use.

[00:33:16] Sophia Yang: It has all the amazing capabilities I'm going to show you right now. But before that: "le chat" in French means cat. So this is actually a cat logo; you can tell these are the cat eyes. Yeah. So first of all, I want to show you something. Maybe let's take a look at image understanding.

[00:33:36] Sophia Yang: So here I have a receipt, and I want to ask... just going to get the prompts. Cool. So basically I have a receipt, and I said: I ordered, I don't know, coffee and the sausage. How much do I owe? Add an 18 percent tip. So hopefully it was able to get the cost of the coffee and the [00:34:00] sausage and ignore the other things.

[00:34:03] Sophia Yang: And yeah, I don't really understand this, but I think this is coffee. Yeah, nine-eight. And then the cost of the sausage, we have 22 here. And then it was able to add the costs, calculate the tip, and all that. Great. So it's great at image understanding, it's great at OCR tasks. If you have OCR tasks, please use it.

[00:34:28] Sophia Yang: It's free on the chat. It's also available through our API. And also I want to show you a canvas example.
A lot of you may have used canvas with other tools before, but with le Chat it's completely free. Here, I'm asking it to create a canvas that uses PyScript to execute Python in my browser.

[00:34:51] Sophia Yang: Let's see if it works. Import this. Okay, so, yeah, basically it's executing [00:35:00] Python here. Exactly what we wanted. And the other day, I was trying to ask le Chat to create a game for me. Let's see if we can make it work. Yeah, the Tetris game. Yep. Let's just get one row. Maybe. Oh no. Okay. All right. You get the idea. I failed my mission. Okay. Here we go. Yay! Cool. Yeah. So as you can see, le Chat can write code for a simple game pretty easily, and you can ask le Chat to explain the code or make updates however you like. Another example: there is a bar here I want to move.

[00:35:48] Sophia Yang: Okay, great, okay. And let's go back to another one. Yeah, we also have web search capabilities. You can [00:36:00] ask: what's the latest AI news? Image generation is pretty cool: generate an image about researchers. Okay. In Vancouver? Yeah, it's Black Forest Labs' Flux Pro. Again, this is free, so... oh, cool.

[00:36:19] Sophia Yang: I guess researchers here are mostly from the University of British Columbia. That's smart. Yeah. So this is le Chat. Please feel free to use it, and let me know if you have any feedback. We're always looking for improvements, and we're going to release a lot more powerful features in the coming years.

[00:36:37] Sophia Yang: Thank you. Get full access to Latent Space at www.latent.space/subscribe
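The receipt demo above runs in le Chat, but as Sophia notes, the same multimodal models are also reachable through the API. As a minimal sketch (the endpoint URL, the model name `pixtral-12b-2409`, and the mixed text/image `content` message format follow Mistral's public API docs at the time of writing and may change; the receipt URL is a placeholder), a receipt-OCR request could be assembled like this:

```python
import json

# Chat-completions endpoint per Mistral's public API docs (may change).
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_receipt_ocr_request(image_url: str, question: str,
                              model: str = "pixtral-12b-2409") -> dict:
    """Build a chat-completions payload pairing an image with a text question,
    mirroring the receipt demo from the talk."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # A multimodal user message is a list of typed parts:
                # one text part and one image part.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": image_url},
                ],
            }
        ],
    }

payload = build_receipt_ocr_request(
    "https://example.com/receipt.jpg",  # hypothetical image URL
    "I ordered coffee and the sausage. How much do I owe? Add an 18% tip.",
)
print(json.dumps(payload, indent=2))
# To actually send it, POST this JSON to API_URL with an
# "Authorization: Bearer <MISTRAL_API_KEY>" header.
```

The sketch only constructs the request body; sending it requires an API key, and the response follows the usual chat-completions shape with the model's answer in `choices[0].message.content`.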
While the qualifiers for the next Africa Cup of Nations are in full swing this week, Mondial Sports takes the time to analyze a national team that is often well placed but never rewarded: Mali. How to explain this glass ceiling? Does the environment around Malian football get in the players' way? To answer these questions, Sophiane Amazian is joined by specialist Patrick Juillard and two former Malian players: Momo Sissoko and Samba Sow! The rendezvous is set: 16:10 universal time, live on RFI!
Political decisions on climate protection are meeting with more and more rejection and hatred in public discourse. Not least because climate policy is a declared bogeyman and battleground of right-wing and far-right forces. With the sociologist and right-wing extremism researcher Matthias Quent, we discuss why the right is currently mobilizing against climate protection.
Quentíssima … SP / BRASIL
Security, immigration, crime: the AfD's constant fear scenarios have radicalized many people, says sociologist Matthias Quent. Rational politics can hardly reach them anymore; the last resort remains party-ban proceedings. Quent, Matthias www.deutschlandfunkkultur.de, Studio 9
In the heart of the night, listeners open up freely to the attentive and caring ears of Sana Blanger from Monday to Thursday and Valérie Darmon from Friday to Sunday. No judgments, no taboos: a frank conversation, but also answers to the questions listeners are asking themselves. A moment of exchange and sharing, conducive to confiding, so you can leave with a lighter heart.
Barenberg, Jasper www.deutschlandfunk.de, Das war der Tag
Football, news, humor, opinion and trash talk! A debate program about everything that involves football, in a relaxed and lively way.
In this Thursday's (27th) column, professor Deonísio da Silva discusses the origin of the words "canjica", "pamonha" and "quentão", as well as the expression "Pular a fogueira de São João" (jumping over the Saint John's bonfire).
Quentão (with cachaça). Ingredients (6 servings): 1 1/2 cups sugar; 1 1/2 cups water; 1 apple (peeled); 10 cloves; 2 cinnamon sticks; 10 slices of ginger; 1 orange, sliced; 2 limes, sliced; 1 liter of cachaça. Preparation (30 min): 1. Melt the sugar in a pan. 2. Then add the water. 3. With a wooden or silicone spoon, loosen the sugar from the bottom of the pan and mix it with the water. 4. When the mixture is homogeneous, add the apple. 5. Add the cloves, cinnamon sticks, ginger, orange and limes to the mixture. 6. Add the liter of cachaça and bring to a boil. Quentão (with wine). Ingredients (12 servings): 2 l red wine; 1 cup (200 ml) water; half a cup of cachaça; 1 1/2 cups sugar; 2 cinnamon sticks; 12 cloves (or to taste); 8 small slices of ginger (or to taste). Preparation (20 min): 1. Combine all the ingredients in a pan over the heat. 2. Once it comes to a boil, let it cook for another 10 minutes. 3. It's ready, just serve. 4. Drink it hot. #culináriafaladacomnaluzica #receitadefamília #receitasculinariasparaouvir #quentaodevinho @quentaodecachaça @Naluzica @naluzinhaniki.56 @cozinha.compartilhada --- Send in a voice message: https://podcasters.spotify.com/pod/show/culinariafaladanaluzica/message
Sports discussion with Jean-François Baril. For information about the use of your personal data: https://omnystudio.com/policies/listener/fr
In another recipe honoring the month of June, Flavio Trombino talks about quentão. Check it out! Ingredients: 600 ml cachaça; 400 ml water; 1 1/2 cups sugar; 2 tablespoons grated ginger; peel of one orange; peel of one lime; cinnamon and cloves to taste. See omnystudio.com/listener for privacy information.
Quent, Matthias www.deutschlandfunk.de, Andruck - Das Magazin für Politische Literatur
Being struck directly by lightning accounts for only 5% of lightning injuries. So which kind of lightning strike causes the most injuries? Listen to La Pluie et le beau temps with Marina Giraudeau from May 2, 2024.
Today, Zohra Bitan, Antoine Diers and Frédéric Farah debate the news alongside Alain Marschall and Olivier Truchot.
Julia and Mary Jo go head-to-head – and arm-to-arm for that matter – after they start co-coaching Quent and Randa's baseball team. Meanwhile Suzanne is protecting her car windshield from foul balls – and taking Charlene along for the ride. There's no getting around the fact it's our last episode with her. Meet us in the middle for a sidebar on parents who, uh, get a wee bit too into kids sports. Then come back Thursday to reminisce about Suzanne and Charlene.
In the heart of the night, listeners open up freely to the attentive and caring ears of Valérie Darmon. No judgments, no taboos: a frank conversation, but also answers to the questions listeners are asking themselves. A moment of exchange and sharing, conducive to confiding, so you can leave with a lighter heart.
What are the keys to providing excellent lifestyle management services to 25 different clients? In this episode of the Easemakers Podcast, Jennifer Quent, a director in Marcum's family office practice, shares what it's like to manage lifestyle and concierge services within a public accounting firm. Tune in to hear the range of services Jennifer and her team provide in any given week, her tips for staying organized, and the pros and cons of supporting a number of different families at once. Plus, in the lightning round, hear how Jennifer's experience as a professional ballet dancer prepared her for a career in private service. Subscribe to the Easemakers Podcast to hear from more experts in the private service industry, and join the Easemakers community to talk to other estate managers and PSPs on a regular basis. Enjoying the Easemakers Podcast? Leave us a rating and a review telling us about your favorite episodes and what you want to learn next! The Easemakers Podcast is presented by Nines, modern household management software and services built for private service professionals and the households they support.
duration: 00:05:20 - Avec sciences - by Alexandra Delbot - A new study, the largest conducted to date, reveals that there is indeed an association between sperm count and phone use, but it in no way demonstrates a causal link between the two phenomena.
In this episode we bring together three shameless gossips to talk about the SCIENCE OF GOSSIP and prove to you that gossiping is moral and good for your health! Learn random facts you can use to humiliate, with very strong arguments, the next person who tries to put you down for being a casual or professional gossip! Access the Vortex link and enroll in Alura's LAST IMMERSION OF THE YEAR: https://alura.tv/vortex-imersaodev23 Host: Katiucha Barcelos. Instagram: @katbarcelos | Twitter/X: @katiucha Co-host: Pedro Pinheiro. Instagram: @ped_pinheiro | Twitter/X: @OdeioPePe Guest: Príncipe Vidane. Instagram: @principevidane | Twitter/X: @principevidane | Twitch: twitch.tv/principevidane | You can also hear Vidane on the podcasts Mau Acompanhado and Pelada na Net. Links mentioned in the episode: Gossip as an important human survival tool: https://www.vice.com/en/article/ne9ae8/gossip-may-have-played-a-role-in-human-survival The human brain is wired to gossip: https://www.npr.org/2011/05/20/136465083/psst-the-human-brain-is-wired-for-gossip Gossip in Ancient Egypt: https://abcnews.go.com/GMA/story?id=1044703 Ancient Egypt gossiped about gay affairs of important figures: https://www.advocate.com/news/2005/08/13/ancient-egypt-gossiped-about-gays Crows gossip and hold grudges: https://www.discovermagazine.com/planet-earth/grudge-holding-crows-pass-on-their-anger-to-family-and-friends === Production: Thyara Castro. Editing, post-production and audio mixing: Roberto Oksman de Aragão. Cover illustration: Brann Sousa. See omnystudio.com/listener for privacy information.
In this special episode of AMIES, recorded live before an audience at the Neuchâtel International Fantastic Film Festival (NIFF), Anaïs and Marie decided to tackle a mix of genres, a true horror teen movie: Jennifer's Body, directed by Karyn Kusama. In Jennifer's Body, the two main characters are Anita (Amanda Seyfried), the "ugly friend" of Jennifer (Megan Fox), a teenager who "eats high-school boys to get nice skin and nice hair." Their friendship is at the heart of the plot, a bond that is both toxic and supernatural. A somewhat ambiguous relationship, though, which hides a very strong element of homoeroticism... Dear AMIES fans, you can find Anaïs and Marie this summer with special episodes on La Chronique des Bridgerton. After Anaïs discovered Friends and Marie discovered Twin Peaks, the two friends will take turns exploring two film genres: horror movies and romantic movies. Screams, laughter and tears are on the program. AMIES is a podcast by Anaïs Bordages and Marie Telling, produced by Slate Podcasts. Sound recording: NIFF. Editorial direction: Christophe Carron. Editorial production, editing and production: Aurélie Rodrigues. Music: Victor Benhamou. Illustration: Victor Mantel. Follow Slate Podcasts on Instagram and Facebook.
Phil LaMarr is an actor known for being one of the original cast members of MadTV, for Pulp Fiction, and for his voice acting roles in Samurai Jack, Futurama, Beavis and Butthead, Family Guy, Teen Titans Go! and a host of other animated series.

Show Notes
Phil LaMarr on IMDB - https://www.imdb.com/name/nm0482851/
Phil LaMarr on Instagram - https://www.instagram.com/phillamarr/
Phil LaMarr on TikTok - https://www.tiktok.com/@phillamarr
Free Writing Webinar - https://michaeljamin.com/op/webinar-registration/
Michael's Online Screenwriting Course - https://michaeljamin.com/course
Free Screenwriting Lesson - https://michaeljamin.com/free
Join My Watchlist - https://michaeljamin.com/watchlist

Automated Transcription

Phil LaMarr: I was developing an animated show based on a friend of mine's web comic called Goblins. And my partner, Matt King, and I, we are both performers, but we adapted the comic into a script. And I called a bunch of my voice actor friends, cuz we were gonna make a trailer, you know, to bring these comic characters to life in animation. And it was funny, cuz Matt and I are actors. We had written the script and we'd acted out these scenes, so in our heads we thought we knew exactly how they'd sound. But then we brought in the amazing Billy West, Maurice LaMarche, Jim Cummings, Steve Blum, Jennifer. And it was funny, because when they performed the scenes we had written, they took it to a whole other level, beyond what existed in our heads. Like, oh my God, they made it so much better than I even imagined it could be.

Michael Jamin: You're listening to Screenwriters Need to Hear This with Michael Jamin. Hey everyone, it's Michael Jamin. Welcome back to Screenwriters Need to Hear This. I got another cool episode. I was so excited about this, I trip over my own words. I am here with actor-writer Phil LaMarr. All right.
So I'm on his IMDB page, going through his credits. Phil, I'm not joking, it's taking me too long to scroll through IMDB to get through all your credits. It's nuts how much you work. So I'm gonna give you a real fast introduction and then we'll talk. This guy does a ton of voiceovers. I think we met on King of the Hill, and I know we worked together on Glenn Martin DDS, but you know him from Futurama, from Beavis and Butthead, Family Guy, The Great North. Every single adult animated show, a ton of kids' shows, Bob's Burgers. That's adult, of course. Rick and Morty, Bob's Burgers, the Bob's Burgers Movie as well. I mean, I'm going through all your stuff here. It's nuts. You were a writer-performer on Mad TV for many years. And I'm sorry to say this, but the coolest role, the one everyone maybe recognizes you most from, you know what it is: you were in Pulp Fiction, and you had your head blown off in the back of the car. And I remember watching like, oh my God, they killed Phil LaMarr.

Michael Jamin: I mean, how awesome was that role? Oh man. But so Phil, thank you for doing this. Welcome. I want to talk all about your amazing career. But now tell me, how did you get into acting? When did you decide you wanted to be an actor?

Phil LaMarr: Well, it's funny, because there are a couple of double steps in terms of how I started being an actor, when I decided to be an actor, and when I got into voiceover. My first time performing was in eighth grade. My school was doing a production of a book that I loved; I didn't consider myself a performer. It was The Phantom Tollbooth. And there's this little character towards the end of The Phantom Tollbooth, the Senses Taker, who will take your sense of purpose, your sense of duty, but he can't take your sense of humor.
And I wanted that part. So that's why I went and auditioned. But I wound up getting cast as one of the leads.

Michael Jamin: Wow. Okay.

Phil LaMarr: And opened the show alone on stage, under a spotlight, doing a two-minute monologue.

Michael Jamin: Okay.

Phil LaMarr: And it flipped a switch in my head. I'm like, oh, I love this. So I started, you know, being an actor because I liked the book.

Michael Jamin: Right. But okay, it's one thing to be acting as a kid in eighth grade, and another to commit your career to it. What happened next?

Phil LaMarr: Well, it's funny, because I didn't consider what I was doing a career. It's just fun, I get to play. And also, I went to an all-boys private school, so the time you got to see girls was when you did a play.

Michael Jamin: Okay, that makes sense. Now we know why you're an actor.

Phil LaMarr: And I wound up graduating, and I applied to colleges that had, you know, drama programs: Northwestern, Carnegie Mellon, Yale University. But I wound up deciding not to go to Carnegie Mellon, and I went to Yale. I was like, no, no, I just want to go to college. And I did not decide to pursue acting as a career; I just majored in English. It was on the flight back home to LA that I said, you know what, maybe I should pursue this acting thing. I mean, I enjoy it, and some people say I'm pretty good at it. I either gotta do it now, or wait till my mid-forties when I have a midlife crisis.

Michael Jamin: But this is Yale undergrad. Yale's really known for the grad school, the School of Drama. But you...

Phil LaMarr: It goes back to that thing: when you're an actor and you say you went to Yale, people assume, oh, like Meryl Streep and Henry Winkler. It's like, no, no. "I didn't know that."

Michael Jamin: But so after you got outta college... you went to Yale, and, what is it, Princeton over there?
We're gonna set aside our differences long enough to have this conversation. But so, after college you're like, okay, I got a big fancy Yale degree and I'm gonna become an actor.

Phil LaMarr: Right. And, you know, had I decided to be a comedy writer with a Harvard degree, that would've been...

Michael Jamin: Yes. That would make sense.

Phil LaMarr: ...a career path that made sense. As a Yalie, there were no famous Yale undergrads as writers or producers or anything. There were a handful of, you know, drama school actors. But again, I didn't go to that drama school. So I'm like, okay.

Michael Jamin: Yeah, there's no connection. People talk about the connections. No, there's no inroad just cuz you went to Yale.

Phil LaMarr: Yeah. No. The only famous undergraduate actors at that time, in the eighties, were two women who were famous before they came to Yale: Jennifer Beals and Jodie Foster.

Michael Jamin: Right. Exactly. All right. So then you made this leap. Your parents must have been thrilled. How long before you started getting work, and how did you start getting work?

Phil LaMarr: Well, and this is another one of the double steps. When I made this decision, I already had my SAG card.

Michael Jamin: How did you get that?

Phil LaMarr: Because back in high school, a friend of my mother's worked for NBC. I think my mother had dragged her to see a couple of my plays. And so she said, hey, we're doing this cartoon and we're gonna use real kids for the kids' voices, which back in the eighties was a rare thing. And she asked me to come in and audition for it. And I got a job on the Mr. T cartoon in the mid-eighties.

Michael Jamin: Oh, wow.

Phil LaMarr: And that got me my union card. Now, again, I did not consider this a career path. It was just a cool summer job.

Michael Jamin: Yeah.
Now, the thing is, cause I hear this a lot, people say to me: yeah, I can do a million voices (and you can do literally a million voices), how do I get into voice acting? And they don't seem to make the connection that it's not enough to do voices. You have to know how to act, ideally be trained; you have to know how to perform and act.

Phil LaMarr: That's what I always tell people who ask me that question. I say, the first thing you need to know is that the term "voice acting" is a misnomer, because the acting comes before the voice.

Michael Jamin: Yes. Yes.

Phil LaMarr: You know, that's why you have amazing people like Cree Summer, who has a really distinctive speaking voice, but she has the acting ability to make every character completely different and real. It's the same thing as a movie star: it's the same face, but it's always a different character.

Michael Jamin: But there's something else that you bring, and I say this because you are a consummate pro, truly a pro, compared to non-voice actors, I guess. When I'm directing a voiceover actor, sometimes if they haven't done a lot of voice acting, they don't realize they're using their face or their body. And you say, no, no, no: I see you're acting the part, I see you're playing mad, but I have to hear it in my ear. And so I don't look at them when I'm directing; I wanna hear it. So talk about that a little bit.

Phil LaMarr: Yes, yes. I remember, cuz I started out... even though I had that job in high school, I did not consider it a voice acting career. It was just a goofy summer job on a cartoon that nobody I knew watched. So I came home after college and pursued on-camera acting and stage.
And so a few years later, actually it was after several years of Mad TV, where we did Claymation pieces, which got me doing multiple characters on mic as opposed to just multiple characters on camera, which I was also doing on Mad TV, I remember I decided to actively pursue the voice acting thing. Cuz at this point, in the post-early-nineties era when cable blew up, voice acting became a job. When we were kids, it was just something that six guys, Mel Blanc and five other dudes, did: they voiced every cartoon of our childhood. You know, Mel Blanc, Daws Butler, that was it. But in the nineties, once Nickelodeon had 24 hours of children's programming, there were a lot more cartoon voices. And so, like, oh, this could be a path now. And I remember, in one of my early sessions, I fell into my on-camera acting, face acting. And they said, okay, Phil, stop. Do that line again, angrier. I did it again. They said, hold on, we're gonna play them both back. And they sounded exactly the same. And I realized what you just said: oh my God, I just made an angrier face.

Michael Jamin: Right.

Phil LaMarr: And that's one of the, you know, skills of voice acting. The same way that singers can put forth feeling, or fun, or whatever, through their voice.

Michael Jamin: Right.

Phil LaMarr: You know, dancers do it through their bodies.

Michael Jamin: Right. But when you perform, let's say you're doing something on camera, how much thought do you give it? Is it just second nature to go, okay, now I can use the rest of my body? Or how much thought do you have to give to switching between different, you know, skill sets?

Phil LaMarr: Well, the good thing is, you do have to flip a switch in your head, because when you're on stage, it's the exact same job: bringing this script to life.
But you have to do it with different tools. And the same thing when you're doing it on camera, and the same thing when you're doing it on microphone. You have to gauge it. Cuz you know, you read the script, you see the character, you embody it. But then it's: how do you communicate it to the audience?

Michael Jamin: Right.

Phil LaMarr: You know, and it's funny, because with voice acting, we learn to run the character through our ears. You know how, in the old days, you'd see announcers doing this? Do you know what that is about?

Michael Jamin: No. What is that?

Phil LaMarr: It's because all of us regular people hear our voices from inside our heads. We're not hearing what other people hear. But when you do this, you are channeling your voice...

Michael Jamin: That's what...

Phil LaMarr: ...from your mouth into your ear, so you hear what your voice sounds like outside your head.

Michael Jamin: Oh, I see. That's so funny. I thought they were stopping their ear, but they're not; they're just redirecting the voice into their ear. Oh wow, I had no idea.

Phil LaMarr: So you can hear the subtlety, because if you do something with your teeth, you don't hear that inside your head; it's only what other people hear. But that's something you might want with a character. When I teach workshops, I always try to tell people: there are things we hear. It's the same thing with your face: when you want to express anger, you don't just keep your face flat. And it's the same thing if there's something about a character. Let's say I'm doing this character, but then I see the drawing and the guy's got a big beard. Oh, well, let me make him sound beardier.

Michael Jamin: Right.
Right.

Phil LaMarr: Which isn't necessarily true; just growing a beard doesn't change your voice.

Michael Jamin: Uh-huh.

Phil LaMarr: But there are things that, when we hear them, give us the sense of it.

Michael Jamin: Right. Do you have a preference now? You work so much in voice acting, but do you prefer that over, like, on camera?

Phil LaMarr: No, it's funny, cuz at Comic-Con people will ask: you work in so many media, what's your favorite? And the truth of the matter is, and this is what I tell them, it's not about the medium, it's about the quality.

Michael Jamin: The quality of the writing, or what?

Phil LaMarr: The quality of the writing, the quality of the directing, the quality of the experience. Because to me, the cartoon Samurai Jack, which I consider a work of art, has more in common with Pulp Fiction than it does with, you know, Pound Puppies or some goofy little Saturday-morning cartoon that's more focused on selling toys than on actually putting out story.

Michael Jamin: Yeah. Right. But in terms of voice acting, I mean, you don't have to get into hair and makeup, you don't have to memorize anything. And that's a whole other skill as well, memorizing the text.

Phil LaMarr: Well, but that's actually harder, because when you work on stage or on camera, you get time to rehearse.

Michael Jamin: Right.

Phil LaMarr: You get to practice, with a director helping guide you, people watching you, and you build the character over time. And then you don't have to make it work till they say action.

Michael Jamin: Right.

Phil LaMarr: But when you're doing voiceover, you're handed a sheet of paper, you're reading words off a page, and you have to bring those to life instantly.

Michael Jamin: Yeah, exactly.
Now, cuz when we worked together... well, we did King of the Hill first, but on Glenn Martin, just so people know, you didn't audition. We just call you up, hey, we book you through your agent, and you come in, you show up, you've got the job. And I remember approaching you saying, okay, Phil, the character, I remember the character's name was Rasmus, and the only thing you knew about him was that he had a milky eye. He was in his seventies, he had a milky eye. And I go, what voices did you bring? And you gave me like three different voices. And I think I said, that one, a little more gravelly, and boom, that was it. You jumped right into it. That was the extent of the direction you got.

Phil LaMarr: Right. See, and we did that in a minute and a half.

Michael Jamin: Yeah.

Phil LaMarr: Had we been working on a movie, I would've had to go in for wardrobe, had them try on seven different outfits, had them send you the pictures, you know, over two weeks, while I was memorizing all the lines, for us to come to that conclusion.

Michael Jamin: But on most of the voiceover jobs, is that how it is? They basically book you for the day, and, unless you're a regular, you just come on in, you spend an hour or two, and then that's it. Is that how it works for you, mostly?

Phil LaMarr: Well, hopefully. I mean, most of the time you get the script ahead of time, so you get to read the story, know the context. But that's just one episode. You don't have the entire arc of the story; you don't know everything about, if you're playing the villain, about the hero. So you learn most of it when you come into the session.
Phil LaMarr: So you learn most of it when you come into the session.
Michael Jamin: But then there's another thing you have to bring to the table. Okay, you're an excellent actor, but when you do these voices, they don't sound like they're coming from you. They sound like they're coming from ten different people. So how do you approach that? How do you make voices that don't sound anything like any other voice that you do?
Phil LaMarr: Well, it varies. I mean, it's funny, because now, over the years, people will bring up some old character, and I realize, okay, that sounds a little similar to that other one. When I was younger, starting in voice acting, I used to think it was about "no, no, every voice should not sound anything like the other one." But I realize it's more about embodying the character. And the thing is, these characters are all different, so I want them to sound different.
Michael Jamin: Right. I don't mean — like, when I first got to King of the Hill, I was shocked. You've been watching the show forever, and then you see the actor playing the part and you go, whoa, that voice is coming from that person? That doesn't sound anything close to them. There's a transformation you're able to do with your voice — that's a different skill. Forget about embodying the character; you're really playing with your vocal cords in a way that almost seems impossible to someone like me.
Phil LaMarr: Oh, thank you. Well, it's a skill set that not everybody has. Like I said — on Samurai Jack, I worked with Mako Iwamatsu, you know, an older Japanese actor who was an icon.
Phil LaMarr: He had starred in movies, starred on Broadway — his name was above the title on a Stephen Sondheim musical. But he had a very distinctive, heavy, textured, heavily accented voice. And I figured, okay, he's just doing his voice. And I remember there was one episode where they cast him as a secondary character in the episode. And I remember thinking to myself, oh, jeez, what are they doing? His voice is so distinctive. I mean, that's like casting The Rock as two characters in a movie. Nobody's gonna get fooled. But he blew my mind and taught me a masterclass, because what he did was — he did not completely transform his voice, but he acted the second character from a completely different perspective. You know, low, dead. He performed it completely differently than he performed Aku, the villain. And when you watch the episode, you can't tell it's him.
Michael Jamin: Right. You can't tell.
Phil LaMarr: Now, part of that has to do with the art, because you're changing your voice, but they're also changing the drawing.
Michael Jamin: Yeah, that's true. But I wonder, how much work do you do on your own at home? How much do you think about other voices? Do you hear a voice and go, hey, that's an interesting thing, maybe I should — do you practice at all? Are you constantly trying to invent new voices for yourself?
Phil LaMarr: Well, I'm not a singer, but I've always had an ear for speech. I do a lot of impressions, you know, comedically and sometimes just job-wise. Actually, weirdly, in tenth grade, my second year of acting, I got a part in one of our high school plays.
Phil LaMarr: We did a production of Play It Again, Sam.
Michael Jamin: Okay.
Phil LaMarr: And in tenth grade, I played Humphrey Bogart.
Michael Jamin: Okay.
Phil LaMarr: And I spent the entire production trying to do my best impression of Humphrey Bogart. "If that plane leaves and you're not on it, you'll regret it. Maybe not today, maybe not tomorrow, but soon, and for the rest of your life." So I watched a lot of videotapes of Humphrey Bogart, and I also had to learn how to do that impression and project it —
Michael Jamin: Right.
Phil LaMarr: — in a theater, cuz there was no microphone. But I think maybe that helped start me on the, you know, aping-people's-voices thing. Which, when I started doing sketch comedy, I leaned into too. "Oh, I'm gonna do a Michael Jackson sketch," you know?
Michael Jamin: Right. So that brings us to MADtv. How did you get that audition? What did you bring to that audition?
Phil LaMarr: Well, when I was in college, I was part of an improv comedy group that started there, and I loved it. Having been taught that the key to drama is conflict, but then being introduced in your late teens, early twenties, to this concept of "yes, and" —
Michael Jamin: And yes-and. Yeah.
Phil LaMarr: You know, improv is collaborative theater. Make your partner look good. Work together. All of this very positive energy. It's like, huh, wow, this isn't just about performance — this is a great life philosophy. So after graduation, I came home to LA and started taking classes at the Groundlings Theatre, the sketch comedy and improv group.
Phil LaMarr: And I did that not for the career, but because I wanted improv back in my life.
Michael Jamin: Right.
Phil LaMarr: And doing improv led me into sketch comedy and writing.
Michael Jamin: Right.
Phil LaMarr: Because that's what the Groundlings do. It's like, okay, that's a great improv — write it down.
Michael Jamin: Right.
Phil LaMarr: Yeah. Now do that character again. Come up with another scene for him.
Michael Jamin: And so that's what you brought to the audition — like, what, three different characters or something?
Phil LaMarr: Well, by the time MADtv came around, I had been doing sitcoms from the early nineties to the mid-nineties. This was '95. So I went to audition for MADtv, and the people at Fox had seen me guest on a bunch of shows. And in fact, I went to audition for MADtv in what they call second position, because I had done a pilot for Fox right before MAD. So it's funny, because I went in there thinking, no, this pilot is amazing — we're gonna be the new Barney Miller. Alright, fine, agents, I'll go for this sketch thing, whatever. I've been doing sketch for six years, but whatever. And so I went in, and they said, okay, bring in some of your characters.
Michael Jamin: What century is calling? That's your phone from 1970, right? Or is it an alarm clock?
Phil LaMarr: Ah, no, it's — I forgot to —
Michael Jamin: What's your phone? It's your iPhone.
Phil LaMarr: It's my agent calling. Oh, you don't need to talk to them.
Michael Jamin: That's Hollywood.
Phil LaMarr: Yes.
Michael Jamin: I can't believe your agent actually calls you. Mine doesn't call.
Phil LaMarr: Alright, let me go back.
Michael Jamin: Yeah. We're gonna put all this in. This is all funny.
Phil LaMarr: Well, anyway, I went to audition for MADtv having done several years at the Groundlings, and having been voted into the main company of the Groundlings alongside Jennifer Coolidge.
Michael Jamin: So you were — oh, that's great. So you were performing regularly on stage. Yeah. Okay.
Phil LaMarr: So sketch comedy was solidly in my back pocket.
Michael Jamin: Yeah.
Phil LaMarr: And, you know, I'd finally started making a living as an actor. I didn't have to do my day job — just doing guest spots and whatnot. And I went in there without any sense of desperation. I don't need this.
Michael Jamin: Right.
Phil LaMarr: I've already got this pilot. And they said, okay, bring us your characters and a couple of impressions, and we'll show you a couple of our sketches. So there were three steps to each audition. And it's funny, because later, after I got the job, I talked to the showrunner, and he said, oh man, you were so relaxed — we loved it.
Michael Jamin: Oh wow.
Phil LaMarr: You know, cuz I remember we had a callback, and there was somebody from the studio — this woman was sitting there like this. And I said, oh, I'm sorry, did I wake you?
Michael Jamin: Wow. I mean, good for you. But what became of that pilot? It didn't go to series?
Phil LaMarr: No.
Michael Jamin: Boy, had you known that...
Phil LaMarr: I know. Well, when we got the callback from MADtv, I'm like, what the heck? And somebody at Fox said, don't worry about the second position.
Michael Jamin: Right. Oh wow. Wow. So you did that for a number of years. And along the way — when did Pulp Fiction occur during this?
Phil LaMarr: Actually, I did Pulp Fiction before MADtv.
Michael Jamin: Okay.
Phil LaMarr: It's funny, cuz the first episode of MADtv had a Pulp Fiction parody in it.
Michael Jamin: Did you play yourself?
Phil LaMarr: Yes. They pitched me playing myself.
Michael Jamin: Oh my God, it was so fun. I mean, it's such a classic role. Do people want to talk to you about that all the time?
Phil LaMarr: Not really.
Phil LaMarr: I find that people only bring up Pulp Fiction around the time a new Tarantino movie comes out.
Michael Jamin: Okay.
Phil LaMarr: But I mean, there are some people who are big fans of it. The funniest thing is, there will be a friend, somebody I've known for several years, but it's the first time they've watched Pulp Fiction since we met: "Oh my God, Phil, I didn't realize that was you."
Michael Jamin: That's so great. Right. Just to remind people again — that scene was Samuel Jackson and John Travolta. I guess the plot line was a bunch of straight-laced college kids, good kids who probably made one bad decision. They weren't troublemakers. They owed money, and then, I guess, the two of them come into the apartment and wind up shooting up the place, and they take you — they're gonna take you to the big guy, you're the hostage, and you're in the back of the car, they've got a gun trained on you, and it hits a bump and they accidentally blow your head off. Right?
Phil LaMarr: Well, actually, the backstory that Quentin and I talked about — cuz my character is Marvin, he's the kid who gets his brains blown out in the back of the car — we decided that the story was that Jules knew somebody who knew Marvin, and arranged for Marvin to — that's why Marvin gets up and opens the door.
Michael Jamin: Okay.
Phil LaMarr: Lets them in. He's on their side.
Michael Jamin: Oh, is that right? I should watch that again. I didn't pick that up at all.
Phil LaMarr: And so they're not taking him as a hostage. Cuz actually, John asks him how many are in there.
Phil LaMarr: It's like, well, there's five, plus our guy.
Michael Jamin: Oh, I gotta watch that again. I missed that. It's been a while. Okay.
Phil LaMarr: So the idea is that Jules knew somebody who knew one of the kids that took Marsellus's briefcase. So he made a connection and was like, okay, we figured it out — our man inside is gonna open the door for us at 7:45, we're gonna come in, we're gonna get the briefcase. But of course, in my head, the idea is that Marvin didn't realize they were gonna kill everybody.
Michael Jamin: Right. He thought they —
Phil LaMarr: — were just gonna take the briefcase.
Michael Jamin: Right. So he's —
Phil LaMarr: — freaked out.
Michael Jamin: And how many days of a shoot is that for you? Is that a week, or what?
Phil LaMarr: I spent about two weeks. There was the car scene and the apartment scene. But the most ironic thing was, I shot my scenes after they had shot the Harvey Keitel cleaning-up-my-body scene.
Michael Jamin: Right.
Phil LaMarr: So when I came onto set, everybody was looking at me like they recognized me, because they had been looking at me dead for two months.
Michael Jamin: But wait — when you say looking at you dead, were there photos or something, or what?
Phil LaMarr: No, no. They built a dummy.
Michael Jamin: Oh.
Phil LaMarr: Because there's a sequence where the Harvey Keitel character comes to clean up —
Michael Jamin: Yeah.
Phil LaMarr: — and they carry the body out of the car into the Tarantino character's apartment.
Michael Jamin: That must have been freaky.
Phil LaMarr: So everybody had been looking at this body in the trunk, you know, and then when I walked on, they were like — it's the same thing as when you walk into a room and you forget you're wearing a name tag.
Michael Jamin: Yeah. Did you know how great that movie was gonna be at the time? I mean, can you tell? How can you tell?
Phil LaMarr: I couldn't tell how successful it was gonna be, because, you know, Reservoir Dogs was really good, but it was a big indie movie.
Michael Jamin: Yes.
Phil LaMarr: Right. But when you read the script for Pulp Fiction —
Michael Jamin: Uh-huh.
Phil LaMarr: — it leapt off the page.
Michael Jamin: Right.
Phil LaMarr: It's funny, because before I went to audition for it — before meeting Quentin Tarantino — we did a Groundlings improv show.
Michael Jamin: Oh, is that right?
Phil LaMarr: He was friends with Julia Sweeney, who was a Groundlings alum, and she invited him to come do a show. I was in the cast. And when he was casting Pulp Fiction, he was thinking about Marvin. He told the casting lady, hey, there's this black guy at the Groundlings — go find him.
Michael Jamin: Right.
Phil LaMarr: And I remember preparing for the audition, reading through the scene three times — I had it. I was off book by the third read. Because of the way it's written: even though it's not everyday life, every line follows exactly what the one before it would say. And it feels natural, even though it is such a heightened world he's created.
Michael Jamin: Yeah. He really is a master with words. I can't imagine he allows much improv. It seems like he knows what he wants, right?
Phil LaMarr: Oh, yeah. The script is like a Rosetta Stone. It is carved. Actually, the only two things that changed in the script were, one, a line of Samuel Jackson's character about pork —
Michael Jamin: Uh-huh.
Phil LaMarr: — because originally they're talking about a pig, and he's like, "Oh, that's the Cary Grant of pigs." And Sam was like, no, man — I don't think this guy would ever think Cary Grant was cool.
Michael Jamin: Right.
Phil LaMarr: So they changed it to the reference to the Eddie Albert show —
Michael Jamin: Oh, Green Acres.
Michael Jamin: Green Acres, yeah. Yeah, yeah. Right.
Phil LaMarr: Yeah. It's like the pig on Green Acres.
Michael Jamin: And —
Phil LaMarr: And the other moment that changed from the script to what we shot was because of a thought that John had.
Michael Jamin: Uh-huh. Travolta.
Phil LaMarr: Travolta, yeah. Because this was a low-budget indie movie. They made this movie, with all those stars, for only $8 million.
Michael Jamin: Are you kidding me? Really?
Phil LaMarr: Yeah. And part of saving that money was, we rehearsed the entire movie on a stage before we started shooting. And I remember going to a soundstage at Sony in Culver City and meeting John Travolta and Samuel L. Jackson for the first time in rehearsal.
Michael Jamin: Right.
Phil LaMarr: And I remember walking in there, and Quentin's like, oh, hey Phil — John, Sam, this is Phil. And John Travolta goes, "Oh, jeez, this is the guy? I have to kill this guy? The audience is gonna hate me."
Michael Jamin: That's a pretty good Travolta — sounds just like him.
Phil LaMarr: Oh, thanks. And I thought he was just joking. But eventually he talked to Quentin. Cuz originally, in the back of the car, the gun is supposed to go off accidentally and shoot Marvin in the throat.
Michael Jamin: Okay.
Phil LaMarr: And then he sits there gurgling while they go back and forth bantering: oh, damn, what are we gonna do? Well, we can't take him to the hospital. Well, I don't have nobody in the Valley. Well, alright, put him out of his misery — on the count of three, I'll hit the horn. So John's character was supposed to shoot me the second time on purpose. And John said, no, no, Quentin — if my character kills this kid on purpose, people won't like him. And he was right. It would've negatively affected his sequence with Uma Thurman.
Michael Jamin: That's absolutely right.
Michael Jamin: But do you think Travolta was interested in protecting the character, or protecting himself as an actor — you know, how people saw him? What do you think?
Phil LaMarr: I think he had a connection to the audience, which I guess was mostly through him, but also through the character. Because, I mean, Quentin could have just said, no, no, the character's just a nasty junkie — yes, he does nasty stuff. But I think John was like, no, no, no — this whole sequence with the girl, he's not nasty.
Michael Jamin: Right. I see. And —
Phil LaMarr: And Quentin agreed with John's take on the character.
Michael Jamin: Yeah. That's so interesting.
Phil LaMarr: Isn't that wild?
Michael Jamin: Yeah, that is. See, it's so funny listening to you — you can so hear how thoughtful you are about acting, how it's a craft. I really hear from you how passionate you are about the craft of acting. Not just being on stage, not just doing voices, but the craft of it, you know?
Phil LaMarr: Exactly. Yeah.
Michael Jamin: Do you miss performing on stage, or do you get a chance to do it a lot? Because that was your original love.
Phil LaMarr: Mm-hmm. Yes, thankfully, I'm still holding on to my performance foundation. My friend Jordan Black, who is another Groundlings alum, created a group about twelve years ago now, and we do a show monthly, live on stage — an improv show at the Groundlings called The Black Version.
Michael Jamin: Uh-huh.
Phil LaMarr: It's an all-black cast, and we take a suggestion from the audience of a classic or iconic motion picture, and then we improv the black version of it.
Michael Jamin: But what if you're not familiar with the classic?
Phil LaMarr: Well, that's the tricky part. Our director, Karen Maruyama, who was one of my teachers at the Groundlings and is now one of my peers, has an encyclopedic knowledge.
Phil LaMarr: She can take a movie from the black-and-white era and know the entire structure, or something that dropped on Netflix last week — she knows everything.
Michael Jamin: But if you don't know it?
Phil LaMarr: Well, what she does is, as the director, she guides the scenes. Okay, alright — Phil, you're gonna play this — let's say we're doing the black version of The Princess Bride. Phil, you are this swordsman who is incredibly skilled. Audience, what do you think his name is? Okay, I-Negro Montoya, that's your name.
Michael Jamin: That's funny.
Phil LaMarr: Like, she'll assign the characters and then guide us from scene to scene. But our choices — like, when we did the black version of The Princess Bride, it was called Her Mama and Them, and Prince Humperdinck was Prince Humpty Hump. And sometimes the choices will change the line of the story, but she tries to take us through the iconic scenes.
Michael Jamin: Right. And you do this once a month?
Phil LaMarr: Yes.
Michael Jamin: Yeah. That's a big commitment.
Phil LaMarr: Yeah. And for twelve years, yeah.
Michael Jamin: I mean, you probably took a break during the pandemic for a little bit.
Phil LaMarr: Yes, yes, we did.
Michael Jamin: But wow.
Phil LaMarr: And recently, we've built an audience and a reputation, and we've started booking on the road. We've played the Kennedy Center in Washington, DC, twice now.
Michael Jamin: So you take it on the road. And how were you able to sell tickets on the road, I mean, so easily?
Phil LaMarr: I think it's the venues, and also, somewhat, just those of us in the group. I mean, Jordan was a writer on SNL and part of the guest cast on Community. Cedric Yarbrough from Reno 911! and tons of other shows.
Michael Jamin: So just your names.
Michael Jamin: So it's kind of just your names — people are like, hey, we recognize these names, we wanna go see it.
Phil LaMarr: Yeah. I mean, I'm not exactly sure how we managed to sell out, you know?
Michael Jamin: That's amazing. All over the place. That sounds like a lot of fun.
Phil LaMarr: It's so much fun.
Michael Jamin: Hey, it's Michael Jamin. If you like my videos and you want me to email them to you for free, join my watch list. Every Friday I send out my top three videos. These are for writers, actors, creative types. You can unsubscribe whenever you want. I'm not gonna spam you, and it's absolutely free. Just go to michaeljamin.com/watchlist.
Wow. I mean, is there a limit to how much you can — just organizing that, getting everyone to get the time off — that's gotta be logistically hard.
Phil LaMarr: Yeah. We don't do the tours that often, because — you know, Gary Anthony Williams from Malcolm in the Middle and stuff — everybody in our cast works a lot. So we can really only guarantee the show once a month. But sometimes when we tour, not everybody goes.
Michael Jamin: Because, yeah — if someone books a part and you're shooting that night, what are you gonna do? That's the way it is. Right.
Phil LaMarr: Or you have to fly to Vancouver for six months.
Michael Jamin: Yeah. Right. Right.
Michael Jamin: And that's part of the plus, for you, of doing a lot of voice acting — you probably get to lead a pretty sane life. For an actor, it can be very hard, you know, being on the road.
Phil LaMarr: Well, and it's also one of the wonderful things about the progress that has come since we started the show. Because part of the reason Jordan created the show is that those of us in the improv world who are people of color oftentimes spent the majority of our time being the only one.
Michael Jamin: Yeah.
Phil LaMarr: But over the years, the numbers — the diversity in the improv world — expanded. It used to be a very suburban art form.
Michael Jamin: Yeah.
Phil LaMarr: But now — and I credit this mostly to Wayne Brady doing Whose Line Is It Anyway.
Michael Jamin: Yeah. Right. And so that really opens up more opportunities. That's interesting — that really has changed a lot. How have you seen it change your opportunities in the past, I don't know, twenty years, thirty years, however long?
Phil LaMarr: Well, it's changed in a lot of ways. One: when I got voted into the Groundlings in 1992, I was the first black person voted into the company in its eighteen years of existence.
Michael Jamin: You're kidding me. That's crazy. That's crazy.
Phil LaMarr: And now the pool of black people who are Groundlings has expanded. It's not just one every eighteen years.
Michael Jamin: Yeah. Right. And in terms of more opportunities for you, even — everything's really opened up for you, right? I imagine.
Phil LaMarr: Well, because those of us in entertainment have expanded what we consider will work.
Phil LaMarr: You know, my son just graduated from NYU, and one of his classmates is the son of the woman who directed The Woman King —
Michael Jamin: Okay.
Phil LaMarr: — the Viola Davis action movie. And I remember watching it and thinking, oh my God, when I was eighteen, no studio in the world —
Michael Jamin: Right. Would touch that.
Phil LaMarr: — would've green-lit an action movie about black women.
Michael Jamin: Yeah. Right.
Phil LaMarr: And the fact that it's out there now and is just another big movie — it's not considered a once-in-a-lifetime thing anymore — that's the progress. And the fact that we have middle-aged women as leads of TV series. Back in the old days, the only lead of a TV series was one beautiful person or one famous, hilarious person. But now they've opened it up.
Michael Jamin: I wonder — is your son planning on going into the arts now that he's graduated from NYU?
Phil LaMarr: Yes. He's a musician. He writes and sings and dances and raps and produces, and he was part of the Clive Davis Institute of Recorded Music, where they teach them music and the music business.
Michael Jamin: Yeah. Wow.
Phil LaMarr: One of his teachers was Clive Davis's daughter, who's a lawyer.
Michael Jamin: Wow. And, I mean, music is different from what you do. I wonder — does it all feel like, "I don't know how to help," you know?
Phil LaMarr: Yeah. There's a lot of that.
Michael Jamin: Like —
Phil LaMarr: "Dad. Dad." Because when your kid goes into show business, you think, well, I've been in show business for forty years. And he's like, "You haven't been in the music business." I'm like, you're right.
Michael Jamin: That's true. So interesting. Wow. And so what's next for you? Is it more of the same?
Michael Jamin: Is there more — well, actually, I know you have a pilot that you're working on. You're getting into the writing side of the business.
Phil LaMarr: Yes, more so. And that, over the last couple of years, has actually been a slight shift, you know, having been performing for so long now — since the eighties. And I've also been writing since the nineties, when I started at the Groundlings. I was writing sketches, and I wrote on MADtv. But just recently, earlier this year, I took a job as a professional writer on a television show for the first time.
Michael Jamin: Right.
Phil LaMarr: And it was pretty wild to have thirty years of sitcoms under your belt and then suddenly see it from a completely different angle.
Michael Jamin: And what was your impression of that?
Phil LaMarr: It was wild. Cuz like you were talking about — the way I look at acting and break it down, all the subtle distinctions — I had never looked at TV writing that way. But to suddenly be in a room with people who have seen it that way for decades, you're like, oh wow. How do I feel like a rookie at 56?
Michael Jamin: Yeah. Right. So there's a lot of catching up. And are you enjoying it as much as you thought, or what do you think?
Phil LaMarr: Well, the challenge part was a little bit tough. But it was great to be working on a really good show with great, talented people, and to be learning something new. Like, for me, when we would write sketches at the Groundlings, you didn't think about anything beyond beginning, middle, and end.
Phil LaMarr: Three minutes.
Michael Jamin: Right, right.
Phil LaMarr: But now you have to think about character arcs, and — okay, if you introduce the character's father, we have to think about their entire family. Is the mother still alive? You're like, oh, right. When you write a sketch, you don't have to think about —
Michael Jamin: You don't think about any of that. Right. And when you're acting the part, you know. So it's so interesting, cuz I always say acting and writing are really two sides of the same coin. It really helps to study both — whatever you want to do, study both. So, yeah, finding that emotional arc — it's all new for you, but you're enjoying it.
Phil LaMarr: Well, and working alongside — I mean, there were people there who, you know, one guy had showrun Will & Grace, another guy worked on Arrested Development, one guy was showrunner on five other shows. To watch how they — cuz for me, I would just pitch out a joke. I'm just gonna say something I think is funny. But they had this, like, Superman micro-vision, where they could take that joke and see how it could affect the entire scene, the entire episode, and the entire season.
Michael Jamin: Yeah. Right. It's like, where does that —
Phil LaMarr: — off the top of their head.
Michael Jamin: Right. And where does it go? Where does that moment go in the script — is it act one or is it act three?
Phil LaMarr: Yes. Yeah. I mean, I'm sure you have that X-ray vision too, where you can look at a script and see the act structure, or even just the structure of a scene. Like, where does this character start and where do they finish?
Michael Jamin: Yeah, that's right.
Michael Jamin: Well, we ran a show for Marc Maron for four years, and he was one of the writers on it, and he would pitch an idea — "cuz I wanna say this" — and we'd put it in act one. And I remember at one point we were talking about it and we said, Marc, I don't think this can go in act one. Is it okay if we put it in act three? And he'd say, oh, I don't care where you put it, as long as it's in the script.
Phil LaMarr: I'm just thinking about what the character would say.
Michael Jamin: Yeah. Right. I was like, oh, that's a relief. I thought you were gonna get mad. You didn't care about that. So funny.
Phil LaMarr: Right. Yeah. Just cuz, as performers, we are not looking at the act structure.
Michael Jamin: Right, right.
Phil LaMarr: Most of us. I imagine there are some people who do — like, well, I wanna build up from act two to act three. But most of us don't. We're just: what is the guy feeling in this scene right now?
Michael Jamin: Right. And how do you get to the truth of that? How difficult is it for you to make yourself vulnerable like that on stage — to go there, you know? Maybe it's crying, or whatever it is. How difficult is it for you to allow yourself to go there?
Phil LaMarr: Well, it's not necessarily easy. It's definitely a skill set I had to build. I was not one of those people, when I started acting, who could make themselves cry on cue.
Michael Jamin: Uh-huh.
Phil LaMarr: You know. But I remember I had to do a scene on a Steven Bochco show called Philly. And it's like, okay, well, this character is really emotional — I gotta figure out how to make sure I'm putting that out there. So I thought about something sad and let it — you know, something different than what the character was thinking about.
But it's, again, like with the voice acting — what sounds sad — you also have to think about your face: what looks sorrowful, and how do you make yourself look sorrowful. Right. You know, although one of the things that helped me learn where to try to go was working on Pulp Fiction with Samuel L. Jackson.
Michael Jamin: What? Go on. He gave you some great advice, or what?
Phil LaMarr: No, it's just what he showed. Because you would stand there off-set talking to this cool old guy who was amazing, you know? He's just talking about golfing or his daughter. But then, when the camera started rolling, the person you were just talking to disappeared. Right there on set, I looked over and I was looking into the eyes of someone completely different than Samuel L. Jackson. And I remember standing there in my twenties thinking, oh my God, he transformed himself internally so that it shows externally. That's like — I gotta learn how to do that.
Michael Jamin: And then how did you learn how to do that?
Phil LaMarr: Well, I still haven't gotten to his level, but what I learned is you have to figure out, one, how you look and how you get there. It's like a map. You know, if you figure out how to guide your internal self to a place where your external self does what's on the page — that's what acting is. Otherwise you would just be reading words: "To be or not to be, that is the question." It's not just about the words. It's: how do you express the feeling? And Sam taught me there is a way where you don't have to do nine minutes of prep to get into character.
Michael Jamin: Okay.
Phil LaMarr: If you know the route within yourself, you can do it like that. Right. So I realized it was about learning your internal map — where do you put your sadness? Where do you put your anger? And what's the difference between your anger and this character's anger?
Guide yourself there, and then, you know, connect the two.
Michael Jamin: And do you have moments where you feel like, I didn't do it — I didn't get there? You know?
Phil LaMarr: Well, I mean, that's the one good thing about on-camera work, and what we were talking about with the rehearsal: you can take the time to find it. But yes — there's always, you know, not every job is a home run. You're like, oh, I wish I had gone a little bit deeper with that. Right. And sometimes you feel it there; other times you don't realize it until after you see it. And maybe they picked a take that you didn't — no, that wasn't the best one, why didn't they — you know, nothing is ever perfect.
Michael Jamin: Right, right.
Phil LaMarr: You know.
Michael Jamin: But do you — like, sometimes I'll be on set and I'll watch an actor do something. Usually it's drama, or a dramatic moment. Right. And they let it all out. And after they cut, I'm always like, I wonder if they need a moment alone. You know what I'm saying? I mean, what's your take on that?
Phil LaMarr: Well, I mean, I'm not a method guy. I don't put myself into — because, yeah, you hear a lot about that. A guy's like, "Yeah, man, I had to play this character, and my girlfriend hated me for a month, because when I went home I was still part of that dude." You know? And I don't know if it's my improv and sketch background, where I take my character off like a hat.
Michael Jamin: Uh-huh.
Phil LaMarr: I don't take them home. And, you know, I try to embody it during the performance, but I don't feel it's required to have to be the character.
Michael Jamin: Right. But if you spend a whole day as a character —
Phil LaMarr: It can be draining.
Michael Jamin: Yeah. Right. It can be draining. Right.
You have to wash yourself of that — if you don't like that person, you have to wash yourself of that. Right. And how do you do that?
Phil LaMarr: Yeah. Well, I mean, that's about, you know, when you leave the set, you leave those feelings behind. Although some actors don't, but —
Michael Jamin: You've just experienced — you spent the whole day experiencing that, whatever it is. And yes, I understand you left it, but you spent the whole day angry, or mournful, or bitter, or whatever it is. Like, you still have to wash yourself of that, don't you?
Phil LaMarr: Well, but I mean, for me, I'm not fooling myself. I'm not trying to convince myself that the script and the character are real and me. 'Cause that's the thing. Like, if you spend all day with your drunken uncle who's nasty on Thanksgiving, that's not fun.
Michael Jamin: Right.
Phil LaMarr: You know, and then when you leave, you're like, ugh. You can still be upset about it — but that's because you're connected to that person. For me, it's: that is fiction. Right. I'm only connected to the fiction while performing. I don't feel like I have to be — you know, like when I play Hermes on Futurama, I don't have to speak in a Jamaican accent for the entire season.
Michael Jamin: Right.
Phil LaMarr: You know?
Michael Jamin: But are there moments — and maybe this is less so for voice acting, but when you're on camera — are there moments when you're cognizant that, oh, I'm acting now? You know, and then you have to — oh, I gotta get back.
You know, you're delivering your lines, and right in the middle of the line you realize: I'm acting.
Phil LaMarr: Well, it's interesting, because I think part of this mental philosophy I have comes from watching Sam Jackson. Because he wasn't method — he wasn't acting like Jules, you know, acting like a gangster, a man with a gun, the whole time.
Michael Jamin: Right.
Phil LaMarr: And he showed me that. And it's funny, because while he was doing that, Frank Whaley, who had worked on The Doors, was telling anecdotes about how, when Val Kilmer was playing Jim Morrison, he was the exact opposite. Right. Before they started shooting, he sent out a memo: everyone is to refer to me as Jim or Mr. Morrison.
Michael Jamin: Right.
Phil LaMarr: You know, and he had a tent on set where he would work to be in character, and he would only come on set as Jim Morrison. They never spoke to Val.
Michael Jamin: Right.
Phil LaMarr: Right. So, you know — yes, it's definitely difficult for some people, if their approach is, "No, I have to live this character."
Michael Jamin: Right. So, okay, that's not your problem — that's not something you have to worry about. Yeah, no. Interesting. I'm so interested in the actor's approach to the material, you know? Because we write it, but how do you guys do it? Because there's a difference. There really is a difference. We hear it one way, we envision it, but we can't do it. Do you know what I'm saying? We can't get it out of our heads into reality — but you can. And so I'm always like, how did you do that?
Phil LaMarr: Right. Well, it was an interesting experience, you know, from the writing-and-acting crossover.
I was developing an animated show based on a friend of mine's web comic called Goblins.
Michael Jamin: Okay.
Phil LaMarr: And my partner, Matt King, and I — we were both performers, but we adapted the comic into a script. And I called a bunch of my voice actor friends, 'cause we were gonna make a trailer, you know, to bring these comic characters to life in animation. And it was funny, 'cause Matt and I are actors — we had written the script and we'd acted out these scenes, and so, in our heads, we thought we knew exactly how they'd sound. But then we brought in the amazing Billy West, Maurice LaMarche, Jim Cummings, Steve Blum, Jennifer — and it was funny, because when they performed the scenes we had written, they took it to a whole other level, beyond what existed in our heads. Like, oh my God, they made it so much better than I even imagined it could be.
Michael Jamin: Right, right.
Phil LaMarr: And it was wild, 'cause I'd heard writers express a similar kind of thing — "Oh my gosh, you guys did something so amazing with it." But to have it, as someone who'd been a performer — to have someone take your work and do that miracle with it was an eye-opening experience. Like, ah!
Michael Jamin: But there's something else that you do. 'Cause, you know, there's a handful of voice actors who always work — you're one of them, the pros. You call 'em in, and it's knowing, especially in comedy, how to hit the joke. I mean, we always say: can they hit a joke? And knowing where the laugh falls — not just somewhere, but which word makes it funny, you know? And do you think that's your instinct? Or is that just something you've gotten better at?
Phil LaMarr: Yes, I think that's something that has grown from performing, especially in the sense of comedy.
Because I remember, you know, starting out on stage doing plays, then doing improv, which is a specific kind of comedy. 'Cause when you're doing a play, the writer has decided which moments are funny and which moments are dramatic, you know. But when you're doing improv, you and the audience are deciding what's funny. Right. And I remember coming back to LA and pursuing acting, and then starting to get work on camera and doing comedy. And I realized, huh — oh wow, I don't have an audience.
Michael Jamin: Yes.
Phil LaMarr: You have to create a gauge in your head for: is this funny? Because when you're on stage and you're doing a funny bit, you can feel from the audience whether, oh, I need to push that up a little bit.
Michael Jamin: Right.
Phil LaMarr: But when you're working on camera, the crew is not allowed to laugh out loud.
Michael Jamin: Right.
Phil LaMarr: You know, so you have to create an audience inside you — an internal audience in your head — to help with, you know, the timing of this.
Michael Jamin: Right.
Phil LaMarr: And it's funny, because I developed that, and a couple of years into it, I remember I got a job working on NYPD Blue.
Michael Jamin: Uh-huh.
Phil LaMarr: Playing a guy who was being questioned — interrogated — in the police station, and then gets roughed up by Ricky Schroder.
Michael Jamin: Uh-huh.
Phil LaMarr: But the lines — because this guy's on drugs — I remember, like, oh wow, I gotta be careful, this could be funny. 'Cause Ricky Schroder, you know, sees blood on his clothes: "Take your clothes off." And the guy: "Take my clothes? What you wanna do? You ain't gonna put no boom on my ass." Right. And I remembered: I have to gauge the funny way to do this and not do that.
Michael Jamin: Yes. Right, right.
Phil LaMarr: You know, and I realized: no, no —
Pull back the tempo and lean into the anger, not the outrage.
Michael Jamin: Right. Right.
Phil LaMarr: And then it'll be dramatic, not comedy.
Michael Jamin: Again, here you are approaching it really from the craft. When I hear people say, "I want to be an actor" — okay, take it seriously. Are you gonna study? Or do you just wanna be famous? What is it you want? You know?
Phil LaMarr: Right.
Michael Jamin: Well, let's talk about that for a second. What's your relationship with fame? How do you, you know —?
Phil LaMarr: Well, that's a very interesting thing, because I feel like that has changed between generations. Like, when we were growing up, pre-internet —
Michael Jamin: Mm-hmm.
Phil LaMarr: Fame only applied to stars.
Michael Jamin: Yeah. Right.
Phil LaMarr: I mean, nobody knew voice actors — the only voice actor anybody knew was Mel Blanc.
Michael Jamin: Right.
Phil LaMarr: You know, people to this day still don't know what Daws Butler looks like. Right. But now, anybody who appears on anything, even a YouTuber —
Michael Jamin: Right.
Phil LaMarr: — has some level of fame. Right. You know, and it's wild, because, because of the internet, it now matters what you say. In the old days, if you were a television character actor — like, if you were Richard Mulligan —
Michael Jamin: Yeah.
Phil LaMarr: Nobody was ever gonna post what you said about something.
Michael Jamin: Right.
Phil LaMarr: It was only if you were Joan Crawford. Right. Or —