Podcasts about chinchillas

Rodent genus

  • 443 PODCASTS
  • 636 EPISODES
  • 39m AVG DURATION
  • 1 WEEKLY EPISODE
  • LATEST: May 7, 2025

POPULARITY (2017-2024)


Best podcasts about chinchillas

Latest podcast episodes about chinchillas

Machine Learning Guide
MLG 034 Large Language Models 1

Machine Learning Guide

Play Episode Listen Later May 7, 2025 50:48


Explains advancements in large language models (LLMs): scaling laws (the relationships among model size, data size, and compute) and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed; the evolution of the transformer architecture with Mixture of Experts (MoE); the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment; and advanced reasoning techniques such as chain-of-thought prompting, which significantly improve performance on complex tasks.

Links: notes and resources at ocdevel.com/mlg/mlg34. Build the future of multi-agent software with AGNTCY. Try a walking desk to stay healthy & sharp while you learn & code.

Transformer Foundations and Scaling Laws
• Transformers: introduced by the 2017 "Attention Is All You Need" paper, transformers allow parallel training and inference over sequences using self-attention, in contrast to the sequential nature of RNNs.
• Scaling laws: empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately. The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models trained on more data (e.g., Chinchilla, the LLaMA series) proved more compute- and inference-efficient.

Emergent Abilities in LLMs
• Emergence: when trained beyond a certain scale, LLMs display abilities not present in smaller models, including:
• In-context learning (ICL): performing new tasks based solely on prompt examples at inference time.
• Instruction following: executing natural-language tasks not seen during training.
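The Chinchilla scaling law mentioned above lends itself to a quick back-of-the-envelope calculation. A minimal, illustrative sketch (not from the episode), assuming the commonly cited heuristic of roughly 20 training tokens per model parameter and the standard estimate of training compute C ≈ 6·N·D FLOPs:

```python
# Chinchilla-style compute-optimal sizing, back-of-the-envelope.
# Heuristic: train on roughly 20 tokens per parameter; training
# compute is approximately C = 6 * N * D FLOPs.

def optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal number of training tokens."""
    return tokens_per_param * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard 6*N*D estimate of total training compute in FLOPs."""
    return 6.0 * n_params * n_tokens

# GPT-3 scale: 175B parameters trained on ~300B tokens.
gpt3_params, gpt3_tokens = 175e9, 300e9
print(optimal_tokens(gpt3_params) / 1e12)  # ~3.5T tokens would be "optimal"
print(gpt3_tokens / gpt3_params)           # ~1.7 tokens/param: undertrained

# Chinchilla itself: 70B parameters on 1.4T tokens = 20 tokens/param.
print(1.4e12 / 70e9)  # 20.0
```

Under this heuristic, GPT-3's ~1.7 tokens per parameter is far below the ~20 that Chinchilla suggests, which is the sense in which it was undertrained for its size.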
• Multi-step reasoning and chain of thought (CoT): solving arithmetic, logic, or symbolic-reasoning problems by generating intermediate reasoning steps.
• Discontinuity and debate: these abilities appear abruptly in larger models, though recent research suggests this could result from non-linearities in evaluation metrics rather than innate model properties.

Architectural Evolutions: Mixture of Experts (MoE)
• MoE layers: modern LLMs often replace standard feed-forward layers with MoE structures composed of many independent "expert" networks specializing in different subdomains or latent structures. A gating network routes each token to the most relevant experts, activating only a subset of parameters ("sparse activation"). This enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges such as load balancing and communication overhead.
• Specialization and efficiency: experts learn different data and knowledge types, boosting specialization and throughput, though care is needed to avoid overfitting and underutilization of experts.

The Three-Phase Training Process
1. Unsupervised pre-training: next-token prediction on massive datasets builds a foundation model capturing general language patterns.
2. Supervised fine-tuning (SFT): training on labeled prompt-response pairs teaches the model to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed.
3. Reinforcement Learning from Human Feedback (RLHF): collects human preference data by generating multiple responses to prompts and having annotators rank them, builds a reward model from these rankings, then updates the LLM (often via PPO) to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness).
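The reward model at the heart of RLHF is typically fit on ranked response pairs with a pairwise (Bradley-Terry) objective. A minimal sketch, with a hypothetical linear reward model standing in for the real network (all names and sizes here are illustrative, not from the episode):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def preference_loss(r_chosen, r_rejected):
    """Pairwise (Bradley-Terry) loss for a reward model:
    -log P(chosen beats rejected) = -log sigmoid(r_chosen - r_rejected)."""
    return -np.log(sigmoid(r_chosen - r_rejected))

# Hypothetical toy reward model: a linear score over 4-dim response features.
rng = np.random.default_rng(0)
w = rng.normal(size=4)

def reward(features):
    return w @ features

chosen, rejected = rng.normal(size=4), rng.normal(size=4)
loss = preference_loss(reward(chosen), reward(rejected))
print(loss >= 0)  # True: the loss is always non-negative
```

Minimizing this loss pushes the reward model to score annotator-preferred responses higher; the LLM is then updated against that learned reward.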
RLHF introduces complexity and the risk of reward hacking (specification gaming), where the model may exploit the reward system in unanticipated ways.

Advanced Reasoning Techniques
• Prompt engineering: the art and science of crafting prompts that elicit better model responses, shown to dramatically affect output quality.
• Chain-of-thought (CoT) prompting: guides models to elaborate step-by-step reasoning before arriving at a final answer, demonstrably improving results on complex tasks. Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (exploring multiple reasoning branches in parallel).
• Automated reasoning optimization: frontier models selectively apply these advanced techniques, balancing compute costs against gains in accuracy and transparency.

Optimization for Training and Inference
• Tradeoffs: the optimal balance between model size, data, and compute is determined not only for pre-training but also for inference efficiency, since lifetime inference costs may exceed initial training costs.
• Current trends: efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.

Menudo Castillo
Desde Dos Barrios, una charla... diferente y con sorpresa

Menudo Castillo

Play Episode Listen Later May 2, 2025 40:07


Wrapping up the En-tres-lazadas project. I don't know if you're aware, but rural public libraries are always a marvel. Those of Chapinería, Chinchilla, Chinchón and Dos Barrios spent three years running a fantastic project aimed at bringing the best literature to their neighbors and visitors. With three different strands (women, the elderly and children), the project has been wonderful. The microphones of Menudo Castillo dropped in on this venture a couple of times, and we attended the last of its sessions, held this past April 26 in Dos Barrios. There, Santiago García-Clairac, Jorge Gómez Soto and I (Javier Fernández) got together to throw a little party of radio and children's literature. The sound is pretty bad, truth be told, but I think you'll enjoy some curious bits if you listen to this little radio segment.

Radio Dénia
Avelina Chinchilla: “Todo lo que escribo ayuda a pensar, me interesan más las motivaciones que las acciones”

Radio Dénia

Play Episode Listen Later Apr 18, 2025 22:02


The Alicante author presents the third edition of La luna en agosto, a short novel that explores self-knowledge, personal relationships and motherhood from an intimate, realistic perspective

Social Suplex Podcast Network
Tunnel Talk #203 - A Soft Little Chinchilla Bathing in Dust

Social Suplex Podcast Network

Play Episode Listen Later Apr 13, 2025 109:59


Dynasty may have brought your girls down, but Dynamite this week brought them WAY WAY UP. We may be the only people on the entire internet who felt that way, but we're bold truth tellers. We talk through Swerve's loss, the Young Bucks' return and what it might mean for the Death Riders' storyline, and what we want from pro wrestling overall. Plus, we're not animals: we talk about the Young Bucks' outfits, Kenny and the cuck chair, "dookie", Anthony Bowens's insane nipple shirt, and all the other events of the week. Enjoy!
(00:00) Chitchat Time and News
(12:18) Dynasty and Dynamite Vibe Overview
(16:06) Death Riders, CopeTR, the Elite
(1:08:06) Don Callis Family
(1:13:15) Ospreay, Speedball, etc
(1:19:17) Toni Storm and Megan Bayne
(1:24:39) Hurt People and MJF
(1:27:39) Chris Jericho, Bandido, and the Learning Tree
(1:42:35) Max Caster
(1:46:51) Adam Cole and the Paragon
Support this podcast at https://redcircle.com/social-suplex-podcast-network/donations
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy

Mining Stock Daily
Ridgeline Minerals and South32 Tee Up a Fresh Round of Drilling at Selena

Mining Stock Daily

Play Episode Listen Later Apr 10, 2025 10:30


Chad Peters, CEO of Ridgeline Minerals, discusses the company's recent corporate update on the Selena project and its exploration budget approval with partner South32. He highlights the drilling and the significance of the Chinchilla sulfide target area. The conversation also touches on the potential for critical minerals and updates on other projects like Big Blue and Atlas, emphasizing the company's largest budget in history and the excitement surrounding their exploration efforts.

WDR Hörspiel-Speicher
Chinchilla Arschloch, waswas - Roadtrip mit Tourette

WDR Hörspiel-Speicher

Play Episode Listen Later Mar 29, 2025 54:22


•Docu-performance• A road trip that packs a punch: a radio drama about the journey of Christian, a father living with Tourette syndrome, and his daughter Phillis through Germany in a camper van. By Helgard Haug and Thilo Guschas. WDR 2018. www.wdr.de/k/hoerspiel-newsletter

Kirsty and Briony's Comfort Zone
Dreams of Chinchillas

Kirsty and Briony's Comfort Zone

Play Episode Listen Later Mar 29, 2025 59:12


Old, big, dangly dog lumps and Briony gets a "business job" Music courtesy of Epidemic Sound Learn more about your ad choices. Visit podcastchoices.com/adchoices

Radio Albacete
Chinchilla celebra sus tradicionales Miércoles... Una semana después

Radio Albacete

Play Episode Listen Later Mar 12, 2025 5:05


The town has filled with figures and messages in this popular theatrical tradition, celebrated seven days after Ash Wednesday because of last week's rain, as the mayor of Chinchilla, Francisco Morote, told us

How many geese?
In softness no one can hear you scream

How many geese?

Play Episode Listen Later Mar 4, 2025 63:31


BOOM! Today's episode is blowing biology wide open as we take a look at exploding animals and why some species have evolved the most hardcore defensive behaviours on Planet Earth. Our journey into the world of feathers continues as we hop in the goose time machine to look back to their origins and see what the dinosaurs were dressed in. Then it's time for our fight segment where we welcome Patreon member Nellie into the ring to help Roddy tackle the fluffiest combatant that's ever stepped into the arena - Chinchillas. And, after all that excitement, we blow off some steam at a party with the animals as we answer the question - which animal would be the best at fancy dress? Get your digital window to the natural world here with Green Feathers - https://www.green-feathers.co.uk Need more geese? Check out our Patreon! Come join the flock for extra episodes - https://www.patreon.com/howmanygeese

Freies Radio Neumünster
Das Umweltmagazin in unserer Audiothek: Wie kann der Großflecken zum Wohnzimmer werden?

Freies Radio Neumünster

Play Episode Listen Later Mar 4, 2025 60:02


Do you feel the same way? Many people don't feel quite comfortable on the Großflecken. There's little greenery, hardly any shade, and it's loud. And crossing the street as a pedestrian can become an adventure, especially for people with disabilities. How could the Großflecken become our city's living room, a place where residents like to spend time? The Verkehrsberuhigter Großflecken initiative has some ideas, which will be presented in the next Umweltmagazin. We also visit the zoo and learn about endangered chinchillas. In addition, the VCD Neumünster comments on the planned cycle route to Wittorf.

Joey and Lauren in the Morning
Make Up or Break Up - They're Chinchillas, Ok?!

Joey and Lauren in the Morning

Play Episode Listen Later Feb 27, 2025 10:51


There is a hamster, or, I mean, chinchilla problem on today's Make Up or Break Up! Leave a rating and review wherever you listen, it helps us out a lot! Also follow us on social @joeyandlaurenshow. Hosted on Acast. See acast.com/privacy for more information.

Rise and Shine with Robbo & Becci
Rise & Shine - Brian from Chinchilla Prays For a Cowboy 26 February 2025

Rise and Shine with Robbo & Becci

Play Episode Listen Later Feb 26, 2025 3:00


Today on Rise and Shine: great story of Brian from Chinchilla QLD praying for a cowboy in the USA. Your support sends the gospel to every corner of Australia through broadcast, online and print media: https://www.vision.org.au/donate. See omnystudio.com/listener for privacy information.

Mining Stock Daily
Ridgeline Minerals MT Survey at Selena Confirms Large Anomaly at the Chinchilla Sulfide Target

Mining Stock Daily

Play Episode Listen Later Feb 25, 2025 14:59


Chad Peters from Ridgeline Minerals discusses the significant developments at the Selena project, particularly the Chinchilla sulfide target. With the partnership with South32, new data from an MT survey has opened up exciting opportunities for exploration. The conversation delves into the scale of the Chinchilla sulfide target, the structural controls influencing mineralization, and the next steps for drilling. Additionally, Peters highlights the company's new Atlas project and the financial backing that will support ongoing exploration efforts.

Mining Stock Daily
Morning Briefing: Collective Mining Returns 106.35 metres at 9.05 g/t AuEq from Apollo

Mining Stock Daily

Play Episode Listen Later Feb 25, 2025 8:41


Lots of drill results out this morning. MSD reports the latest from Collective Mining, Arizona Sonoran Copper, Rua Gold and Southern Cross Gold. Ridgeline Minerals received MT geophysical data from Selena where the Chinchilla sulfide zone shows a large anomaly. This episode of Mining Stock Daily is brought to you by... Vizsla Silver is focused on becoming one of the world's largest single-asset silver producers through the exploration and development of the 100% owned Panuco-Copala silver-gold district in Sinaloa, Mexico. The company consolidated this historic district in 2019 and has now completed over 325,000 meters of drilling. The company has the world's largest, undeveloped high-grade silver resource. Learn more at https://vizslasilvercorp.com/. Calibre Mining is a Canadian-listed, Americas-focused, growing mid-tier gold producer with a strong pipeline of development and exploration opportunities across Newfoundland & Labrador in Canada, Nevada and Washington in the USA, and Nicaragua. With a strong balance sheet, a proven management team, strong operating cash flow, accretive development projects and district-scale exploration opportunities, Calibre will unlock significant value. https://www.calibremining.com/

thinkfuture with kalaboukis
1059 The Truth About AI, Privacy, and the Future of Your Data

thinkfuture with kalaboukis

Play Episode Listen Later Feb 5, 2025 32:51


In this episode of ThinkFuture, host Chris Kalaboukis sits down with AI expert Aman to break down the reality of large language models (LLMs) and what they mean for data privacy. They discuss how LLMs, despite their impressive reasoning abilities, are just sophisticated "next word prediction" systems trained on vast amounts of online content, often without permission or compensation. The conversation dives into the thorny issues of data ownership, with Aman highlighting how while individuals may not care much about their personal data being used, corporations and regulators are growing increasingly wary of AI's privacy implications. They explore whether content creators should be compensated for AI-generated outputs based on their work and the evolving legal landscape. Chris and Aman also compare different AI models, including GPT-3, Chinchilla, and Claude, breaking down their strengths, weaknesses, and how newer models are pushing past their predecessors. Looking ahead to 2035, Aman envisions personal AI privacy agents that will help users control their digital footprint while ensuring they get credited and paid for their contributions to AI training datasets. But will privacy ever truly be protected in the age of AI? Tune in to find out!
---
The First Future Planner: Record First, Action Later: https://foremark.us
Be A Better YOU with AI: Join The Community: https://10xyou.us
Get AIDAILY every weekday: https://aidaily.us
My blog: https://thinkfuture.com

Radio 90 Motilla
Hoy por Hoy CLM desde Chinchilla de Montearagón (05/02/2025)

Radio 90 Motilla

Play Episode Listen Later Feb 5, 2025 100:00


Hoy por Hoy Castilla-La Mancha

Cadena SER Castilla-La Mancha
Hoy por Hoy CLM desde Chinchilla de Montearagón (05/02/2025)

Cadena SER Castilla-La Mancha

Play Episode Listen Later Feb 5, 2025 100:00


Hoy por Hoy Castilla-La Mancha

As Goes Wisconsin
Why Doesn’t Jane Want A Chinchilla Anymore? (Hour 2)

As Goes Wisconsin

Play Episode Listen Later Jan 20, 2025 45:07


We're gonna lighten things up in the second hour and to do that, we welcome back to the show UW Madison Professor David Drake to talk about non-traditional pets and why it isn't the best idea to keep them. While he's here, we also discuss cave bats!! Then, what is on your "No-Buy" list to help you save money and pay off those bills? And because it's Monday, we give you This Shouldn't Be A Thing - Not Dead Yet Edition. As always, thank you for listening, texting and calling, we couldn't do this without you! Don't forget to download the free Civic Media app and take us wherever you are in the world! Matenaer On Air is a part of the Civic Media radio network and airs Monday through Friday from 10 am - noon across the state. Subscribe to the podcast to be sure not to miss out on a single episode! You can also rate us on your podcast distribution center of choice, they go a long way! To learn more about the show and all of the programming across the Civic Media network, head over to https://civicmedia.us/shows to see the entire broadcast line up. Follow the show on Facebook, X and YouTube to keep up with Jane and the show! Guest: David Drake

Conclusiones
Expresidenta Chinchilla habla sobre el futuro de la oposición en Venezuela

Conclusiones

Play Episode Listen Later Jan 11, 2025 51:43


Former Costa Rican President Laura Chinchilla spoke on Conclusiones about what Venezuela's opposition cannot neglect if it is to keep pursuing its plan for a political transition in the country. She says one achievement has been mobilizing citizens, but the relationship with the armed forces remains an unfinished task. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Conclusiones
¿El régimen de Maduro mantiene el apoyo de Rusia, China e Irán?

Conclusiones

Play Episode Listen Later Jan 8, 2025 49:20


In an interview with Fernando del Rincón, former Costa Rican President Laura Chinchilla analyzed the international backing for Nicolás Maduro's regime in Venezuela. Is it isolated, or does it still count on the support of China, Russia and Iran? According to Chinchilla, the images of Bashar al-Assad's fall must weigh heavily on Nicolás Maduro, "because neither Russia, nor Iran, nor China came to his aid." Learn more about your ad choices. Visit podcastchoices.com/adchoices

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Applications for the 2025 AI Engineer Summit are up, and you can save the date for AIE Singapore in April and AIE World's Fair 2025 in June. Happy new year, and thanks for 100 great episodes! Please let us know what you want to see/hear for the next 100! Full YouTube episode with slides/charts. Like and subscribe and hit that bell to get notifs!
Timestamps
* 00:00 Welcome to the 100th Episode!
* 00:19 Reflecting on the Journey
* 00:47 AI Engineering: The Rise and Impact
* 03:15 Latent Space Live and AI Conferences
* 09:44 The Competitive AI Landscape
* 21:45 Synthetic Data and Future Trends
* 35:53 Creative Writing with AI
* 36:12 Legal and Ethical Issues in AI
* 38:18 The Data War: GPU Poor vs. GPU Rich
* 39:12 The Rise of GPU Ultra Rich
* 40:47 Emerging Trends in AI Models
* 45:31 The Multi-Modality War
* 01:05:31 The Future of AI Benchmarks
* 01:13:17 Pionote and Frontier Models
* 01:13:47 Niche Models and Base Models
* 01:14:30 State Space Models and RWKV
* 01:15:48 Inference Race and Price Wars
* 01:22:16 Major AI Themes of the Year
* 01:22:48 AI Rewind: January to March
* 01:26:42 AI Rewind: April to June
* 01:33:12 AI Rewind: July to September
* 01:34:59 AI Rewind: October to December
* 01:39:53 Year-End Reflections and Predictions
Transcript [00:00:00] Welcome to the 100th Episode! [00:00:00] Alessio: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx for the 100th time today.[00:00:12] swyx: Yay, um, and we're so glad that, yeah, you know, everyone has, uh, followed us in this journey. How do you feel about it? 100 episodes.[00:00:19] Alessio: Yeah, I know.[00:00:19] Reflecting on the Journey[00:00:19] Alessio: Almost two years that we've been doing this. We've had four different studios. Uh, we've had a lot of changes. You know, we used to do this lightning round. When we first started that we didn't like, and we tried to change the question. 
The answer[00:00:32] swyx: was Cursor and Perplexity.[00:00:34] Alessio: Yeah, I love Midjourney. It's like, do you really not like anything else?[00:00:38] Alessio: Like what's, what's the unique thing? And I think, yeah, we, we've also had a lot more research driven content. You know, we had like Tri Dao, we had, you know, Jeremy Howard, we had more folks like that.[00:00:47] AI Engineering: The Rise and Impact[00:00:47] Alessio: I think we want to do more of that too in the new year, like having, uh, some of the Gemini folks, both on the research and the applied side.[00:00:54] Alessio: Yeah, but it's been a ton of fun. I think we both started, I wouldn't say as a joke, we were kind of like, Oh, we [00:01:00] should do a podcast. And I think we kind of caught the right wave, obviously. And I think your Rise of the AI Engineer post just kind of got people, somehow, to congregate, and then the AI Engineer Summit.
But there's enough people that have defined it, you know, GitHub, when [00:02:00] they launched GitHub Models, which is the Hugging Face clone, they put AI engineers in the banner, like, above the fold, like, in big So I think it's like kind of arrived as a meaningful and useful definition.[00:02:12] swyx: I think people are trying to figure out where the boundaries are. I think that was a lot of the quote unquote drama that happens behind the scenes at the World's Fair in June. Because I think there's a lot of doubt or questions about where ML engineering stops and AI engineering starts. That's a useful debate to be had.[00:02:29] swyx: In some sense, I actually anticipated that as well. So I intentionally did not. Put a firm definition there because most of the successful definitions are necessarily underspecified and it's actually useful to have different perspectives and you don't have to specify everything from the outset.[00:02:45] Alessio: Yeah, I was at um, AWS reInvent and the line to get into like the AI engineering talk, so to speak, which is, you know, applied AI and whatnot was like, there are like hundreds of people just in line to go in.[00:02:56] Alessio: I think that's kind of what enabled me. People, right? Which is what [00:03:00] you kind of talked about. It's like, Hey, look, you don't actually need a PhD, just, yeah, just use the model. And then maybe we'll talk about some of the blind spots that you get as an engineer with the earlier posts that we also had on on the sub stack.[00:03:11] Alessio: But yeah, it's been a heck of a heck of a two years.[00:03:14] swyx: Yeah.[00:03:15] Latent Space Live and AI Conferences[00:03:15] swyx: You know, I was, I was trying to view the conference as like, so NeurIPS is I think like 16, 17, 000 people. And the Latent Space Live event that we held there was 950 signups. I think. The AI world, the ML world is still very much research heavy. 
And that's as it should be because ML is very much in a research phase.[00:03:34] swyx: But as we move this entire field into production, I think that ratio inverts into becoming more engineering heavy. So at least I think engineering should be on the same level, even if it's never as prestigious, like it'll always be low status because at the end of the day, you're manipulating APIs or whatever.[00:03:51] swyx: But Yeah, wrapping GPTs, but there's going to be an increasing stack and an art to doing these, these things well. And I, you know, I [00:04:00] think that's what we're focusing on for the podcast, the conference and basically everything I do seems to make sense. And I think we'll, we'll talk about the trends here that apply.[00:04:09] swyx: It's, it's just very strange. So, like, there's a mix of, like, keeping on top of research while not being a researcher and then putting that research into production. So, like, people always ask me, like, why are you covering NeurIPS? Like, this is an ML research conference and I'm like, well, yeah, I mean, we're not going to, to like, understand everything Or reproduce every single paper, but the stuff that is being found here is going to make it through into production at some point, you hope.[00:04:32] swyx: And then actually like when I talk to the researchers, they actually get very excited because they're like, oh, you guys are actually caring about how this goes into production and that's what they really really want. The measure of success is previously just peer review, right? Getting 7s and 8s on their um, Academic review conferences and stuff like citations is one metric, but money is a better metric.[00:04:51] Alessio: Money is a better metric. Yeah, and there were about 2200 people on the live stream or something like that. Yeah, yeah. Hundred on the live stream. So [00:05:00] I try my best to moderate, but it was a lot spicier in person with Jonathan and, and Dylan. 
Yeah, that it was in the chat on YouTube.[00:05:06] swyx: I would say that I actually also created.[00:05:09] swyx: Latent Space Live in order to address flaws that are perceived in academic conferences. This is not NeurIPS specific, it's ICML, NeurIPS. Basically, it's very sort of oriented towards the PhD student, uh, market, job market, right? Like literally all, basically everyone's there to advertise their research and skills and get jobs.[00:05:28] swyx: And then obviously all the, the companies go there to hire them. And I think that's great for the individual researchers, but for people going there to get info is not great because you have to read between the lines, bring a ton of context in order to understand every single paper. So what is missing is effectively what I ended up doing, which is domain by domain, go through and recap the best of the year.[00:05:48] swyx: Survey the field. And there are, like NeurIPS had a, uh, I think ICML had a like a position paper track, NeurIPS added a benchmarks, uh, datasets track. These are ways in which to address that [00:06:00] issue. Uh, there's always workshops as well. Every, every conference has, you know, a last day of workshops and stuff that provide more of an overview.[00:06:06] swyx: But they're not specifically prompted to do so. And I think really, uh, Organizing a conference is just about getting good speakers and giving them the correct prompts. And then they will just go and do that thing and they do a very good job of it. So I think Sarah did a fantastic job with the startups prompt.[00:06:21] swyx: I can't list everybody, but we did best of 2024 in startups, vision, open models. Post transformers, synthetic data, small models, and agents. And then the last one was the, uh, and then we also did a quick one on reasoning with Nathan Lambert. And then the last one, obviously, was the debate that people were very hyped about.[00:06:39] swyx: It was very awkward. 
And I'm really, really thankful for Jonathan Frankle, basically, who stepped up to challenge Dylan. Because Dylan was like, yeah, I'll do it. But he was pro scaling. And I think everyone who is like in AI is pro scaling, right? So you need somebody who's ready to publicly say, no, we've hit a wall.[00:06:57] swyx: So that means you're saying Sam Altman's wrong. [00:07:00] You're saying, um, you know, everyone else is wrong. It helps that this was the day before Ilya went on, went up on stage and then said pre training has hit a wall. And data has hit a wall. So actually Jonathan ended up winning, and then Ilya supported that statement, and then Noam Brown on the last day further supported that statement as well.[00:07:17] swyx: So it's kind of interesting that I think the consensus kind of going in was that we're not done scaling, like you should believe in the bitter lesson. And then, four straight days in a row, you had Sepp Hochreiter, who is the creator of the LSTM, along with everyone's favorite OG in AI, which is Juergen Schmidhuber.[00:07:34] swyx: He said that, um, pre-training has hit a wall, or like, we've run into a different kind of wall. And then we have, you know, Jonathan Frankle, Ilya, and then Noam Brown all saying variations of the same thing, that we have hit some kind of wall in the status quo of what scaling large pre trained models has looked like, and we need a new thing.[00:07:54] swyx: And obviously the new thing for people is, um, either people are calling it inference time compute or test time [00:08:00] compute. I think the collective terminology has been inference time, and I think that makes sense because test time, calling it test, has a very pre trained bias, meaning that the only reason for running inference at all is to test your model.[00:08:11] swyx: That is not true. Right. Yeah. So, so, I quite agree that. 
OpenAI seems to have adopted, or the community seems to have adopted this terminology of ITC instead of TTC. And that, that makes a lot of sense because like now we care about inference, even right down to compute optimality. Like I actually interviewed this author who recovered or reviewed the Chinchilla paper.[00:08:31] swyx: Chinchilla paper is compute optimal training, but what is not stated in there is it's pre trained compute optimal training. And once you start caring about inference, compute optimal training, you have a different scaling law. And in a way that we did not know last year.[00:08:45] Alessio: I wonder, because John is, he's also on the side of attention is all you need.[00:08:49] Alessio: Like he had the bet with Sasha. So I'm curious, like he doesn't believe in scaling, but he thinks the transformer, I wonder if he's still. So, so,[00:08:56] swyx: so he, obviously everything is nuanced and you know, I told him to play a character [00:09:00] for this debate, right? So he actually does. Yeah. He still, he still believes that we can scale more.[00:09:04] swyx: Uh, he just assumed the character to be very game for, for playing this debate. So even more kudos to him that he assumed a position that he didn't believe in and still won the debate.[00:09:16] Alessio: Get rekt, Dylan. Um, do you just want to quickly run through some of these things? Like, uh, Sarah's presentation, just the highlights.[00:09:24] swyx: Yeah, we can't go through everyone's slides, but I pulled out some things as a factor of, like, stuff that we were going to talk about. And we'll[00:09:30] Alessio: publish[00:09:31] swyx: the rest. Yeah, we'll publish on this feed the best of 2024 in those domains. And hopefully people can benefit from the work that our speakers have done.[00:09:39] swyx: But I think it's, uh, these are just good slides. 
And I've been looking for sort of end-of-year recaps from people.[00:09:44] The Competitive AI Landscape[00:09:44] swyx: The field has progressed a lot. You know, I think the max ELO in 2023 on LMSys used to be 1200. And now everyone is at least at 1275 in their ELOs, and this is across Gemini, ChatGPT, [00:10:00] Grok, [00:10:01] swyx: 01.ai with their Yi-Large model, and Anthropic, of course. It's a very, very competitive race. There are multiple frontier labs all racing, but there is a clear tier-zero frontier, and then there's like a tier one, which is everyone else. Tier zero is extremely competitive. It's effectively now a three-horse race between Gemini, uh, Anthropic, and OpenAI.[00:10:21] swyx: I would say that people are still holding out a candle for xAI. xAI, I think, for some reason, because their API was very slow to roll out, is not included in these metrics. So it's actually quite hard to put on there. As someone who also does charts, xAI is continually snubbed because they don't work well with the benchmarking people.[00:10:42] swyx: Yeah, yeah, yeah. It's a little trivia for why xAI always gets ignored. The other thing is market share. So these are slides from Sarah. We have it up on the screen. It has gone from very heavily OpenAI. So we have some numbers and estimates. These are from Ramp, estimates of OpenAI market share in [00:11:00] December 2023.[00:11:01] swyx: And this is basically, what is it, GPT being 95 percent of production traffic. And I think if you correlate that with stuff that we asked Harrison Chase on the LangChain episode, it was true. And then Claude 3 launched in the middle of this year. I think Claude 3 launched in March, Claude 3.5 Sonnet was in June-ish.[00:11:23] swyx: And you can start seeing the market share shift towards, uh, towards Anthropic very, very aggressively. The more recent one is Gemini.
So if I scroll down a little bit, this is an even more recent dataset. So Ramp's dataset ends in September 2024. Gemini has basically launched a price war at the low end, uh, with Gemini Flash, uh, being basically free for personal use.[00:11:44] swyx: Like, I think people don't understand the free tier. It's something like a billion tokens per day. Unless you're trying to abuse it, you cannot really exhaust your free tier on Gemini. They're really trying to get you to use it. They know they're in like third place, um, fourth place, depending how you count.[00:11:58] swyx: And so they're going after [00:12:00] the lower tier first, and then, you know, maybe the upper tier later, but yeah, Gemini Flash, according to OpenRouter, is now 50 percent of their OpenRouter requests. Obviously, these are the small requests. These are small, cheap requests that are mathematically going to be more numerous.[00:12:15] swyx: The smart ones obviously are still going to OpenAI. But, you know, it's a very, very big shift in the market. Like basically 2023, 2022, going into 2024, OpenAI has gone from 95 market share to, yeah, reasonably somewhere between 50 to 75 market share.[00:12:29] Alessio: Yeah. I'm really curious how Ramp does the attribution to the model.[00:12:32] Alessio: If it's API, because I think it's all credit card spend. Well, but the credit card doesn't say which model. Maybe when they do expenses, they upload the PDF, but yeah, the Gemini thing I think makes sense. I think that was one of my main 2024 takeaways: that, like, the best small model companies are the large labs, which is not something I would have thought. I would have thought the open source, kind of like, long tail would be the small models.[00:12:53] swyx: Yeah, different sizes of small models we're talking about here, right? Like, so small model here for Gemini is 8B, [00:13:00] right? Uh, mini.
We don't know what the small model size is, but yeah, it's probably in the double digits, or maybe single digits, but probably double digits. The open source community has kind of focused on the 1 to 3B size.[00:13:11] swyx: Mm-hmm. Yeah. Maybe[00:13:12] swyx: zero, maybe 0.5B, uh, that's Moondream, and if that is small for you, then that's great. It makes sense that we have a range for small now, which is like maybe 1 to 5B. Yeah. I'll even put that at the high end. And so this includes Gemma from Google as well, but also includes the Apple Foundation models, which I think Apple Foundation is 3B.[00:13:32] Alessio: Yeah. No, that's great. I mean, I think at the start, small just meant cheap. I think today small is actually a more nuanced discussion, you know, that people weren't really having before.[00:13:43] swyx: Yeah, we can keep going. This is a slide that I slightly disagree with Sarah on. She's pointing to the Scale SEAL leaderboard. I think the researchers that I talked with at NeurIPS were kind of positive on this, because basically you need private test [00:14:00] sets to prevent contamination.[00:14:02] swyx: And Scale is one of maybe three or four people this year that has really made an effort in doing a credible private test set leaderboard. Llama 405B does well compared to Gemini and GPT-4o. And I think that's good. I would say that, you know, it's good to have an open model that is that big, that does well on those metrics.[00:14:23] swyx: But anyone putting 405B in production will tell you, if you scroll down a little bit to the Artificial Analysis numbers, that it is very slow and very expensive to infer. Um, it doesn't even fit on one node of, uh, of H100s. Cerebras will be happy to tell you they can serve 405B on their super large chips.[00:14:42] swyx: But, um, you know, if you need to do anything custom to it, you're still kind of constrained. So, is 405B really that relevant?
Like, I think most people are basically saying that they only use 405B as a teacher model to distill down to something. Even Meta is doing it. So with Llama 3.3 [00:15:00] launched, they only launched the 70B, because they used 405B to distill the 70B.[00:15:03] swyx: So I don't know if, like, open source is keeping up. I think the open source industrial complex is very invested in telling you that the gap is narrowing. I kind of disagree. I think that the gap is widening with O1. I think there are very, very smart people trying to narrow that gap, and they should.[00:15:22] swyx: I really wish them success, but you cannot use a chart that is nearing 100, a saturating chart, and say, look, the distance between open source and closed source is narrowing. Of course it's going to narrow, because you're near 100. This is stupid. But in metrics that matter, is open source narrowing?[00:15:38] swyx: Probably not for O1 for a while. And it's really up to the open source guys to figure out if they can match O1 or not.[00:15:46] Alessio: I think inference-time compute is bad for open source, just because, you know, Zuck can donate the flops at training time, but he cannot donate the flops at inference time. So it's really hard to actually keep up on that axis.[00:15:59] Alessio: Big, big business [00:16:00] model shift. So I don't know what that means for the GPU clouds. I don't know what that means for the hyperscalers, but obviously the big labs have a lot of advantage. Because, like, it's not a static artifact that you're putting the compute into. You're kind of doing that still, but then you're putting a lot of compute at inference too.[00:16:17] swyx: Yeah, yeah, yeah. Um, I mean, Llama 4 will be reasoning oriented. We talked with Thomas Scialom. Um, kudos for getting that episode together. That was really nice. Good, well timed. Actually, I connected with the Meta AI guy, uh, at NeurIPS, and, um, yeah, we're going to coordinate something for Llama 4.
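The teacher-to-student pattern mentioned here (a big model like 405B producing soft targets for a smaller one) is, at its core, a KL divergence on temperature-softened output distributions. A minimal dependency-free sketch of that classic distillation loss; this is the textbook objective, not Meta's actual Llama 3.3 recipe:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The student is trained to match the teacher's full output
    distribution, not just its argmax token.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero loss; divergent logits give a positive loss.
```

Raising the temperature softens both distributions, which exposes the teacher's "dark knowledge" about near-miss tokens to the student.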
Yeah, yeah,[00:16:32] Alessio: and our friend, yeah.[00:16:33] Alessio: Clara Shih just joined to lead the business agent side. So I'm sure we'll have her on in the new year.[00:16:39] swyx: Yeah. So, um, my comment on the business model shift, this is super interesting. Apparently it is wide knowledge that OpenAI wanted more than 6.6 billion dollars for their fundraise. They wanted to raise, you know, higher, and they did not.[00:16:51] swyx: And what that means is basically like, it's very convenient that we're not getting GPT-5, which would have been a larger pre-train requiring a lot of upfront money. And [00:17:00] instead we're converting fixed costs into variable costs, right, and passing it on effectively to the customer. And it's so much easier to take margin there, because you can directly attribute it to, like, oh, you're using this more.[00:17:12] swyx: Therefore you pay more of the cost, and I'll just slap a margin in there. So like that lets you control your gross margin and tie your spend, or your sort of inference spend, accordingly. And it's just really interesting that this change in the sort of inference paradigm has arrived exactly at the same time that the funding environment for pre-training is effectively drying up, kind of.[00:17:36] swyx: I feel like maybe the VCs are very in tune with research anyway, so like, they would have noticed this, but, um, it's just interesting.[00:17:43] Alessio: Yeah, and I was looking back at our yearly recap of last year. Yeah. And the big thing was like the Mixtral price fights, you know, and I think now it's almost like there's nowhere to go, like, you know, Gemini Flash is like basically giving it away for free.[00:17:55] Alessio: So I think this is a good way for the labs to generate more revenue and pass down [00:18:00] some of the compute to the customer. I think they're going to[00:18:02] swyx: keep going. I think that $2,000 tier will come.[00:18:05] Alessio: Yeah, I know.
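The fixed-to-variable-cost point can be made concrete with toy unit economics: the price per token is just GPU time per token plus a markup, so usage-based pricing lets a lab pin a gross margin directly to each request. All numbers below are made-up placeholders, not any lab's real figures:

```python
def price_per_million_tokens(gpu_cost_per_hour: float,
                             tokens_per_second: float,
                             gross_margin: float = 0.6) -> float:
    """Toy inference pricing: variable serving cost plus a target margin.

    price = cost / (1 - margin), so the margin is controlled per
    request, unlike a pre-training run's fixed upfront capex.
    """
    tokens_per_hour = tokens_per_second * 3600
    cost_per_million = gpu_cost_per_hour / tokens_per_hour * 1e6
    return cost_per_million / (1.0 - gross_margin)

# A $2/hr GPU serving 1,000 tokens/s costs ~$0.56 per million tokens,
# so a 60% gross margin prices it at ~$1.39 per million.
price = price_per_million_tokens(2.0, 1000)
```

The same arithmetic run backwards is why a price war hurts: cutting price with fixed serving cost comes straight out of the margin term.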
Totally. I mean, next year, the first thing I'm doing is signing up for Devin. Signing up for ChatGPT Pro.[00:18:12] Alessio: Just to try. I just want to see, what does it look like to spend a thousand dollars a month on AI?[00:18:17] swyx: Yes. Yes. I think if your job is, at least, AI content creator or VC or, you know, someone whose job it is to stay on top of things, you should already be spending like a thousand dollars a month on stuff.[00:18:28] swyx: And then obviously, easy to spend, hard to use. You have to actually use it. The good thing is that actually Google lets you do a lot of stuff for free now. So like Deep Research, that they just launched, uses a ton of inference, and it's free while it's in preview.[00:18:45] Alessio: Yeah. They need to put that in Lindy.[00:18:47] Alessio: I've been using Lindy lately. I've built a bunch of things since they launched flows, because I like the new thing. It's pretty good. I even did a phone call assistant. Um, yeah, they just launched Lindy Voice. Yeah, I think once [00:19:00] they get advanced-voice-mode-like capability, today it's still like speech-to-text, you can kind of tell.[00:19:06] Alessio: Um, but it's good for like reservations and things like that. So I have a meeting prepper thing. And so[00:19:13] swyx: it's good. Okay. I feel like we've covered a lot of stuff. Uh, yeah, I, you know, I think we will go over the individual talks in a separate episode. Uh, I don't want to take too much time with this stuff, but suffice to say that there is a lot of progress in each field.[00:19:28] swyx: Uh, we covered vision. Basically this is all like the audience voting for what they wanted. And then I just invited the best people I could find in each area, especially agents. Um, Graham, who I talked to at ICML in Vienna, he is currently still number one.
It's very hard to stay on top of SWE-Bench.[00:19:45] swyx: OpenHands is currently still number one on SWE-Bench Full, which is the hardest one. He had very good thoughts on agents, which I'll highlight for people. Everyone is saying 2025 is the year of agents, just like they said last year. And, uh, but he had [00:20:00] thoughts on like eight parts of what are the frontier problems to solve in agents.[00:20:03] swyx: And so I'll highlight that talk as well.[00:20:05] Alessio: Yeah. The number six, which is how can agents learn more about the environment, has been super interesting to us as well, just to think through. Because, yeah, how do you put an agent in an enterprise, where most things in an enterprise have never been public, you know, a lot of the tooling, like the code bases and things like that.[00:20:23] Alessio: So, yeah, there's no indexing and RAG. Well, yeah, but it's more like, you can't really RAG things that are not documented. But people know them based on how they've been doing it, you know. So I think there's almost this, like, you know, institutional knowledge. Yeah, the boring word is kind of like business process extraction.[00:20:38] Alessio: Yeah, yeah, I see. It's like, how do you actually understand how these things are done? I see. Um, and I think today the problem is that the agents most people are building are good at following instructions, but are not as good at, like, extracting them from you. Um, so I think that will be a big unlock. Just to touch quickly on the Jeff Dean thing.[00:20:55] Alessio: I thought it was pretty, I mean, we'll link it in the [00:21:00] show notes, but I think the main focus was like, how do you use ML to optimize the systems, instead of just focusing on ML to do something else? Yeah, I think speculative decoding, we had, you know, Eugene from RWKV on the podcast before, like he's doing a lot of that with Featherless AI.[00:21:12] swyx: Everyone is.
I would say it's the norm. I'm a little bit uncomfortable with how much it costs, because it does use more of the GPU per call. But because everyone is so keen on fast inference, then yeah, it makes sense.[00:21:24] Alessio: Exactly. Um, yeah, but we'll link that. Obviously Jeff is great.[00:21:30] swyx: Jeff is, Jeff's talk was more, it wasn't focused on Gemini.[00:21:33] swyx: I think people got the wrong impression from my tweet. It's more about how Google approaches ML and uses ML to design systems, and then systems feed back into ML. And I think this ties in with Loubna's talk[00:21:45] Synthetic Data and Future Trends[00:21:45] swyx: on synthetic data, where it's basically the story of bootstrapping of humans and AI in AI research or AI in production.[00:21:53] swyx: So her talk was on synthetic data, where like how much synthetic data has grown in 2024 on the pre-training side, the post-training side, [00:22:00] and the eval side. And I think Jeff then also extended it basically to chips, uh, to chip design. So he spent a lot of time talking about AlphaChip. And most of us in the audience are like, we're not working on hardware, man.[00:22:11] swyx: Like you guys are great. TPU is great. Okay, we'll buy TPUs.[00:22:14] Alessio: And then there was the earlier talk. Yeah. But, and then we have, uh, I don't know if we're calling them essays. What are we calling these? But[00:22:23] swyx: for me, it's just like a bonus for Latent Space supporters, because I feel like they haven't been getting anything.[00:22:29] swyx: And then I wanted a more high-frequency way to write stuff. Like, that one I wrote in an afternoon. I think basically we now have an answer to What Ilya Saw. It's one year since the blip. We know what he saw in 2014. We think we know what he sees in 2024.
He gave some hints, and then we have vague indications of what he saw in 2023.[00:22:54] swyx: So that was the, oh, and then 2016 as well, because of this lawsuit with Elon. OpenAI [00:23:00] is publishing emails from Sam's, like, his personal text messages to Shivon Zilis, or whatever. So, like, we have emails from Ilya saying, this is what we're seeing in OpenAI, and this is why we need to scale up GPUs. And I think it's very prescient in 2016 to write that.[00:23:16] swyx: And so, like, it is exactly, like, basically his insights. It's him and Greg, basically, just kind of driving the scaling up of OpenAI, while they're still playing Dota. They're like, no, like, we see the path here.[00:23:30] Alessio: Yeah, and it's funny, yeah, they even mention, you know, we can only train on 1v1 Dota. We need to train on 5v5, and that takes too many GPUs.[00:23:37] Alessio: Yeah,[00:23:37] swyx: and at least for me, I can speak for myself, like, I didn't see the path from Dota to where we are today. I think even, maybe if you ask them, like, they wouldn't necessarily draw a straight line. Yeah,[00:23:47] Alessio: no, definitely. But I think like that was like the whole idea of almost like the RL, and we talked about this with Nathan on his podcast.[00:23:55] Alessio: It's like with RL, you can get very good at specific things, but then you can't really, like, generalize as much. And I [00:24:00] think the language models are like the opposite, which is like, you're going to throw all this data at them and scale them up, but then you really need to drive them home on a specific task later on.[00:24:08] Alessio: And we'll talk about the OpenAI reinforcement fine-tuning announcement too, and all of that. But yeah, I think like scale is all you need. That's kind of what Ilya will be remembered for. And I think just maybe to clarify on, like, the pre-training-is-over thing that people love to tweet.
I think the point of the talk was like, everybody, we're scaling these chips, we're scaling the compute, but like the second ingredient, which is data, is not scaling at the same rate.[00:24:35] Alessio: So it's not necessarily that pre-training is over. It's kind of like, what got us here won't get us there. In his email, he predicted like 10x growth every two years or something like that. And I think maybe now it's like, you know, you can 10x the chips again, but[00:24:49] swyx: I think it's 10x per year. Was it? I don't know.[00:24:52] Alessio: Exactly. And Moore's law is like 2x. So it's like, you know, much faster than that. And yeah, I like the fossil-fuel-of-AI [00:25:00] analogy. It's kind of like, you know, the little background tokens thing. So the OpenAI reinforcement fine-tuning is basically like, instead of fine-tuning on data, you fine-tune against a reward model.[00:25:09] Alessio: So it's basically like, instead of being data driven, it's task driven. And I think people have tasks to do; they don't really have a lot of data. So I'm curious to see how that changes how many people fine-tune, because I think this is what people run into. It's like, oh, you can fine-tune Llama. And it's like, okay, where do I get the data[00:25:27] Alessio: to fine-tune it on, you know? So it's great that we're moving the thing. And then I really like, he had this chart where, like, you know, the brain mass and the body mass thing is basically like, mammals scaled linearly by brain and body size, and then humans kind of broke off the slope. So it's almost like maybe the mammal slope is like the pre-training slope,[00:25:46] Alessio: and then the post-training slope is like the human one.[00:25:49] swyx: Yeah. I wonder what the, I mean, we'll know in 10 years, but I wonder what the y-axis is for Ilya's SSI. We'll try to get them on.[00:25:57] Alessio: Ilya, if you're listening, you're [00:26:00] welcome here.
Yeah, and then he had, you know, what comes next: like agents, synthetic data, inference compute. I thought all of that was like that.[00:26:05] Alessio: I don't[00:26:05] swyx: think he was dropping any alpha there. Yeah, yeah, yeah.[00:26:07] Alessio: Yeah. Any other NeurIPS highlights?[00:26:10] swyx: I think that there was comparatively a lot more work. Oh, by the way, I need to plug that, uh, my friend Yi made this, like, nice little paper list. Yeah, that was really[00:26:20] swyx: nice.[00:26:20] swyx: Uh, of, uh, of, like, all the, she called it must-read papers of 2024.[00:26:26] swyx: So I laid out some of these at NeurIPS, and it was just gone. Like, everyone just picked it up. Because people are dying for, like, little guidance and visualizations. And so, uh, I thought it was really super nice that we got there.[00:26:38] Alessio: Should we do a Latent Space book for each year? Uh, I thought about it. For each year we should.[00:26:42] Alessio: Coffee table book. Yeah. Yeah. Okay. Put it in the will. Hi, Will. By the way, we haven't introduced you. He's our new, you know, generalist, our Jamie. You need to[00:26:52] swyx: pull up more things. One thing I saw that, uh, okay, one fun one, and then one [00:27:00] more general one. So the fun one is this paper on agent collusion. This is a paper on steganography.[00:27:06] swyx: This is Secret Collusion Among AI Agents: Multi-Agent Deception via Steganography. I tried to go to NeurIPS in order to find these kinds of papers, because the real reason, like, NeurIPS this year has a lottery system. A lot of people actually even go and don't buy tickets, because they just go and attend the side events.[00:27:22] swyx: And then also the people who go end up crowding around the most popular papers, which you already know and already read before you showed up to NeurIPS. So the only reason you go there is to talk to the paper authors, but there's like something like 10,000 others.
All these papers out there that, you know, are just people's work that they did on the side, and they failed to get attention for one reason or another.[00:27:42] swyx: And this was one of them. Uh, it was like all the way at the back. And this is a DeepMind paper that actually focuses on collusion between AI agents, uh, by hiding messages in the text that they generate. Uh, so that's what steganography is. So a very simple example would be the first letter of every word.[00:27:57] swyx: If you pick that out, [00:28:00] it decodes to a different message than the surface text. But something I've always emphasized is, to LLMs, we read left to right. LLMs can read up, down, sideways, you know, in random character order, and it's the same to them as it is to us. So if we were ever to get, you know, self-motivated, unaligned LLMs that were trying to collaborate to take over the planet,[00:28:19] swyx: this would be how they do it. They spread messages among us in the messages that we generate. And he developed a scaling law for that. So he marked, I'm showing it on screen right now, the emergence of this phenomenon. Basically, for example, for cipher encoding: GPT-2, Llama 2, Mixtral, GPT-3.5 show zero capabilities, and suddenly GPT-4 does.[00:28:40] swyx: And this is the kind of Jason Wei-type emergent properties that people look for. I think what made this paper stand out as well is he developed the benchmark for steganography collusion, and he also focused on Schelling point collusion, which is very low coordination. For agreeing on an encoding/decoding format, you kind of need to have some [00:29:00] agreement on that.[00:29:00] swyx: But Schelling point means very, very low or almost no coordination. So for example, if the only message I give you is 'meet me in New York' and you're not aware of anything else, you would probably meet me at Grand Central Station.
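The first-letter scheme just described is easy to make concrete. A toy acrostic encoder/decoder, where the decode rule ("read the first letter of each word") is exactly the kind of low-coordination convention the discussion calls a Schelling point; a real colluding model would steer its token sampling rather than use a fixed word list:

```python
def encode_acrostic(secret: str, words_by_letter: dict) -> str:
    """Hide `secret` as the first letters of a cover sentence.

    `words_by_letter` is a toy vocabulary mapping each letter to
    candidate cover words (a stand-in for an LLM's sampler).
    """
    return " ".join(words_by_letter[c][0] for c in secret.lower())

def decode_acrostic(cover_text: str) -> str:
    """The agreed convention: read the first letter of each word."""
    return "".join(word[0] for word in cover_text.split())

vocab = {"h": ["hey"], "i": ["indeed"]}
cover = encode_acrostic("hi", vocab)  # "hey indeed"
```

The surface text reads as ordinary prose; only a reader who already shares the convention recovers the hidden channel, which is what makes monitoring for it hard.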
That is, Grand Central Station is a Schelling point,[00:29:16] swyx: and it's probably sometime during the day. The Schelling point of New York is Grand Central. To that extent, Schelling points for steganography are things like the common decoding methods that we talked about. It will be interesting at some point in the future when we are worried about alignment.[00:29:30] swyx: It is not interesting today, but it's interesting that DeepMind is already thinking about this.[00:29:36] Alessio: I think that's like one of the hardest things about NeurIPS. It's like the long tail.[00:29:41] swyx: I found a pricing guy. I'm going to feature him on the podcast. Basically, this guy from NVIDIA worked out the optimal pricing for language models.[00:29:51] swyx: It's basically an econometrics paper at NeurIPS, where everyone else is talking about GPUs. And the guy with the GPUs is[00:29:57] Alessio: talking[00:29:57] swyx: about economics instead. [00:30:00] That was the sort of fun one. So the focus I saw is that model papers at NeurIPS are kind of dead. No one really presents models anymore. It's just datasets.[00:30:12] swyx: This is all the grad students are working on. So like there was a datasets track, and then I was looking around like, I was like, you don't need a datasets track, because every paper is a datasets paper. And so datasets and benchmarks, they're kind of flip sides of the same thing. So yeah, if you're a grad student, you're GPU poor, you kind of work on that.[00:30:30] swyx: And then the sort of big models, people walk around and pick the ones that they like, and then they use them in their models. And that's kind of how it develops. I feel like, um, unlike last year, when you had people like Haotian, who worked on LLaVA, which is take Llama and add vision,[00:30:47] swyx: and then obviously actually I hired him and he added vision to Grok. Now he's the Vision Grok guy.
This year, I don't think there was any of those.[00:30:55] Alessio: What were the most popular, like, orals? Last year it was like [00:31:00] Monarch Mixer, I think, was like the most attended. Yeah, uh, I need to look it up. Yeah, I mean, if nothing comes to mind, that's also kind of like an answer in a way.[00:31:10] Alessio: But I think last year there was a lot of interest in, like, furthering models and, like, different architectures and all of that.[00:31:16] swyx: I will say that I felt the oral picks this year were not very good. Either that, or maybe it's just me. So that's the highlight of how I have changed in terms of how I view papers.[00:31:29] swyx: So like, in my estimation, two of the best papers this year for datasets are DataComp and RefinedWeb, or FineWeb. These are two actually industrially used papers, not highlighted for a while. I think DCLM got the spotlight; FineWeb didn't even get the spotlight. So like, it's just that the picks were different.[00:31:48] swyx: But one thing that does get a lot of play, that a lot of people are debating, is the role of the schedule. This is the Schedule-Free optimizer paper from Meta, from Aaron Defazio. And this [00:32:00] year in the ML community, there's been a lot of chat about Shampoo, SOAP, all the bathroom amenities for optimizing your learning rates.[00:32:08] swyx: And, uh, most people at the big labs who I asked about this, um, say that it's cute, but it's not something that matters. I don't know, but it's something that was discussed and very, very popular. Four Wars[00:32:19] Alessio: of AI recap maybe, just quickly. Um, where do you want to start? Data?[00:32:26] swyx: So to remind people, this is the Four Wars piece that we did as one of our earlier recaps of this year.[00:32:31] swyx: And the belligerents are, on the left, journalists, writers, artists, anyone who owns IP, basically: New York Times, Stack Overflow, Reddit, Getty, Sarah Silverman, George R.R. Martin.
Yeah, and I think this year we can add Scarlett Johansson to that side of the fence. So anyone suing OpenAI, basically. I actually wanted to get a snapshot of all the lawsuits.[00:32:52] swyx: I'm sure some lawyer can do it. That's the data quality war. On the right hand side, we have the synthetic data people, and I think we talked about Loubna's talk, you know, [00:33:00] really showing how much synthetic data has come along this year. I think there was a bit of a fight between Scale AI and the synthetic data community, because Scale[00:33:09] swyx: AI published a paper saying that synthetic data doesn't work. Surprise, surprise: Scale AI is the leading vendor of non-synthetic data. Only[00:33:17] Alessio: cage-free annotated data is useful.[00:33:21] swyx: So I think there's some debate going on there, but I don't think it's much debate anymore that at least synthetic data, for the reasons expressed in Loubna's talk, makes sense.[00:33:32] swyx: I don't know if you have any perspectives there.[00:33:34] Alessio: I think, again, going back to the reinforcement fine-tuning, I think that will change a little bit how people think about it. I think today people mostly use synthetic data, yeah, for distillation and kind of like fine-tuning a smaller model from, like, a larger model.[00:33:46] Alessio: I'm not super aware of how the frontier labs use it outside of, like, the Rephrase the Web thing that Apple also did. But yeah, I think it'll be useful.
I think, like, whether or not that gets us the big [00:34:00] next step, I think that's maybe, like, TBD, you know. I think people love talking about data because, like, for the GPU poor, you know, I think, uh, synthetic data is like something that people can do, you know, so they feel more opinionated about it, compared to, yeah, the optimizer stuff, which is like,[00:34:17] swyx: they don't[00:34:17] Alessio: really work[00:34:18] swyx: on.[00:34:18] swyx: I think that there is an angle to the reasoning synthetic data. So this year, we covered in the paper club the STaR series of papers. So that's STaR, Q-STaR, V-STaR. It basically helps you to synthesize reasoning steps, or at least distill reasoning steps from a verifier. And if you look at the OpenAI RFT API that they released, or that they announced, basically they're asking you to submit graders, or they choose from a preset list of graders.[00:34:49] swyx: Basically it feels like a way to create valid synthetic data for them to fine-tune their reasoning paths on. Um, so I think that is another angle where it starts to make sense. And [00:35:00] so, like, it's very funny that basically all the data quality wars between, let's say, the music industry or the newspaper publishing industry or the textbooks industry and the big labs,[00:35:11] swyx: it's all of the pre-training era. And then, like, the new era, like the reasoning era, like nobody has any problem with all the reasoning, especially because it's all sort of math and science oriented, with very reasonable graders. I think the more interesting next step is how does it generalize beyond STEM?[00:35:27] swyx: We've been using O1 for a while, and I would say, like, for summarization and creative writing and instruction following, I think it's underrated. I started using O1 in our intro songs before we killed the intro songs, but it's very good at writing lyrics.
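The STaR-style recipe referenced above (sample reasoning traces, keep only the ones a verifier accepts, fine-tune on the survivors) can be illustrated with a toy rejection-sampling loop. `model` and `grader` here are placeholder callables on a fake arithmetic task, not OpenAI's actual RFT API:

```python
import random

def star_style_filter(questions, model, grader, k=8):
    """Rejection-sample reasoning traces and keep grader-verified ones.

    For each question, draw up to k candidate traces from `model`;
    the first one `grader` accepts joins the synthetic training set.
    """
    dataset = []
    for q in questions:
        for _ in range(k):
            trace = model(q)
            if grader(q, trace):
                dataset.append((q, trace))
                break  # one verified trace per question is enough
    return dataset

# Toy task: the "model" sometimes makes an off-by-one error, and the
# grader checks the final answer exactly, filtering bad traces out.
random.seed(0)
model = lambda q: f"{q} = {eval(q) + random.choice([0, 0, 1])}"
grader = lambda q, t: t.endswith(f"= {eval(q)}")
data = star_style_filter(["2+2", "3*3"], model, grader)
```

The surviving pairs are, by construction, verified-correct, which is what makes this kind of synthetic data uncontroversial in math and science domains with cheap graders.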
You know, I can actually say, like, I think one of the O1 Pro demos,[00:35:46] swyx: all of these things that Noam was showing, was that, you know, you can write an entire paragraph or three paragraphs without using the letter A, right?[00:35:53] Creative Writing with AI[00:35:53] swyx: So like, literally just anything, instead of token, like not even token-level, character-level manipulation and [00:36:00] counting and instruction following. It's, uh, it's very, very strong.[00:36:02] swyx: And so, no surprises, when I ask it to rhyme, uh, and to create song lyrics, it's going to do that very much better than previous models. So I think it's underrated for creative writing.[00:36:11] Alessio: Yeah.[00:36:12] Legal and Ethical Issues in AI[00:36:12] Alessio: What do you think is the rationale that they're going to have in court when they don't show you the thinking traces of O1, but then they want us to, like, they're getting sued for using other publishers' data, you know, but then on their end, they're like, well, you shouldn't be using my data to then train your model.[00:36:29] Alessio: So I'm curious to see how that kind of comes out.[00:36:32] swyx: Yeah, I mean, OpenAI has many ways to punish people without taking them to court. They already banned ByteDance for distilling their info. And so anyone caught distilling the chain of thought will be just disallowed to continue on the API.[00:36:44] swyx: And it's fine. It's no big deal. Like, I don't even think that's an issue at all, just because the chains of thought are pretty well hidden. Like, you have to work very, very hard to get it to leak. And then even when it leaks the chain of thought, you don't know if it's, if it's... [00:37:00] The bigger concern is actually that there's not that much IP hiding behind it: that Cosine, which we talked about, we talked to him on Dev Day, can just fine-tune [00:37:13] swyx: 4o to beat o1. Claude Sonnet so far is beating O1 on coding tasks, at least O1 Preview, without being a reasoning model. Same for Gemini Pro or Gemini 2.0. So like, how much is reasoning important? How much of a moat is there in this, like, all of this proprietary sort of training data that they've presumably accumulated?[00:37:34] swyx: Because even DeepSeek was able to do it, and they had, you know, two months' notice to do this, to do R1. So, it's actually unclear how much moat there is. Obviously, you know, if you talk to the Strawberry team, they'll be like, yeah, I mean, we spent the last two years doing this. So, we don't know. And it's going to be interesting, because there'll be a lot of noise from people who say they have inference-time compute and actually don't, because they just have fancy chain of thought.[00:38:00][00:38:00] swyx: And then there's other people who actually do have very good chain of thought, and you will not see them on the same level as OpenAI, because OpenAI has invested a lot in building up the mythology of their team. Um, which makes sense. Like, the real answer is somewhere in between.[00:38:13] Alessio: Yeah, I think that's kind of like the main data war story developing.[00:38:18] The Data War: GPU Poor vs. GPU Rich[00:38:18] Alessio: GPU poor versus GPU rich. Yeah. Where do you think we are? I think there was, again, going back to like the small model thing, there was like a time in which the GPU poor were kind of like the rebel faction, working on, like, these models that were, like, open and small and cheap. And I think today people don't really care as much about GPUs anymore.[00:38:37] Alessio: You also see it in the price of the GPUs. Like, you know, that market has kind of plummeted, because people don't want to be, they want to be GPU free. They don't even want to be poor. They just want to be, you know, completely without them. Yeah. How do you think about this war?
[00:38:52] swyx: You can tell me about this, but like, I feel like the, the appetite for GPU-rich startups, like the, you know, the funding plan is, we will raise 60 million and we'll give 50 of that to NVIDIA.
[00:39:01] swyx: That is gone, right? Like, no one's, no one's pitching that. This was literally the plan, the exact plan of, like, I can name like four or five startups, you know, this time last year. So yeah, GPU-rich startups gone.
[00:39:12] The Rise of GPU Ultra Rich
[00:39:12] swyx: But I think like, the GPU ultra rich, the GPU ultra high net worth, is still going. So, um, now we're, you know, we had Leopold's essay on the trillion-dollar cluster.
[00:39:23] swyx: We're not quite there yet. We have multiple labs, um, you know, xAI very famously, you know, Jensen Huang praising them for being best boy, number one in spinning up a 100,000 GPU cluster in like 12 days or something. So likewise at Meta, likewise at OpenAI, likewise at the other labs as well. So like the GPU ultra rich are going to keep doing that because I think partially it's an article of faith now that you just need it.
[00:39:46] swyx: Like you don't even know what it's going to, what you're going to use it for. You just, you just need it. And it makes sense that if, especially if we're going into more researchy territory than we are. So let's say 2020 to 2023 was let's-scale-big-models territory, because we had GPT-3 in 2020 and we were like, okay, we'll go from 175B to 1.8, 1.8T.
[00:40:05] swyx: And that was GPT-3 to GPT-4. Okay, that's done. As far as everyone is concerned, Opus 3.5 is not coming out, GPT-4.5 is not coming out, and Gemini 2, we don't have Pro, whatever. We've hit that wall. Maybe I'll call it the 2 trillion parameter wall. We're not going to 10 trillion. No one thinks it's a good idea, at least from training costs, from the amount of data, or at least the inference.
[00:40:36] swyx: Would you pay 10x the price of GPT? Probably not.
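The "175B to 1.8T" and "2 trillion parameter wall" talk is easier to follow with the back-of-envelope scaling math: training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D tokens, and the Chinchilla result says compute-optimal training uses on the order of 20 tokens per parameter. A rough sketch (the 175B-parameter / 300B-token figures are public GPT-3 estimates, not official numbers):

```python
# Back-of-envelope Chinchilla math: training compute C ~ 6 * N * D
# (params times tokens), with the Chinchilla rule of thumb that
# compute-optimal training uses roughly 20 tokens per parameter.
# All numbers are rough public estimates, not official figures.

import math

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: C ~ 6 * N * D."""
    return 6.0 * params * tokens

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Compute-optimal params N and tokens D for a FLOP budget.

    Solves C = 6 * N * D under the constraint D = tokens_per_param * N.
    """
    n = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    return n, tokens_per_param * n

# A GPT-3-scale run: ~175B params on ~300B tokens...
c = training_flops(175e9, 300e9)
n_opt, d_opt = chinchilla_optimal(c)
# ...was undertrained: the same budget prefers a smaller model on more tokens.
print(f"{c:.2e} FLOPs -> optimal ~{n_opt/1e9:.0f}B params, ~{d_opt/1e9:.0f}B tokens")
```

Plugging in the GPT-3-scale budget (~3e23 FLOPs) yields a compute-optimal model around 51B parameters trained on about a trillion tokens, which is why models of that era are described as undertrained relative to their size.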
Like, like you want something else that, that is at least more useful. So it makes sense that people are pivoting in terms of their inference paradigm.
[00:40:47] Emerging Trends in AI Models
[00:40:47] swyx: And so when it's more researchy, then you actually need more just general purpose compute to mess around with, uh, at the exact same time that production deployments of the old, the previous paradigm, are still ramping up, um, uh, pretty aggressively.
[00:40:59] swyx: So it makes sense that the GPU rich are growing. We have now interviewed both Together and Fireworks and Replicate. Uh, we haven't done Anyscale yet. But I think Amazon, maybe kind of a sleeper one, Amazon, in a sense of like, they, at re:Invent, I wasn't expecting them to do so well, but they are now a foundation model lab.
[00:41:18] swyx: It's kind of interesting. Um, I think, uh, you know, David went over there and started just creating models.
[00:41:25] Alessio: Yeah, I mean, that's the power of prepaid contracts. I think like a lot of AWS customers, you know, they do these big reserved instance contracts and now they've got to use their money. That's why so many startups
[00:41:37] Alessio: get bought through the AWS marketplace, so they can kind of bundle them together and get preferred pricing.
[00:41:42] swyx: Okay, so maybe GPU super rich doing very well, GPU middle class dead, and then GPU
[00:41:48] Alessio: poor. I mean, my thing is like, everybody should just be GPU rich. There shouldn't really be, even the GPU poorest, it's like, does it really make sense to be GPU poor?
[00:41:57] Alessio: Like, if you're GPU poor, you should just use the cloud. Yes, you know, and I think there might be a future once we kind of figure out what the size and shape of these models is, where like the tinybox and these things come to fruition, where like you can be GPU poor at home.
But I think today it's like, why are you working so hard to, like, get these models to run on, like, very small clusters when it's, like, so cheap to run them?
[00:42:21] Alessio: Yeah, yeah,
[00:42:22] swyx: yeah. I think mostly people think it's cool. People think it's a stepping stone to scaling up. So they aspire to be GPU rich one day and they're working on new methods. Like Nous Research, probably the most deep-tech thing they've done this year is DisTrO, or whatever the new name is.
[00:42:38] swyx: There's a lot of interest in heterogeneous computing, distributed computing. I tend generally to de-emphasize that historically, but it may be coming to a time where it is starting to be relevant. I don't know. You know, SF Compute launched their compute marketplace this year, and like, who's really using that?
[00:42:53] swyx: Like, it's a bunch of small clusters, disparate types of compute, and if you can make that useful, then that will be very beneficial to the broader community, but maybe still not the source of frontier models. It's just going to be a second tier of compute that is unlocked for people, and that's fine. But yeah, I mean, I think this year, I would say, a lot more on-device. I now have Apple Intelligence on my phone.
[00:43:19] swyx: Doesn't do anything apart from summarize my notifications. But still, not bad. Like, it's multi-modal.
[00:43:25] Alessio: Yeah, the notification summaries are so-so in my experience.
[00:43:29] swyx: Yeah, but they add, they add juice to life. And then, um, Chrome Nano, uh, Gemini Nano is coming out in Chrome. Uh, it's still feature-flagged, but you can, you can try it now if you, if you use the, uh, the alpha.
[00:43:40] swyx: And so, like, I, I think, like, you know, we're getting the sort of GPU-poor version of a lot of these things coming out, and I think it's like quite useful. Like Windows as well, rolling out RWKV in sort of every Windows deployment is super cool.
And I think the last thing that I never put in this GPU-poor war, that I think I should now, is the number of startups that are GPU poor but still scaling very well as sort of wrappers on top of either a foundation model lab or a GPU cloud.
[00:44:10] swyx: GPU cloud, it would be Suno. Suno, Ramp has rated as one of the top-ranked, fastest-growing startups of the year. Um, I think the last public number is like zero to 20 million this year in ARR, and Suno runs on Modal. So Suno itself is not GPU rich, but they're just doing the training on, on Modal, uh, who we've also talked to on, on the podcast.
[00:44:31] swyx: The other one would be Bolt, straight Claude wrapper. And, and, um, again, another, now they've announced 20 million ARR, which is another step up from the 8 million that we put in the title. So yeah, I mean, it's crazy that all these GPU poors are finding a way while the GPU riches are also finding a way. And then the only failures, I kind of call this the GPU smiling curve, where the edges do well, because you're either close to the machines, and you're like number one on the machines, or you're like close to the customers, and you're number one on the customer side.
[00:45:03] swyx: And the people who are in the middle, Inflection, um, Character, didn't do that great. I think Character did the best of all of them. Like, you have a note in here that we apparently said that Character's price tag was
[00:45:15] Alessio: 1B.
[00:45:15] swyx: Did I say that?
[00:45:16] Alessio: Yeah. You said Google should just buy them for 1B. I thought it was a crazy number.
[00:45:20] Alessio: Then they paid 2.7 billion. I mean, for like,
[00:45:22] swyx: yeah.
[00:45:22] Alessio: What do you pay for Noam? Like, I don't know what the going rate was like. Maybe the starting price was 1B. I mean, whatever it was, it worked out for everybody involved.
[00:45:31] The Multi-Modality War
[00:45:31] Alessio: Multimodality war.
And this one, we never had text-to-video in the first version, which now is the hottest.
[00:45:37] swyx: Yeah, I would say it's a subset of image, but yes.
[00:45:40] Alessio: Yeah, well, but I think at the time it wasn't really something people were doing, and now we had Veo 2 just come out yesterday. Uh, Sora was released last month, last week. I've not tried Sora, because the day that I tried, it wasn't, yeah.
[00:45:54] swyx: I think it's generally available now, you can go to sora.com and try it.
[00:45:58] Alessio: Yeah, they had the outage. Which I think also played a part into it. Small things. Yeah. What's the other model that you posted today that was on Replicate? Video-01-Live?
[00:46:08] swyx: Yeah. Very, very nondescript name, but it is from Minimax, which I think is a Chinese lab. The Chinese labs do surprisingly well at the video models.
[00:46:20] swyx: I'm not sure it's actually Chinese. I don't know. Hold me up to that. Yep. China. It's good. Yeah, the Chinese love video. What can I say? They have a lot of training data for video. Or a more relaxed regulatory environment.
[00:46:37] Alessio: Uh, well, sure, in some way. Yeah, I don't think there's much else there. I think like, you know, on the image side, I think it's still open.
[00:46:45] Alessio: Yeah, I mean,
[00:46:46] swyx: ElevenLabs is now a unicorn. So basically, what is the multi-modality war? The multi-modality war is, do you specialize in a single modality, right? Or do you have a God model that does all the modalities? So this is definitely still going, in the sense that ElevenLabs, you know, now a unicorn, Pika Labs doing well, they launched Pika 2.0 recently, HeyGen, I think, has reached 100 million ARR, AssemblyAI, I don't know, but they have billboards all over the place, so I assume they're doing very, very well. So these are all specialist models, specialist models and specialist startups.
And then there's the big labs who are doing the sort of all-in-one play.
[00:47:24] swyx: And then here I would highlight Gemini 2 for having native image output. Have you seen the demos? Um, yeah, it's, it's hard to keep up. Literally they launched this last week, and a shout out to Paige Bailey, who came to the Latent Space event to demo on the day of launch. And she wasn't prepared. She was just like, I'm just going to show you.
[00:47:43] swyx: So they have voice. They have, you know, obviously image input, and then they obviously can code-gen and all that. But the new one that OpenAI and Meta both have but they haven't launched yet is image output. So you can literally, um, I think their demo video was that you put in an image of a car, and you ask for minor modifications to that car.
[00:48:02] swyx: They can generate you that modification exactly as you asked. So there's no need for the Stable Diffusion or ComfyUI workflow of, like, mask here and then, like, infill there, inpaint there, and all that, all that stuff. This is small-model nonsense. Big-model people are like, huh, we got everything in the transformer.
[00:48:21] swyx: This is the multi-modality war, which is, do you, do you bet on the God model, or do you string together a whole bunch of, uh, small models like a, like a chump. Yeah,
[00:48:29] Alessio: I don't know, man. Yeah, that would be interesting. I mean, obviously I use Midjourney for all of our thumbnails. Um, they've been doing a ton on the product, I would say.
[00:48:38] Alessio: They launched a new Midjourney editor thing. They've been doing a ton. Because I think, yeah, the moat is kind of like, maybe, you know, people say the Black Forest models are better than Midjourney on a pixel-by-pixel basis. But I think when you put it, put it together, have you tried
[00:48:53] swyx: the same problems on Black Forest?
[00:48:55] Alessio: Yes.
But the problem is just like, you know, on Black Forest, it generates one image, and then it's like, you've got to regenerate. You don't have all these, like, UI things. Like what I do, no, but it's like a time issue, you know, it's like a
[00:49:06] swyx: Midjourney. Call the API four times.
[00:49:08] Alessio: No, but then there's no, like, variate. Like the good thing about Midjourney is like, you just go in there and you're cooking. There's a lot of stuff that just makes it really easy. And I think people underestimate that. Like, it's not really a skill issue, because I'm paying Midjourney, so it's a Black Forest skill issue, because I'm not paying them, you know?
[00:49:24] Alessio: Yeah,
[00:49:25] swyx: so, okay, so, uh, this is a UX thing, right? Like, you, you, you understand that, at least, we think that Black Forest should be able to do all that stuff. I will also shout out, Recraft has come out, uh, on top of the image arena that, uh, Artificial Analysis has done, has apparently taken, uh, Flux's place. Is this still true?
[00:49:41] swyx: So, Artificial Analysis is now a company. I highlighted them, I think, in one of the early AI News issues of the year. And they have launched a whole bunch of arenas. So they're trying to take on LM Arena, Anastasios and crew. And they have an image arena. Oh yeah, Recraft V3 is now beating Flux 1.1. Which is very surprising, because Flux and Black Forest Labs are the old Stable Diffusion crew who left Stability after, um, the management issues.
[00:50:06] swyx: So Recraft has come from nowhere to be the top image model. Uh, very, very strange. I would also highlight that Grok has now launched Aurora, which is, it's very interesting dynamics between Grok and Black Forest Labs, because Grok's images were originally launched, uh, in partnership with Black Forest Labs as a, as a thin wrapper.
[00:50:24] swyx: And then Grok was like, no, we'll make our own. And so they've made their own.
I don't know, there are no APIs or benchmarks about it. They just announced it. So yeah, that's the multi-modality war. I would say that so far, the small model, the dedicated model people are winning, because they are just focused on their tasks.
[00:50:42] swyx: But the big model people are always catching up. And the moment I saw the Gemini 2 demo of image editing, where I can put in an image and just request it and it does it, that's how AI should work. Not like a whole bunch of complicated steps. So it really is something. And I think one frontier that we haven't seen this year, like obviously video has done very well, and it will continue to grow.
[00:51:03] swyx: You know, we only have Sora Turbo today, but at some point we'll get full Sora. Oh, at least the Hollywood labs will get full Sora. We haven't seen video-to-audio, or video synced to audio. And so the researchers that I talked to are already starting to talk about that as the next frontier. But there's still maybe like five more years of video left to actually be solved.
[00:51:23] swyx: I would say that Gemini's approach, compared to OpenAI, Gemini's, or DeepMind's, approach to video seems a lot more fully fledged than OpenAI's. Because if you look at the ICML recap that I published, that so far nobody has listened to, um, that people have listened to. It's just a different, definitely different audience.
[00:51:43] swyx: It's only seven hours long. Why are people not listening? It's like everything in... Uh, so, so DeepMind has, is working on Genie. They also launched Genie 2 and VideoPoet. So, like, they have maybe four years' advantage on world modeling that OpenAI does not have. Because OpenAI basically only started diffusion transformers last year, you know, when they hired, uh, Bill Peebles.
[00:52:03] swyx: So DeepMind has, has a bit of advantage here, I would say, in, in, in showing, like, the reason that Veo 2, while, one, they cherry-pick their videos.
So obviously it looks better than Sora, but the reason I would believe that Veo 2, uh, when it's fully launched, will do very well is because they have all this background work in video that they've done for years.
[00:52:22] swyx: Like, like last year's NeurIPS, I was already interviewing some of their video people. I forget their model name, but for, for people who are dedicated fans, they can go to NeurIPS 2023 and see, see that paper.
[00:52:32] Alessio: And then last but not least, the LLM OS. We renamed it to RAGOps, formerly known as
[00:52:39] swyx: RAGOps War. I put the latest chart on the Braintrust episode.
[00:52:43] swyx: I think I'm going to separate these essays from the episode notes. So the reason I used to do that, by the way, is because I wanted to show up on Hacker News. I wanted the podcast to show up on Hacker News. So I always put an essay inside of there, because Hacker News people like to read and not listen.
[00:52:58] Alessio: So episode essays,
[00:52:59] swyx: I remember purchasing them separately. You say LangChain, LlamaIndex is still growing.
[00:53:03] Alessio: Yeah, so I looked at the PyPI stats, you know. I don't care about stars. On PyPI you see... Do you want to share your screen? Yes. I prefer to look at actual downloads, not at stars on GitHub. So if you look at, you know, LangChain still growing.
[00:53:20] Alessio: These are the last six months. LlamaIndex still growing. What I've basically seen is like things that, one, obviously these things have a commercial product. So there's like people buying this and sticking with it, versus kind of hopping in between things, versus, you know, for example, CrewAI, not really growing as much.
[00:53:38] Alessio: The stars are growing. If you look on GitHub, like the stars are growing, but kind of like the usage is kind of like flat. In the last six months, have they done some

BonkTable Podcast
BonkTable Podcast EP41 Wadrhun THE Faction (Featuring Apex Chinchilla)

BonkTable Podcast

Play Episode Listen Later Dec 22, 2024 331:54


Episode 41. We invited Apex Chinchilla from Chanting Conquest to talk about Wadrhun in our longest episode yet.
Chanting Conquest: https://chantingconquest.blogspot.com/
Cassandra's Twitter: twitter.com/bonktablecass
We are also starting up a Discord server for anyone who wants to join us and talk about Conquest or any other miniature wargames people play: https://discord.gg/ztuD6MUMrE
You can also find this podcast anywhere you listen to podcasts: https://www.buzzsprout.com/2183437/share
Music
Also check out our YouTube channel for Battle Reports and Lore readings: https://youtube.com/@BonkTable?si=VOHnpMb5yDC5uTn6
Thumbnail and art from Nicolette Nuytten. Twitter and Instagram: @LibraryNii. Website: www.nicolettenuytten.com

Radioestadio noche
Roberto Chinchilla, of Campeones, and his charitable side with those affected by the DANA

Radioestadio noche

Play Episode Listen Later Dec 20, 2024 8:25


The Campeones actor talks on Radioestadio noche about this Saturday's charity initiative.

BuneBape
Ep 196: Hating On Chinchillas With Bastila

BuneBape

Play Episode Listen Later Dec 19, 2024 212:38


Follow Bastila:
Twitch: twitch.tv/bastila
Youtube: youtube.com/@QueenBastila
Twitter: twitter.com/Basti_Babe
Instagram: instagram.com/immadeofplastic
This week we are joined by Bastila! We also talk about the game jam, fletching minigame, and we do a Q&A.
EPISODE TIME STAMPS
00:00 Intro/personal updates
06:08 Game Jam VI
47:36 Fletching minigame
1:05:31 Q&A with Rob & Michelle
1:13:02 Joined by Bastila
3:00:34 Q&A with Bastila
3:30:22 Outro
Episode notes:
https://secure.runescape.com/m=news/a=13/game-jam-vi---october-2024?oldschool=1
https://secure.runescape.com/m=news/a=13/fletching-activity---varlamore-the-final-dawn?oldschool=1
Help buy cosplay supplies: https://throne.com/bunebape
Watch live at: https://www.twitch.tv/bunebape
Join Our Community Discord at: https://discord.gg/44jX6yNCVK
Join our OSRS Clan! Clan: Bunebape. Friend Chat: /Bunebapeosrs
Did you enjoy the content or have any questions? Let us know by commenting and check out more content you might enjoy at the links below.
Podcast: https://anchor.fm/bunebape
Instagram: https://www.instagram.com/bunebape/?hl=en
Twitter: https://twitter.com/bunebapeosrs
TikTok: https://www.tiktok.com/@bunebapeosrs
Merch: https://bunebape.com
Youtube: https://youtube.com/bunebape
Business Inquiries: Bunebape@gmail.com
Tags: #osrs #oldschoolrunescape #osrspodcast #runescapepodcast #podcast
Is the King Kyatt gameplay loop too gnarly?

Encyclopedia Womannica
Go-Getters: Peggy Hopkins Joyce

Encyclopedia Womannica

Play Episode Listen Later Dec 16, 2024 7:33 Transcription Available


Peggy Hopkins Joyce (1893-1957) was an American actress, model and socialite known for her multiple marriages to millionaires.
For Further Reading:
The Iron Forger and the Gold Digger
The Legend of Peggy Hopkins Joyce: She Collected Men, Chinchilla, Diamonds
Peggy Hopkins Joyce Dies at 63; Showgirl of '20's
Gold Digger: The Outrageous Life and Times of Peggy Hopkins Joyce
This month we're talking about Go-Getters: women who purposefully, or accidentally, acquired life-changing wealth, good fortune, or influence.
History classes can get a bad rap, and sometimes for good reason. When we were students, we couldn't help wondering: where were all the ladies at? Why were so many incredible stories missing from the typical curriculum? Enter, Womanica. On this Wonder Media Network podcast we explore the lives of inspiring women in history you may not know about, but definitely should. Every weekday, listeners explore the trials, tragedies, and triumphs of groundbreaking women throughout history who have dramatically shaped the world around us. In each five-minute episode, we dive into the story behind one woman listeners may or may not know, but definitely should. These diverse women from across space and time are grouped into easily accessible and engaging monthly themes like Educators, Villains, Indigenous Storytellers, Activists, and many more.
Womanica is hosted by WMN co-founder and award-winning journalist Jenny Kaplan. The bite-sized episodes pack painstakingly researched content into fun, entertaining, and addictive daily adventures. Womanica was created by Liz Kaplan and Jenny Kaplan, executive produced by Jenny Kaplan, and produced by Grace Lynch, Maddy Foley, Brittany Martinez, Edie Allard, Lindsey Kratochwill, Adesuwa Agbonile, Carmen Borca-Carrillo, Taylor Williamson, Sara Schleede, Paloma Moreno Jimenez, Luci Jones, Abbey Delk, Hannah Bottum, Adrien Behn, Alyia Yates, and Vanessa Handy. Special thanks to Shira Atkins.
Original theme music composed by Miles Moran. Follow Wonder Media Network: Website Instagram Twitter See omnystudio.com/listener for privacy information.

Jubal Phone Pranks from The Jubal Show
You Left Your Chinchilla in My Uber

Jubal Phone Pranks from The Jubal Show

Play Episode Listen Later Dec 5, 2024 4:07 Transcription Available


Jubal Phone Pranks on The Jubal Show. Need someone to feel the wrath of a Jubal Fresh character? He'll call whoever you want and prank them... so hard. It's funny. Submit yours here: https://forms.gle/mgACgtLBP3SPcyRR7
This is just a tiny piece of The Jubal Show. You can find every podcast we have, including the full show every weekday, right here: https://thejubalshow.com/podcasts
The Jubal Show is everywhere, and also these places:
Website: https://thejubalshow.com
Instagram: https://instagram.com/thejubalshow
X/Twitter: https://twitter.com/thejubalshow
TikTok: https://www.tiktok.com/@the.jubal.show
YouTube: https://www.youtube.com/@JubalFresh
Meet The Jubal Show Cast:
Jubal Fresh: https://jubalshow.com/featured/jubal-fresh/
Nina: https://thejubalshow.com/featured/ninaontheair/
Victoria: https://jubalshow.com/featured/victoria-ramirez/
Brad Nolan: https://jubalshow.com/featured/brad-nolan/
Sharkey: https://jubalshow.com/featured/richard-sharkey/
See omnystudio.com/listener for privacy information.


Not Proud of It Podcast
Episode 282 - Chinchilla

Not Proud of It Podcast

Play Episode Listen Later Nov 28, 2024 109:07


This week we learn about turkeys, witches and the dangers lurking beneath fallen leaves.

ExplicitNovels
Cáel and the Manhattan Amazons: Part 3

ExplicitNovels

Play Episode Listen Later Nov 3, 2024


Women of any age can drive a man to madness.In 25 parts, edited from the works of FinalStand.Listen and subscribe to the ► Podcast at Connected..“Instinct, education and experience are complementary, not in opposition.”(Wednesday)The phone rang. The clock was flashing 6:15. Odette snuggled up to me, making cute, happy cat-like noises. Timothy's bed was bigger than mine so I had to reach out to get my mobile device. For the tenth time, I silently thanked Timothy for switching bedrooms with me, though I believed he had chosen to sleep on the sofa instead."Hello," I said quietly."It's Buffy. I'll be there in fifteen minutes," she stated firmly."I have a companion over," I hesitated. "Can you make it twenty-five?""Who is that, Cáel Nyilas," Odette yawned. She liked the way my full name rolled of her tongue."Who is that?" Buffy grilled me."She's a sweet young lady I met; the rest is none of your business," I told Buffy. To Odette, "It is one of my many bosses. After my 'auto accident' (I couldn't tell a stranger that some psycho bitch; who I had just screwed; had her mentor kick the shit out of me), she brought me home then deposited me at your workplace. My bike is still at work." I had told Odette I was a cyclist."Does she think you are sexy?" Odette giggled. I groaned."81 days, Cáel," Buffy reminded me. "81 days," then she hung up. I wasn't getting my extra ten minutes."Do we have time?" Odette wiggled her whole body against mine."I don't think so. Babe," I sighed. "All I can do is go down on you then I have to grab a shower and get dressed." Odette blinked, blinked again, then brightened up incredibly."If that's all we can do," she exhibited no regrets as she hurled the covers back. It took me seven minutes to bring her to.I was good, but I had also torn up Odette pretty badly last night. I had to buy Timothy some more condoms. I felt kinda bad for using the number I did. 
I raced to the shower, did a Wonder Woman (hold your arms out and spin around a few times in the shower), raced back to Timothy's room; Timothy shot me with his Nerf gun from the sofa (Odette was vocal); and began dressing."Odette, stay and get some sleep," I stroked her cheek. "Timothy heads to work around ten, so if you could head out with him so he can lock up the place. Fix whatever breakfast you like. If it is Timothy, I'll make it up to him.""You mean beyond letting us use his room?" she fixed me with her feline eyes. I coughed."Come on, Cáel Nyilas, this room is plastered with male Calvin Klein models and you have five copies of the Village Voice on your dresser. You are far too proficient with punching all my buttons to be gay," she pointed out."Gay men can be very sexually proficient," I countered."Cáel Nyilas (damn, she loved my name), you came five times. I lost track of how many orgasms I had. If you are gay, you aren't in De-Nile, you are in Ethiopia," she giggled. This wasn't the right moment to brag that I ejaculated eight times last night. Rhada filled up three condoms during our little escapade. I repeat, I have an out of control libido."Gotta go," I straddled Odette and gave her a kiss. I deftly avoided the French grapple because I had the feeling that Buffy wasn't the kind to wait patiently."Timothy;” I mumbled as I sped to the door."I know; girl; bed; sleeping," he groaned. As the door shut I heard him add, "at least he's not dull."I managed not to kill myself tumbling down the stairs in my haste to reach the street. Buffy was waiting and drumming her hands on the steering wheel. I tried the car door; it was locked. A tap on the window earned me a baleful glare. I sighed and fell on my knees."Please," I begged. "Please, please, please let me in the car." I heard a click after ten seconds."You're late," she remarked as we sped away. I hastily put on my seat belt."I apologize," I tried being obsequious."You had better be, damn it," she seethed. 
Oh; I scented arousal; and jealousy. We drove a few blocks in silence. "Who was it?""Are we on the clock?" I countered. Pause."No," she said in a clipped tone."None of your fucking business, then," I growled. "My sex life is none of your concern, Buffy. It is none of your group's concern, so give it a rest.""Or what?" Buffy's eyes narrowed. I wished she would watch the road."Thunderdome, Bitch!" I grinned. Oh, she tried. She tried really hard to stay angry with me."I hate you," she snickered. She pulled out her phone and handed it to me. It was a picture of Buffy, Katrina, Tessa, Desiree and some woman who looked familiar standing, or kneeling, behind a pile of dead animals. All the ladies had bows, knives and camo gear."Does the Audubon Society know about this? I'm pretty sure the World Wildlife Fund would have a freaking stroke," I nodded."Ladies at Havenstone have a passion for killing things," Buffy measured me. "I thought you might want to know.""Why do you use bows?" I questioned. "Don't your boobs get in the way?" Buffy smacked me in the chest; hard. I could have blocked. That would have been counterproductive. No, I grabbed her right boob and gave it a strong squeeze. In retaliation, she hit me again. I grabbed her boob. This went on until we entered the garage. She got in the last hit."We are on the clock now," I notified her. She seemed less than pleased. "Very nice, by the way.""Huh?" Buffy studied."Sorry. Any continuation of this conversation would constitute sexual harassment," I sighed."I am mentally projecting negative emotions your way," Buffy grumbled."I believe the totality of your efforts create a positive outlook for me," I grinned."Have you ever been skydiving?" Buffy dropped out of the blue on me in the elevator ride up."With, or without, a parachute?" I inquired. She blessed me with a feral smile.I hurried to Katrina's office, Buffy a step behind me, rumbling like the jaguar she'd performed illegal dentistry on. 
She wasn't trying to intimidate me. Buffy was trying to mark her territory. I made it to my desk without actually being scent-marked, so I considered the encounter a draw."Have fun last night?" Katrina inquired without looking up."More than any one man should have," I confessed. Further conversation was severed by the arrival of the first of the female 'new hires'. As Katrina started our little meeting, I surreptitiously put in the work order for my suits. I wasn't sneaky enough for Katrina."Are you suffering some sort of head trauma that makes you believe you can avoid participation in this meeting?" she purred."No, Ma; Katrina," I was contrite. "I had to submit a work order for the business suits Buffy and Helena purchased for me last night so I would stop coming to work dressed like a homeless panhandler." That killed four of the girls; they failed to stifle their giggles."Couldn't you have dealt with that on the way in?" Katrina had this glitter in her eyes."Buffy was attempting to subject me to vehicular homicide," I replied. "I was afraid for my life on multiple occasions, up to and including her entry into the garage.""How horrifying for you," Katrina delivered deadpan."I had my hands full, I swear," I placed my hand over my heart."I suspect that was the case," Katrina allowed. "Is there anything else you need to take care of while the rest of us wait on you?""Thank you, yes there is," I smiled, nodded and began typing away."I was being facetious, but then you knew that," Katrina teased. Several girls were openly giggling now.When I finished, I walked around Katrina's desk, went to one knee and lowered my head. Katrina scanned my latest request."Really?" she was intrigued."Yes, Ma'am," I looked up at her. She ran her hands through my hair. "Katrina.""You are trying," Katrina remarked. That could read either way. "Go back to your station before I show you where you really belong," she chuckled. I stood up and fist-pumped."Woo-hoo!" I shouted. 
"I'm going to bed." That finished them off. Even Fabiola cracked a tiny bit and snickered behind her hand.The real joke they were embracing; making me part of their new breeding program; was the punchline to the joke Katrina and I found amusing. I knew the truth. We received our assignments and left the office."How did your date with Rhada go last night?" Paula nudged me."It wasn't a date. It was a corporate appointment," I corrected. "As for the rest; you don't want to know. Please believe me, you don't want to know.""I can make you tell us," Fabiola smirked. The group kept together until I reached Desiree's desk. She was my boss for the day and she was not pleased, or amused.Fabiola saved me."Sister, compel this one to tell us what happened with Rhada last night," Fabiola sneered in Hittite. I played dumb which wasn't hard in my fatigued state. Desiree transferred all of her dislike of me into outrage at Fabiola's breach."Is your blood poisoned?" Desiree seethed. "When they tossed you off the rocks, did you bounce back up, or are you so arrogantly stupid you would flaunt one of our most basic safeguards?""You are only half the woman you could have been," Fabiola shot back.By the way Desiree flew out of her chair that was a deadly insult. I put my body between them and grabbed Desiree by her upper arms."Release me," she yelled, her hate returned its focus to me."You are my boss," I explained calmly. "I most join you in your battles. Is this a battle you truly want to fight, here and now?""Release me at once," Desiree commanded."One of us hiding behind a man," Fabiola mocked Desiree. Daphne punched her. "Ow!""Care to try that on me?" Daphne challenged Fabiola. "My family's prestige has never been called into question." I was starting to think they meant genetic purity."Buffy would not want me to let you come to harm," I whispered to Desiree then released her. It was that hunting photo that made me make that leap. Desiree glared at me. 
A slap followed, but it wasn't all that hard."Do not touch me without permission, Cáel Nyilas," she commanded in a clear voice.

Chilluminati Podcast
Midweek Mini - Hyper-Dimensional Chinchillas

Chilluminati Podcast

Play Episode Listen Later Oct 24, 2024 50:24


It's a weird one in this Midweek Mini as the boys wrap up 2023! Want Minisodes AS THEY RELEASE? Then head over to Patreon and enjoy 50+ more episodes! MERCH - http://www.theyetee.com/collections/chilluminati Special thanks to our sponsors this episode - All you lovely people at Patreon! HTTP://PATREON.COM/CHILLUMINATIPOD Jesse Cox - http://www.youtube.com/jessecox Alex Faciane - http://www.youtube.com/user/superbeardbros Editor - DeanCutty http://www.twitter.com/deancutty Art Commissioned by - http://www.mollyheadycarroll.com

Liss’N Kristi
Episode 62: Beauty Face-Off, Part Two

Liss’N Kristi

Play Episode Listen Later Oct 9, 2024 40:14


In Part Two of our Beauty Face-Off, we explore the wide-ranging arts of shimmer eyeshadow and the power of eyelash primers, sharing expert tips using both common and undiscovered brands. Alissa offers advice for those who are new to the world of makeup, while Kristi demonstrates techniques for brushing up around the eyebrows, the eyes, and the cheeks. She also tells of the time she became Marilyn Monroe. We also share excitement about the upcoming Scott Barnes Masterclass in Dallas, TX, and provide practical tips for blending and mixing products, ensuring you build a beauty routine that suits your lifestyle and budget.
https://www.lissnkristi.com
https://www.amazon.com/shop/schille
https://www.eventbrite.com/e/scott-barnes-master-class-tickets-1036299829687
STORIES:
00:00 Start
00:12 Shimmer, or glow, on the eye
01:35 Eyelash Primer
02:35 You want it to stay where you put it!
04:10 The brush I'll never live without
05:06 Having a hard time with mascara
05:20 The $500 eyelash curler
06:29 A trick on Ebay
07:20 Snatched and sculpted, work from out to in
07:50 The Scott Barnes Masterclass
08:06 Korean "porn star" Primer
09:58 My whole new regimen of health care products
10:55 I went to hairdressing school long enough to learn to do my own hair
12:20 If I can't see it, I'm not worried about it
13:05 Kim Gatlin and Good Christian Bitches
15:15 If you're new to makeup brands
15:56 My skin has improved with these products
16:30 Familiarity with clothing brands - are they late to the makeup game?
18:34 Lashes: Heavier on the edge, because that's where you want the most drama
19:10 Right above the waterline, I go in with a chocolate
21:25 Scott's Masterclass - Downtown Dallas at 5 p.m. Nov. 9 2024
22:02 Q-tip - great invention
23:10 Bronzers which are "buildable"
24:45 I love listening to Jim Nance and Bailey Sarian
26:15 Everybody's moving to Texas
28:50 Merle Norman "plaster for your face"
30:05 Mary Kay has a story
31:15 "Can you get that done in 3 hours?"
32:32 How many bridesmaids were there?
33:18 Everyday staples for runaround makeup
35:00 Mascara and the brush
36:03 Using bronze everywhere
37:22 70s and 80s looks in the future
37:30 Looking for Marilyn Monroe, and finding Kristi

Okay But Did You Know?
Ep. 86 Did You Know We Prefer 90's Movies?

Okay But Did You Know?

Play Episode Listen Later Oct 9, 2024 44:41


Join us as we recap and chat about Bob's Burgers Season 5 Episode 15, "Adventures in Chinchilla-sitting," and the Season 5 episode "The Runaway Club." Did you know the rounds of the bar trivia are all themed? Round one appears to be history, round two appears to be about US geography, round three is technology, and round four's answers are all Kurt Russell movies. Wiki page for the episode: Adventures in Chinchilla-sitting The Runaway Club Links, articles, and videos mentioned in this episode: Join our Book Club and get access to exclusive content on Patreon Follow us on Instagram Follow us on Tiktok --- Support this podcast: https://podcasters.spotify.com/pod/show/obdykpod/support

AXRP - the AI X-risk Research Podcast
37 - Jaime Sevilla on AI Forecasting

AXRP - the AI X-risk Research Podcast

Play Episode Listen Later Oct 4, 2024 104:25


Epoch AI is the premier organization that tracks the trajectory of AI - how much compute is used, the role of algorithmic improvements, the growth in data used, and when the above trends might hit an end. In this episode, I speak with the director of Epoch AI, Jaime Sevilla, about how compute, data, and algorithmic improvements are impacting AI, and whether continuing to scale can get us AGI. Patreon: https://www.patreon.com/axrpodcast Ko-fi: https://ko-fi.com/axrpodcast The transcript: https://axrp.net/episode/2024/10/04/episode-37-jaime-sevilla-forecasting-ai.html   Topics we discuss, and timestamps: 0:00:38 - The pace of AI progress 0:07:49 - How Epoch AI tracks AI compute 0:11:44 - Why does AI compute grow so smoothly? 0:21:46 - When will we run out of computers? 0:38:56 - Algorithmic improvement 0:44:21 - Algorithmic improvement and scaling laws 0:56:56 - Training data 1:04:56 - Can scaling produce AGI? 1:16:55 - When will AGI arrive? 1:21:20 - Epoch AI 1:27:06 - Open questions in AI forecasting 1:35:21 - Epoch AI and x-risk 1:41:34 - Following Epoch AI's research   Links for Jaime and Epoch AI: Epoch AI: https://epochai.org/ Machine Learning Trends dashboard: https://epochai.org/trends Epoch AI on X / Twitter: https://x.com/EpochAIResearch Jaime on X / Twitter: https://x.com/Jsevillamol   Research we discuss: Training Compute of Frontier AI Models Grows by 4-5x per Year: https://epochai.org/blog/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year Optimally Allocating Compute Between Inference and Training: https://epochai.org/blog/optimally-allocating-compute-between-inference-and-training Algorithmic Progress in Language Models [blog post]: https://epochai.org/blog/algorithmic-progress-in-language-models Algorithmic progress in language models [paper]: https://arxiv.org/abs/2403.05812 Training Compute-Optimal Large Language Models [aka the Chinchilla scaling law paper]: https://arxiv.org/abs/2203.15556 Will We Run Out of Data? 
Limits of LLM Scaling Based on Human-Generated Data [blog post]: https://epochai.org/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data Will we run out of data? Limits of LLM scaling based on human-generated data [paper]: https://arxiv.org/abs/2211.04325 The Direct Approach: https://epochai.org/blog/the-direct-approach   Episode art by Hamish Doodles: hamishdoodles.com
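The Chinchilla scaling-law paper linked above boils down to a widely used rule of thumb: compute-optimal training uses roughly 20 tokens per parameter, with training compute C ≈ 6·N·D. A minimal sketch of that allocation, using these rounded rules of thumb rather than the paper's fitted coefficients:

```python
# Chinchilla-style compute-optimal allocation, using the common
# approximations C ~= 6*N*D (training FLOPs) and D ~= 20*N
# (tokens per parameter). These are rounded rules of thumb,
# not the paper's exact fitted constants.

def compute_optimal(c_flops):
    """Return (params, tokens) that spend c_flops compute-optimally."""
    # Substitute D = 20*N into C = 6*N*D  =>  C = 120*N**2
    n = (c_flops / 120) ** 0.5
    d = 20 * n
    return n, d

# Chinchilla's own budget: 6 * 70e9 * 1.4e12 ~= 5.76e23 FLOPs
n, d = compute_optimal(5.76e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")  # ~70B params, ~1.4T tokens
```

Plugging in Chinchilla's own training budget recovers its ~70B parameters and ~1.4T tokens, which is a quick sanity check on the approximation.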

Music Interviews with Rob Herrera on Front Row Live
CHINCHILLA Interview | LA Debut, Making “Little Girl Gone” & ‘Flytrap' EP

Music Interviews with Rob Herrera on Front Row Live

Play Episode Listen Later Oct 2, 2024 21:38


CHINCHILLA sat down with Rob Herrera backstage at her LA Debut show for an interview about the creative process behind viral single “Little Girl Gone” and the recently released ‘Flytrap' EP. Thank you for listening! If you enjoyed and learned something from this podcast please be sure to follow and rate it in order to help us grow in the podcast space. You are also welcome to help support this podcast with a small monthly donation to help sustain future episodes. If you'd like to watch my video interviews, I invite you to Subscribe to my channel at www.YouTube.com/FrontRowLiveEnt Follow Us: @FrontRowLiveEnt | @Robertherrera3 #CHINCHILLA #FrontRowLiveEnt --- Support this podcast: https://podcasters.spotify.com/pod/show/frontrowliveent/support

Juicebox Podcast: Type 1 Diabetes
#1319 Secret Chinchilla

Juicebox Podcast: Type 1 Diabetes

Play Episode Listen Later Sep 30, 2024 72:02


Maddie, a 19-year-old college student with PCOS, acid reflux, low B12, low iron, and reactive hypoglycemia. Screen It Like You Mean It Eversense CGM Learn about the Medtronic Champions This BetterHelp link saves 10% on your first month of therapy Try delicious AG1 - Drink AG1.com/Juicebox I Have Vision Use code JUICEBOX to save 40% at Cozy Earth  JUICE CRUISE 2025 Eat Hungryroot Get Gvoke HypoPen CONTOUR NextGen smart meter and CONTOUR DIABETES app Learn about the Dexcom G6 and G7 CGM Go tubeless with Omnipod 5 or Omnipod DASH * Get your supplies from US MED  or call 888-721-1514 Learn about Touched By Type 1 Take the T1DExchange survey *The Pod has an IP28 rating for up to 25 feet for 60 minutes. The Omnipod 5 Controller is not waterproof.  How to listen, disclaimer and more Apple Podcasts> Subscribe to the podcast today! The podcast is available on Spotify, Google Play, iHeartRadio, Radio Public, Amazon Music and all Android devices The Juicebox Podcast is a free show, but if you'd like to support the podcast directly, you can make a gift here or buy me a coffee. Thank you! Disclaimer - Nothing you hear on the Juicebox Podcast or read on Arden's Day is intended as medical advice. You should always consult a physician before making changes to your health plan.  If the podcast has helped you to live better with type 1 please tell someone else how to find the show and consider leaving a rating and review on Apple Podcasts. Thank you! The Juicebox Podcast is not a charitable organization.

Jubal Phone Pranks from The Jubal Show
You Left Your Chinchilla in My Uber

Jubal Phone Pranks from The Jubal Show

Play Episode Listen Later Sep 27, 2024 4:07 Transcription Available


➡︎ Jubal Phone Pranks on The Jubal ShowNeed someone to feel the wrath of a Jubal Fresh character? He'll call whoever you want and prank them... so hard. It's funny. Submit yours here: https://forms.gle/mgACgtLBP3SPcyRR7======This is just a tiny piece of The Jubal Show. You can find every podcast we have, including the full show every weekday right here…➡︎ https://thejubalshow.com/podcasts======The Jubal Show is everywhere, and also these places: Website ➡︎ https://thejubalshow.com  Instagram ➡︎ https://instagram.com/thejubalshow  X/Twitter ➡︎ https://twitter.com/thejubalshow  Tiktok ➡︎ https://www.tiktok.com/@the.jubal.show YouTube ➡︎ https://www.youtube.com/@JubalFresh  ======Meet The Jubal Show Cast:====== Jubal Fresh - https://jubalshow.com/featured/jubal-fresh/  Nina - https://thejubalshow.com/featured/ninaontheair/ Victoria - https://jubalshow.com/featured/victoria-ramirez/  Brad Nolan - https://jubalshow.com/featured/brad-nolan/  Sharkey - https://jubalshow.com/featured/richard-sharkey/ See omnystudio.com/listener for privacy information.

Phone Pranks with Jubal Fresh
You Left Your Chinchilla in My Uber

Phone Pranks with Jubal Fresh

Play Episode Listen Later Sep 27, 2024 4:07 Transcription Available


➡︎ Jubal Phone Pranks on The Jubal ShowNeed someone to feel the wrath of a Jubal Fresh character? He'll call whoever you want and prank them... so hard. It's funny. Submit yours here: https://forms.gle/mgACgtLBP3SPcyRR7======This is just a tiny piece of The Jubal Show. You can find every podcast we have, including the full show every weekday right here…➡︎ https://thejubalshow.com/podcasts======The Jubal Show is everywhere, and also these places: Website ➡︎ https://thejubalshow.com  Instagram ➡︎ https://instagram.com/thejubalshow  X/Twitter ➡︎ https://twitter.com/thejubalshow  Tiktok ➡︎ https://www.tiktok.com/@the.jubal.show YouTube ➡︎ https://www.youtube.com/@JubalFresh  ======Meet The Jubal Show Cast:====== Jubal Fresh - https://jubalshow.com/featured/jubal-fresh/  Nina - https://thejubalshow.com/featured/ninaontheair/ Victoria - https://jubalshow.com/featured/victoria-ramirez/  Brad Nolan - https://jubalshow.com/featured/brad-nolan/  Sharkey - https://jubalshow.com/featured/richard-sharkey/ See omnystudio.com/listener for privacy information.

Animal Party -  Dog & Cat News, Animal Facts, Topics & Guests - Pets & Animals on Pet Life Radio (PetLifeRadio.com)

Deborah Wolfe talks about Taylor Swift's 3 cats, what old pets need & trick training tips for dogs and cats! See Deb Wolfe-Pet Expert on YouTube for cat and dog training demos and how to test puppy temperament/personality with 2 cute 8-week-old rescued pups. Where does your cat come from? The last 3 original cats: Siamese, Abyssinian, Chinchilla. ‘Karma is a cat purring on my lap because it loves me!!!' Taylor Swift. A little Taylor talk about Meredith Grey, the Scottish Fold you might spot in Deadpool 2. Deb says the fastest learners are Standard Poodles and trick cats. Check out Deb Wolfe-Pet Expert on Facebook to see a fantastic Poodle photo shoot. Please send your text, type or voice clip pet questions, problems or guest suggestions to deb@petliferadio.com EPISODE NOTES: Taylor Swift Loves Her Cats!

Dice Company
Small Embers: Chapter 51 - Schrodinger's Chinchilla

Dice Company

Play Episode Listen Later Sep 17, 2024 93:34


In the fifty-first chapter of Small Embers... stranded in Roanoke, with seemingly only one way to go, the gang prepare to enter the ruins of the Citadel... Small Embers is a Dice Company Narrative Adventure Audio Podcast, using D&D rules as a framework in this Actual Play variation. For the best listening experience, please check out our exclusive Patreon: https://www.patreon.com/Dicecompany  We also have a Dice Company Universe Discord server for listeners https://discord.gg/yr69WZAEaD  For more information, please visit https://dicecompanypodcast.com/ or check out our Link Tree: https://linktr.ee/dicecompany    Edited by: TC Patrick Starring (with Special Thanks to): Richard Godden (https://richardgoddenvoiceactor.com)   Music (Thanks to): Intro Theme (Dynamic Intro) by Mykola Sosin Medieval Market by Tabletop Sounds Deep Relaxation by SamuelFrancisJohnson Cave of Time by Tabletop Sounds Epic Battle Song by Rolandomat Fantasy Epic by Life Saturation Life of the Celts by Free-to-Use-Audio Mysterious Journey by Free-to-Use-Audio A Special Thank You to our Artwork Team: Sarah Isabella - Background Art Joey (The Sleepy Pencil) - Character Artwork https://www.etsy.com/uk/shop/ThesleepypencilArt Ben Lee (Foundation) - Merchandise Designs Extra Thanks to: Sound & music by Syrinscape (we don't always use them, but we highly recommend checking them out and especially their subscription): https://syrinscape.com/ "Because Epic Games Need Epic Sound"  

The Gossip Gays
Sloppy Seconds: There's a Chinchilla in the hotel lobby!

The Gossip Gays

Play Episode Listen Later Sep 2, 2024 14:59


In today's Sloppy Seconds, Sam and Lillie-Mae have a chat with Chinchilla in a hotel lobby ahead of her performance at Manchester Pride. They talk about how queer women are taking over the music industry, what it's like to have a song go viral on TikTok when you're an independent artist, and how the name Chinchilla came about.Want to be a Gossip Goddess or a Question Queen?Get involved…Send us your crazy and dirty confessions! They could be your own saucy tales or the goss you have on your friends! Send them in here: https://forms.gle/5uwNGBb9QAkgXKKz5 or you can even get in touch via Whatsapp! Texts/ voice notes, go wild! If you wish to remain anon, just say. We will never out you and can even disguise your voice. Whatsapp the show: https://wa.me/message/NJKXUPHEB7AAI1 Hosted on Acast. See acast.com/privacy for more information.

Misterios
Memoria Negra: El Caso Manuel Chinchilla · El Crimen De Porto Cristo

Misterios

Play Episode Listen Later Aug 26, 2024 96:41


The year is 1992. The discovery of a decapitated body on a cliff along the Mallorcan coast is the starting point of a police investigation that will uncover the reality hidden behind a murder carried out by a drug-trafficking network. July 24, 2014: Ángel Ábad is killed by two gunshots inside the restaurant he ran in the town of Porto Cristo, Mallorca. A crime committed by Arnau Matas in revenge for an alleged extramarital affair the victim was having with his wife.

TNT Radio
Senator Gerard Rennick & Shane Healey on The Chris Smith Show - 08 August 2024

TNT Radio

Play Episode Listen Later Aug 8, 2024 55:35


GUEST 1 OVERVIEW: Senator Gerard Rennick was elected to the Senate for Queensland in the Parliament of Australia in 2019. He was born and raised on a property outside Chinchilla, on the Darling Downs. He has a deep appreciation of the land, its people and the challenges they face. Gerard has extensive experience in senior finance roles across a range of industries, business types and countries. GUEST 2 OVERVIEW: Shane Healey is a terrorism and youth justice expert. He's a former Australian Defence Force Special Operations Command intelligence operator, and private military contractor. Shane has been deployed twice to Afghanistan (2010/2011 and 2012) as part of Task Force 66 where he provided insurgent threat assessments. When in Australia he was part of the Tactical Assault Group – East and West where he was involved in several real time terrorist incidents.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you see this in time, join our emergency LLM paper club on the Llama 3 paper! For everyone else, join our special AI in Action club on the Latent Space Discord for a special feature with the Cursor cofounders on Composer, their newest coding agent!

Today, Meta is officially releasing the largest and most capable open model to date, Llama3-405B, a dense transformer trained on 15T tokens that beats GPT-4 on all major benchmarks. The 8B and 70B models from the April Llama 3 release have also received serious spec bumps, warranting the new label of Llama 3.1. If you are curious about the infra / hardware side, go check out our episode with Soumith Chintala, one of the AI infra leads at Meta. Today we have Thomas Scialom, who led Llama2 and now Llama3 post-training, so we spent most of our time on pre-training (synthetic data, data pipelines, scaling laws, etc) and post-training (RLHF vs instruction tuning, evals, tool calling).

Synthetic data is all you need

Llama3 was trained on 15T tokens, 7x more than Llama2 and with 4 times as much code and 30 different languages represented. But as Thomas beautifully put it: “My intuition is that the web is full of s**t in terms of text, and training on those tokens is a waste of compute.” “Llama 3 post-training doesn't have any human written answers there basically… It's just leveraging pure synthetic data from Llama 2.” While it is well speculated that the 8B and 70B were "offline distillations" of the 405B, there are a good deal more synthetic data elements to Llama 3.1 than expected.
The paper explicitly calls out:
* SFT for Code: 3 approaches to synthetic data for the 405B bootstrapping itself with code execution feedback, programming language translation, and docs backtranslation.
* SFT for Math: The Llama 3 paper credits the Let's Verify Step By Step authors, who we interviewed at ICLR.
* SFT for Multilinguality: "To collect higher quality human annotations in non-English languages, we train a multilingual expert by branching off the pre-training run and continuing to pre-train on a data mix that consists of 90% multilingual tokens."
* SFT for Long Context: "It is largely impractical to get humans to annotate such examples due to the tedious and time-consuming nature of reading lengthy contexts, so we predominantly rely on synthetic data to fill this gap. We use earlier versions of Llama 3 to generate synthetic data based on the key long-context use-cases: (possibly multi-turn) question-answering, summarization for long documents, and reasoning over code repositories, and describe them in greater detail below"
* SFT for Tool Use: trained for Brave Search, Wolfram Alpha, and a Python Interpreter (a special new ipython role) for single, nested, parallel, and multiturn function calling.
* RLHF: DPO preference data was used extensively on Llama 2 generations. This is something we partially covered in RLHF 201: humans are often better at judging between two options (i.e. which of two poems they prefer) than creating one from scratch. Similarly, models might not be great at creating text, but they can be good at classifying its quality.

Last but not least, Llama 3.1 received a license update explicitly allowing its use for synthetic data generation. Llama2 was also used as a classifier for all pre-training data that went into the model. It labelled data both by quality, so that bad tokens were removed, and by type (i.e. science, law, politics) to achieve a balanced data mix.
Tokenizer size matters

The token vocabulary of a model is the collection of all tokens the model uses. Llama2 had a 32,000-token vocab, GPT-4 has 100,000, and 4o went up to 200,000. Llama3 went up 4x to 128,000 tokens. You can find the GPT-4 vocab list on Github. This is something that people gloss over, but there are many reasons why a large vocab matters:
* More tokens allow the model to represent more concepts, and so be better at understanding the nuances.
* The larger the tokenizer, the fewer tokens you need for the same amount of text, extending the perceived context size. In Llama3's case, that's ~30% more text due to the tokenizer upgrade.
* With the same amount of compute you can train more knowledge into the model, as you need fewer steps.

The smaller the model, the larger the impact the tokenizer size will have on it. You can listen at 55:24 for a deeper explanation.

Dense models = 1 Expert MoEs

Many people on X asked “why not MoE?”, and Thomas' answer was pretty clever: dense models are just MoEs with 1 expert :)

[00:28:06]: I heard that question a lot, different aspects there. Why not MoE in the future? The other thing is, I think a dense model is just one specific variation of the model for an hyperparameter for an MOE with basically one expert. So it's just an hyperparameter we haven't optimized a lot yet, but we have some stuff ongoing and that's an hyperparameter we'll explore in the future.

Basically… wait and see!

Llama4

Meta already started training Llama4 in June, and it sounds like one of the big focuses will be around agents. Thomas was one of the authors behind GAIA (listen to our interview with Thomas in our ICLR recap) and has been working on agent tooling for a while with things like Toolformer. Current models have “a gap of intelligence” when it comes to agentic workflows, as they are unable to plan without the user relying on prompting techniques and loops like ReAct, Chain of Thought, or frameworks like Autogen and Crew. That may be fixed soon?
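The "~30% more text" figure above is just a tokens-per-character ratio. A toy calculation of the effect on effective context follows; the chars-per-token numbers here are illustrative assumptions for the sketch, not measured values for the actual Llama tokenizers:

```python
# Illustrative effect of tokenizer vocabulary size on effective context.
# A bigger vocab compresses text into fewer tokens, so the same token
# window holds more text. The chars-per-token figures below are assumed
# for illustration; real values depend on the tokenizer and the corpus.

def effective_context_chars(context_tokens, chars_per_token):
    return context_tokens * chars_per_token

old = effective_context_chars(8192, 3.5)   # smaller vocab: assume 3.5 chars/token
new = effective_context_chars(8192, 4.55)  # larger vocab: assume ~30% better compression
print(f"extra text in the same window: {new / old - 1:.0%}")  # -> 30%
```

The same ratio also explains the compute point in the list above: fewer tokens for the same text means fewer training steps per unit of knowledge.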

The Nonlinear Library
LW - Musings on LLM Scale (Jul 2024) by Vladimir Nesov

The Nonlinear Library

Play Episode Listen Later Jul 6, 2024 5:09


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Musings on LLM Scale (Jul 2024), published by Vladimir Nesov on July 6, 2024 on LessWrong. In a recent interview, Dario Amodei claimed that cost of training is (starting with models already available) Right now, $100 million. There are models in training today that are more like a $1 billion. I think if we go to $10 or a $100 billion, and I think that will happen in 2025-2026, maybe 2027, ... (Epistemic status: Fermi estimates, 8 is approximately 10 which is greater than 9.) Assuming $40,000 per H100 and associated infrastructure in a datacenter, $1 billion gives 25K H100s, which matches the scale of for example Meta's new training clusters and requires about 40MW of power. At $2 per hour, training time cost of 25K H100s reaches $100 million in 80 days, which seems reasonable if on the short side for a production training run. The cost of time matches $1 billion at 2.3 years. An H100 (SXM) is rated for 2e15 FLOP/s in BF16 (my impression is this is usually stable out of the box). This becomes 4e15 FLOP/s in FP8, which seems practical if done carefully, no degradation in pre-training loss compared to FP32. The $100 million run then translates to 9e25 FLOPs at 30% utilization in BF16, or 2e26 FLOPs in FP8. (For some reason this SemiAnalysis estimate is 2x lower, peak 2e20 FLOP/s for 100,000 H100s at FP8, possibly the sparsity footnote in H100 specification for the 4000 teraFLOP/s figure is the culprit.) This is maybe 10x original GPT-4, estimated at 2e25 FLOPs. The leading models (Claude 3.5 Sonnet, Gemini 1.5 Pro, GPT-4 Omni) cost $15-20 per million output tokens, compared to $75-120 for once-frontier models Claude 3 Opus, Gemini 1 Ultra, original GPT-4. 
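The Fermi estimates above chain together cleanly; a short sketch that re-derives them, with every input taken from the post's own assumptions:

```python
# Re-derive the post's Fermi estimates for a ~$100M H100 training run.
# All inputs (prices, utilization, run length) come from the post itself.

gpus        = 1_000_000_000 / 40_000     # $1B cluster at $40K per H100 -> 25,000 GPUs
hourly_cost = gpus * 2                   # $2 per H100-hour
days        = 80
run_cost    = hourly_cost * 24 * days    # cost of time for the run
print(f"run cost: ${run_cost / 1e6:.0f}M")  # $96M, i.e. roughly $100M

bf16_flops  = 2e15                       # per-H100 dense BF16 FLOP/s
utilization = 0.30
total_flops = gpus * bf16_flops * utilization * days * 86_400
print(f"training compute: {total_flops:.1e} FLOPs")  # ~1e26, vs the post's 9e25
```

The small gap between ~1.0e26 here and the post's 9e25 is just rounding in the run length, in the spirit of the post's own "8 is approximately 10" epistemic status.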
Given a Chinchilla-optimal model, if we reduce its active parameters 3x and increase training compute 3x, we get approximately the same performance, but it's now at least 3x cheaper for inference. This increases data 10x, which, if everything else fails, can be obtained by repeating the old data, giving 30x overtraining in compute compared to what is Chinchilla optimal for the smaller model. Llama-3-70b is overtrained 10x and Llama-3-8b 90x, though they don't use MoE and their performance is lower than for MoE models with the same active parameters and training cost.

Beyond $100 million

The current frontier models are overtrained on compute that could enable even smarter models. Compute is increasing, but it mostly goes to reduction of inference cost, and only a little bit to capabilities. Why aren't any of the three labs directing the compute to train/release models optimized for maximum capability? Possibly costs are already such that training at too many parameter/data tradeoff points won't be done; instead they choose the option that's currently most useful and spend the rest on experiments that would make imminent larger-scale runs better. Even OpenAI's next frontier model, in training as of May 28, might just be using compute comparable to what GPT-4 Omni required, not OOMs more, and it could still get much more capable if allowed to be more expensive for inference.

To do a run at $1 billion in cost of time, even 100K H100s would need 200 days (powered by 150MW). There probably aren't any individual clusters of this scale yet (which would cost about $4 billion). The Gemini 1.0 report stated that "Training Gemini Ultra used a large fleet of TPUv4 accelerators owned by Google across multiple datacenters. ... we combine SuperPods in multiple datacenters using Google's intra-cluster and inter-cluster network. Google's network latencies and bandwidths are sufficient to support the commonly used synchronous training paradigm, exploiting model parallelism within superpods and data-parallelism across superpods." This, together with Amodei's claim of current $1 billion training runs and individual 100K H100 clusters still getting built ...
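The overtraining arithmetic above follows from the standard Chinchilla relations: training compute C ≈ 6ND (N active parameters, D training tokens), with compute-optimal tokens scaling linearly in N, so compute-optimal C scales as N². The 3x factors and the $2 per GPU-hour rate are the post's figures; the rest is algebra, sketched here as a check rather than the post's exact calculation:

```python
# Chinchilla relations: C ~ 6 * N * D, with compute-optimal data D*(N)
# scaling linearly in active parameters N (hence optimal C ~ N^2).
# Start from a Chinchilla-optimal model, shrink N 3x, grow C 3x.

param_shrink = 3.0   # N -> N / 3
compute_grow = 3.0   # C -> 3 * C

# From D = C / (6N): tokens grow by 3 (more compute) * 3 (fewer params).
data_factor = compute_grow * param_shrink                 # 9x; the post rounds to 10x

# Optimal compute for the smaller model is 9x lower (C* ~ N^2), so 3x more
# compute overtrains the smaller model by:
compute_overtrain = compute_grow * param_shrink ** 2      # 27x; the post says ~30x

# The "Beyond $100 million" check: a $1B run on 100K H100s at $2/GPU-hour.
days_for_1b_run = 1e9 / (100_000 * 2.0 * 24)              # ~208 days; the post says 200

print(f"data grows {data_factor:.0f}x, compute overtraining {compute_overtrain:.0f}x")
print(f"$1B on 100K H100s lasts {days_for_1b_run:.0f} days")
```

The same relations give Llama-3-70b's 10x and Llama-3-8b's 90x overtraining figures when their published token counts are compared against the ~20-tokens-per-parameter Chinchilla rule of thumb.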

No Such Thing As A Fish
536: No Such Thing As A Soaring Chinchilla

No Such Thing As A Fish

Play Episode Listen Later Jun 20, 2024 52:37


Live from the Nerdland Festival, Andrew, James, Dan and Lieven Scheire discuss coughing crocs, cunning computers, testing toilets, and tall tales about tails. Visit nosuchthingasafish.com for news about live shows, merchandise and more episodes. Join Club Fish for ad-free episodes and exclusive bonus content at apple.co/nosuchthingasafish or nosuchthingasafish.com/patreon

Joey and Lauren in the Morning
The Phone Jenks - Tinkles The Chinchilla!

Joey and Lauren in the Morning

Play Episode Listen Later Jun 18, 2024 5:35


Joey is missing his Chinchilla, named Tinkles, and Lucy needs to help find him!

Can You Don't?
Can You Don't? | Sticker. Pee Slap. Chinchilla. Boat Fence.

Can You Don't?

Play Episode Listen Later Jun 5, 2024 86:27


No matter how old or wise we get, we are never safe from doing some idiotic shit. Let's talk about that, accidentally giving a millionaire a dollar thinking he was homeless, finding a gun while working at PetSmart, pissing off the HOA by painting a picture of your boat on your fence, and more on today's episode of Can You Don't?!

*** Wanna become part of The Gaggle and access all the extra content on the end of each episode PLUS tons more?! Our Patreon page is LIVE! This is the biggest way you can support the show. It would mean the world to us: http://patreon.com/canyoudontpodcast ***

New Episodes every Wednesday at 12pm PST
Watch on Youtube: https://youtu.be/ij7L_2noxUk
Send in segment content: heyguys@canyoudontpodcast.com
Merch: http://canyoudontpodcast.com
Merch Inquiries: store@canyoudontpodcast.com
FB: http://facebook.com/canyoudontpodcast
IG: http://instagram.com/canyoudontpodcast
YouTube Channel: https://bit.ly/3wyt5rt
Official Website: http://canyoudontpodcast.com
Custom Music Beds by Zach Cohen
Fan Mail:
Can You Don't?
PO Box 1062
Coeur d'Alene, ID 83816
Hugs and Tugs.
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Business Daily
Business Daily meets: Laura Chinchilla

Business Daily

Play Episode Listen Later May 23, 2024 17:29


Laura Chinchilla was the first woman to serve as president of Costa Rica and one of the first in Latin America.

We talk to her about what that journey to the top job in her country was like, and the challenges facing Latin America - from corruption to crime, the drugs trade, migration, the brain drain, poor governance and low economic productivity. And we consider some of the potential solutions to those problems - solutions that could help Latin America bring prosperity to its people.

(Picture: Laura Chinchilla Miranda, former President of Costa Rica, speaking at a conference. Credit: Getty Images)

Presented and produced by Gideon Long

Just the Zoo of Us
232: Chinchilla & Walrus

Just the Zoo of Us

Play Episode Listen Later Mar 20, 2024 72:21


Ellen softens up to the chinchilla & Christian breaks the ice with the walrus. We discuss Jackie Chan Adventures, fur slips, dust baths, tusks, whiskers, bells & whistles, Odysseus, and Lewis Carroll. Goo goo g'joob.

Links:
Get involved with the MaxFunDrive & check out this year's awesome gifts & bonus content!
For more information about us & our podcast, head over to our website!
Follow Just the Zoo of Us on Threads, Facebook, Instagram & Discord!
Follow Ellen on TikTok!

MaxFunDrive ends on March 29, 2024! Support our show now by becoming a member at maximumfun.org/join.