Podcasts about mle

  • 189 podcasts
  • 356 episodes
  • 46m average duration
  • 1 episode every other week
  • Latest: Apr 18, 2025


Latest podcast episodes about mle

The Power's Point Podcast
Stomach Stretchers

Apr 18, 2025 · 42:45 · Transcription available


What drives someone to consume 76 hot dogs in 10 minutes or choke down eight pounds of pure mayonnaise? The fascinating and sometimes disturbing world of competitive eating exists at the intersection of sport, spectacle, and sheer human determination.

We dive deep into this peculiar subculture, which has evolved dramatically over the years. From the famous Nathan's Hot Dog Eating Contest to obscure challenges involving beef tongue and jalapeño peppers, these competitions push human bodies to their absolute limits. Joey Chestnut and Takeru Kobayashi emerge as the titans of the industry, with Chestnut holding an astonishing number of world records across various food categories.

The science behind competitive eating proves surprisingly complex. We explore how these gastronomic athletes train their bodies through stomach-stretching techniques, drinking gallons of water before events, and even learning to partially dislocate their jaws. These aren't just people with big appetites; they're dedicated competitors who approach eating with strategic precision.

What began as casual county-fair entertainment has transformed into a global phenomenon with significant cash prizes. The Wing Bowl offers $50,000 to its champion, while most competitions pay between $2,500 and $10,000 for first place. For those at the top of the field, competitive eating can become a legitimate career path, though one that raises serious questions about long-term health consequences.

As we debate which food challenges we might personally attempt, from White Castle sliders to deviled eggs, we're left wondering: is competitive eating an impressive display of human potential, or simply a grotesque spectacle? Whatever your take, one thing's certain: it's impossible to look away.

Join us for this eye-opening exploration of what happens when eating becomes sport, and discover why these food warriors continue to push the boundaries of what we thought humanly possible.

Thank you for giving us a go. We hope you stick with us, as we have some really amazing guests on, and hope you have a laugh or two, but no more than three.

Support the show

Thank you for joining us on today's show. As always, we appreciate each and every one of you! Talk to you soon.
X: @PodcastScott
IG: Powers31911

The Power's Point Podcast
The 27 Club

Apr 8, 2025 · 35:41 · Transcription available


What dark forces connect legendary musicians who died at exactly 27? The mysterious "27 Club" includes some of music's most brilliant minds - Jimi Hendrix, Janis Joplin, Jim Morrison, Kurt Cobain, and Amy Winehouse - all claimed at the same haunting age.

Our hosts dive deep into the origins of this phenomenon, tracing it back to blues legend Robert Johnson, whose supernatural guitar skills spawned myths of a deal with the devil at the crossroads. When Johnson mysteriously died at 27 in 1938, it began what would become a disturbing pattern.

The conversation takes particularly fascinating turns when examining recent claims about Kurt Cobain's death. A new witness claims to have been present when Cobain was murdered, contrary to the official suicide ruling. We explore the evidence suggesting Cobain's suicide note may have been partially forged, and the suspicious timing of Hole bassist Kristen Pfaff's death shortly afterward - also at 27.

Beyond the sensational theories, we examine what makes this phenomenon so captivating. Is it merely confirmation bias, focusing on famous people who happened to die at the same age? Or does the intense pressure of fame, coupled with substance abuse and the "live fast, die young" lifestyle, create a perfect storm for vulnerable young artists? We even discuss the bizarre "white lighter curse": the superstition that white lighters were found at multiple 27 Club death scenes.

Whether you believe in cosmic connections or statistical coincidences, this episode offers a thoughtful exploration of creativity, fame's dark side, and our need to find meaning in tragedy. Email us at powerspointpodcast@yahoo.com with your own theories or suggestions for future topics!

Thank you for giving us a go. We hope you stick with us, as we have some really amazing guests on, and hope you have a laugh or two, but no more than three.

Support the show

Thank you for joining us on today's show. As always, we appreciate each and every one of you! Talk to you soon.
X: @PodcastScott
IG: Powers31911

Major League Eventing Podcast
Valerie Pride 5* Eventer & FEI Level 3 Eventing Dressage Judge

Mar 19, 2025 · 57:05


Karen and Robby welcome back 5* eventer and FEI Level 3 eventing dressage judge Valerie Pride. Valerie was last on the MLE podcast on October 21, 2020, so there was a lot of catching up to do. Valerie most recently judged at the Maryland 5* and explains everything the Ground Jury has to do - it's not just judging dressage. She also has some exciting news that will take her on a European tour this fall to judge the European Championships at Blenheim, the 7-year-old championships at Cornbury, and then Burghley. Valerie also competes and runs her business, Blue Clover Eventing, and she explains how she is able to compete, run her business, and judge. We also ask her some dressage questions that we hope everyone can learn something from.

PC: Shannon Brinkman

To follow Valerie:
https://www.blueclovereventing.com/about-valerie/
https://www.instagram.com/blueclovereventing/?hl=en
https://www.facebook.com/blueclovereventing

Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/

Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE

Check out the Major League Eventing store!
https://www.majorleagueeventing.com/shop

Millásreggeli • Gazdasági Muppet Show
Millásreggeli podcast: battery factories running down, logistics flourishing - 2025-03-18, 8 o'clock hour

Mar 18, 2025


Tuesday, March 18, 2025, 8-9 a.m.

Running down: battery manufacturing in a nosedive. The KSH (Hungarian Central Statistical Office) has published detailed January production figures for Hungarian industry. The data show the industry's drawn-out crisis continuing, with output down 3.9 percent year over year. Of all sectors, battery manufacturing, hailed only a few years ago as the driving force of the Hungarian economy, performed worst: production volume in the statistical category "manufacture of batteries and dry cells" shrank 46 percent in a year. But is it a good idea to manufacture batteries in Hungary at all? Guest: Dr. Dóra Győrffy, professor of economics at Corvinus University.

ÉSZJÁTÉK (Mind Game): Logistics in eastern Hungary could flourish. The eastern Hungarian region, particularly the areas around Nyíregyháza, Debrecen, and Miskolc, could get a new boost from logistics development. An improvement in the Russian-Ukrainian war situation would contribute most to this, but the road-network upgrades and industrial investments expected in the region would also support an upswing. Guest: Dr. Zoltán Doór, president of the Hungarian Logistics Association (MLE).

ARANYKÖPÉS (Quote of the Day): "Of all the plants, the tomato seems the most human: eager and fragile and prone to rot." John Updike, American writer and poet (1932)

Major League Eventing Podcast
Catching Up with Ariel Grald

Mar 12, 2025 · 42:01


Karen and Robby catch up with previous guest, 5* eventer Ariel Grald. Since Ariel was last on the MLE podcast in 2019, she has placed 3rd at Luhmühlen and 11th individually at the World Championships in Pratoni with her longtime partner Leamore Master Plan. At the 2024 Kentucky 5*, Simon suffered an injury; he is now living the semi-retired life, fuzzy and chunky, with hopes of slowly bringing him back to some sort of competing - maybe in the show jumping ring. Even with Simon sidelined, Ariel is busy with several top horses that she has big plans for this spring and fall. We hope you enjoy hearing everything Ariel has going on!

PC: Shannon Brinkman

Follow Ariel's journey:
https://www.instagram.com/arielmgrald/?hl=en
https://www.facebook.com/amgequestrian/

Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/

Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE

Check out the Major League Eventing store!
https://www.majorleagueeventing.com/shop

Bax of All Trades
How To Break Into Tech (Economics to NVIDIA Engineer) | BoaT #24

Feb 14, 2025 · 75:18


In this episode of BoaT, I interview my good friend Ishan Dhanani, who is an MLE (machine learning engineer) working on inference at NVIDIA. Less than two years ago, Ishan graduated with an economics degree from Texas A&M. Since then, he has dropped out of Columbia, been acquired twice (once by Brev.dev, once by NVIDIA), and moved across the country. We discuss how you can become technical, the future of AI, and much more. Enjoy!

FOLLOW ISHAN:
https://x.com/0xishand

CONNECT WITH ME

The Best Advice Show
Try These Witchy, Water-based Maneuvers to Improve Your Life with Dr. MLE

Feb 12, 2025 · 8:36


Emily Carr/Dr. MLE is a Wellness Witch, a professional poet, amateur tarot reader, joyous revolutionary arts educator, and fitness coach. Subscribe to her newsletter at https://soulfloss.substack.com/

LISTEN: You Should Buy Mice and Other Transformative Advice from Lucy Anderton

Fill out the first-ever TBAS listener survey to help Zak get to know you better and to enter the drawing to win a custom-designed shirt by Zak and his daughter at https://forms.gle/f1HxJ45Df4V3m2Dg9

Help Zak continue making this show by becoming a Best Advice Show patron at https://www.patreon.com/bestadviceshow

Call Zak on the advice show hotline at 844-935-BEST

Share this episode on IG @BestAdviceShow

Radio Campus France
MLE | Campus Club, mixtape | Rhythm Section

Feb 10, 2025 · 55:34


Mixtape by MLE | Campus Club. MLE is the label manager of the Peckham, London-based label Rhythm Section.

"With a youth surrounded by endless grass, practical boredom and nothing to do, MLE spent her time escaping to the virtual reality of the internet, discovering the world of electronic music and its digital culture. Musical interests spurred by learning piano, clarinet and drums from a young age, her musical understanding informs an eclectic taste from Turkish disco folk to New Yorkian left field and clanging electro. Combining these interests with a warrior spirit for breaking down gender imbalance within dance music, dope synth noodler, and party instigator. MLE's dedication is unmatched and will only continue to grow as an artist, selector and booty shaking motivator."

https://www.wearerhythmsection.com/artist-page-mle
SC: https://soundcloud.com/can_u_feel_ml3 @can_u_feel_ml3
IG: https://www.instagram.com/emily.mle

CAMPUS CLUB, the show: Keeping close to the electronic cultures shaping today's music, in France and internationally, the Radio Campus France network gives carte blanche to the artists and labels scouting new talent. Airing regularly on more than 30 stations and as a podcast, CAMPUS CLUB brings you an exclusive mix by a DJ or producer from the French or international scene every week. All the mixtapes: www.radiocampus.fr/emission/campus-club-mixtapes

RADIO CAMPUS FRANCE: Radio Campus France is the network of free, community, student, and local radio stations, federating 30 stations across France. FOLLOW US: www.radiocampus.fr

Campus Club
MLE | Campus Club, mixtape | Rhythm Section Berlin

Feb 10, 2025 · 55:31


Mixtape by MLE | Campus Club, special Rhythm Section Berlin. MLE is the label manager of the Peckham, London-based label Rhythm Section.

"With a youth surrounded by endless grass, practical boredom and nothing to do, MLE spent her time escaping to the virtual reality of the internet, discovering the world of electronic music and its digital culture. Musical interests spurred by learning piano, clarinet and drums from a young age, her musical understanding informs an eclectic taste from Turkish disco folk to New Yorkian left field and clanging electro. Combining these interests with a warrior spirit for breaking down gender imbalance within dance music, dope synth noodler, and party instigator. MLE's dedication is unmatched and will only continue to grow as an artist, selector and booty shaking motivator."

https://www.wearerhythmsection.com/artist-page-mle
SC: https://soundcloud.com/can_u_feel_ml3 @can_u_feel_ml3
IG: https://www.instagram.com/emily.mle

CAMPUS CLUB, the show: Keeping close to the electronic cultures shaping today's music, in France and internationally, the Radio Campus France network gives carte blanche to the artists and labels scouting new talent. Airing regularly on more than 30 stations and as a podcast, CAMPUS CLUB brings you an exclusive mix by a DJ or producer from the French or international scene every week. All the mixtapes: www.radiocampus.fr/emission/campus-club-mixtapes

RADIO CAMPUS FRANCE: Radio Campus France is the network of free, community, student, and local radio stations, federating 30 stations across France. FOLLOW US: www.radiocampus.fr

Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information.

The Dane Moore NBA Podcast
The Julius Randle Player Option Conundrum + A Win In Orlando w/ Kyle Theige

Jan 10, 2025 · 98:35


On today's show, Dane is joined by Kyle Theige to discuss the looming question of what is going to happen with Julius Randle's player option. Dane and Kyle take a look at the Wolves' salary cap sheet and detail how much the team has to spend if Randle opts in, and how much added financial flexibility it gains if he opts out. After the salary cap talk, Dane and Kyle also discuss themes from the win in Orlando and the ripple effects of the starting lineup change. Specific topics and timestamps below...

- Wolves cap sheet update + the two paths of the Randle player option (2:00)
- What position to use the MLE (mid-level exception) on in free agency, if Randle opts out/isn't on the team (15:00)
- Big games for all 3 bigs in an ideal Orlando matchup (36:00)
- Finch's applause for Ant finding consistent effectiveness in games following his comments (45:00)
- Ripples of the starting lineup change and how they impact DiVincenzo and McDaniels specifically (65:00)

If you'd like to support our partners...
- Try out our new sponsor WtrMln Wtr at Whole Foods or Target: https://drinkwtrmln.com/
- Contact Adrianna Lonick with Coldwell Banker Realty for a free consultation at https://www.thedancingrealtor.com/ or call/text 715-304-9920
- For more information on Treasure Island Watch Parties, visit https://www.ticasino.com
- Get yourself a pair of Duer jeans for 20% off by going to https://www.shopduer.com/danemoore
- Contact Your Home Improvement Company: https://www.yourhomeimprovementco.com/
- Sign up for PrizePicks, promo code "DANE" for a signup bonus: https://www.prizepicks.com/
- Want to advertise on the show? Reach out to DaneMooreProductions@gmail.com
- Support the show by subscribing for $5 a month: https://www.patreon.com/DaneMooreNBA

#BlueWireVideo

Learn more about your ad choices. Visit podcastchoices.com/adchoices

Major League Eventing Podcast
2024 Year In Review

Jan 1, 2025 · 26:45


Happy New Year! Karen and Robby sit down with a pour of 15-year Pappy Van Winkle to discuss everything that happened with MLE in 2024. We go over how we are listened to in 68 countries, our top 5 US cities, and the top 5 most-listened-to episodes. You'll even get a little insight into what we have going on personally, as well as all things Corgi-related. We hope you enjoy!

Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/

Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE

Check out the Major League Eventing store!
https://www.majorleagueeventing.com/shop

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the venue and A/V production!

For NeurIPS last year we did our standard conference podcast coverage, interviewing selected papers (as we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI engineers 1) get more industry-relevant content, and 2) recap the 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.

Since Nathan Lambert (Interconnects) joined us for the hit RLHF 201 episode at the start of this year, it is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course Meta's Llama 1 and 2. This year a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course to the Allen Institute's OLMo, OLMoE, Pixmo, Molmo, and OLMo 2 models. We were honored to host Luca Soldaini, one of the research leads on the OLMo series of models at AI2.

Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe, California, and the White House. We were also honored to hear from Sophia Yang, head of devrel at Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track!

Full Talk on YouTube. Please like and subscribe!

Timestamps
* 00:00 Welcome to Latent Space Live
* 00:12 Recap of 2024: Best Moments and Keynotes
* 01:22 Explosive Growth of Open Models in 2024
* 02:04 Challenges in Open Model Research
* 02:38 Keynote by Luca Soldaini: State of Open Models
* 07:23 Significance of Open Source AI Licenses
* 11:31 Research Constraints and Compute Challenges
* 13:46 Fully Open Models: A New Trend
* 27:46 Mistral's Journey and Innovations
* 32:57 Interactive Demo: Le Chat Capabilities
* 36:50 Closing Remarks and Networking

Transcript

[00:00:00] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024, going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space network to cover each field.

[00:00:28] AI Charlie: 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our next keynote covers the state of open models in 2024, with Luca Soldaini and Nathan Lambert of the Allen Institute for AI, with a special appearance from Dr. Sophia Yang of Mistral. Our first hit episode of 2024 was with Nathan Lambert on RLHF 201 back in January, where he discussed both reinforcement learning for language models and the growing post-training and mid-training stack, with hot takes on everything from constitutional AI to DPO to rejection sampling, and also previewed the sea change coming to the Allen Institute.
And to Interconnects, his incredible Substack on the technical aspects of state-of-the-art AI training.

[00:01:18] AI Charlie: We highly recommend subscribing to get access to his Discord as well. It is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course, Meta's Llama 1 and 2.

[00:01:43] AI Charlie: This year, a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course, to the Allen Institute's OLMo, OLMoE, Pixmo, Molmo, and OLMo 2 models. Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe, California, and the White House.

[00:02:14] AI Charlie: We also were honored to hear from Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track. As always, don't forget to check the show notes for the YouTube link to their talk, as well as their slides. Watch out and take care.

[00:02:35] Luca Intro

[00:02:35] Luca Soldaini: Cool. Yeah, thanks for having me over. I'm Luca. I'm a research scientist at the Allen Institute for AI. I threw together a few slides on sort of a recap of interesting themes in open models for 2024. I have about maybe 20, 25 minutes of slides, and then we can chat if there are any questions.

[00:02:57] Luca Soldaini: If I can advance to the next slide. Okay, cool. So I did a quick check to get a sense of how much 2024 was different from 2023. I went on Hugging Face and tried to get a picture of what kind of models were released in 2023 and what we got in 2024.

[00:03:16] Luca Soldaini: In 2023 we got things like both Llama 1 and 2, we got Mistral, we got MPT, Falcon models; I think the Yi model came in at the tail end of the year. It was a pretty good year. But then I did the same for 2024, and it's actually quite a stark difference. You have models that are, you know, rivaling the frontier-level
performance of what you can get from closed models, from Qwen, from DeepSeek. We got Llama 3. We got all sorts of different models. I added our own Olmo at the bottom. There's this growing group of fully open models that I'm going to touch on a little bit later. But, you know, just looking at the slides, it feels like 2024 was just smooth sailing, much better than the previous year.

[00:04:04] Luca Soldaini: And you know, you can pick your favorite benchmark, or least favorite, I don't know, depending on what point you're trying to make, and plot, you know, your closed model and your open model, and sort of spin it in ways that show that, oh, you know, open models are much closer to where closed models are today, versus last year, where the gap was fairly significant.

[00:04:29] Luca Soldaini: So one thing that, I don't know if I have to convince people in this room, but usually when I give these talks about open models, there is always this background question in people's minds: why should we use open models? The API argument, you know, is that it's just an HTTP request to get output from one of the best models out there.

[00:04:53] Luca Soldaini: Why do I have to set up infra and use local models? And there are really two answers. There is the more researchy answer, which is where my background lies, which is just research. If you want to do research on language models, research thrives on open models. There is a large swath of research on modeling, on how these models behave, on evaluation and inference, on mechanistic interpretability, that could not happen at all if you didn't have open models.

[00:05:30] Luca Soldaini: For AI builders, there are also good use cases for using local models. You know, this is a very non-comprehensive slide, but there are some applications where local models just blow closed models out of the water. Retrieval is a very clear example. We might have constraints like edge AI applications where it makes sense. But even just in terms of stability, being able to say this model is not changing under the hood: there are plenty of good cases for open models.

[00:06:00] Luca Soldaini: And the community is not just models. I stole this slide from one of the Qwen2 announcement blog posts, but it's super cool to see how much tech exists around open models: serving them, making them efficient, and hosting them. It's pretty cool.

[00:06:18] Luca Soldaini: And if you think about where the term "open" comes from, it comes from open source. Really, open models meet the core tenets of open source, specifically when it comes to collaboration: there is truly a spirit that, through these open models, you can build on top of other people's innovation. We see a lot of this even in our own work: as we iterate on the various versions of Olmo, it's not like every time we collect all the data from scratch. No, the first step is, okay, what are the cool data sources and datasets people have put together for language model training? Or, when it comes to our post-training pipeline, one of the steps is you want to do some DPO, and you use a lot of outputs of other models to improve your preference model. So having an open ecosystem really benefits and accelerates the development of open models.

[00:07:23] The Definition of Open Models

[00:07:23] Luca Soldaini: One thing that we got in 2024, which is not a specific model, but I thought was really significant, is that we got our first open source AI definition. This is from the Open Source Initiative; they've generally been the steward of a lot of the open source licenses when it comes to software, and so they embarked on this journey of trying to figure out, okay, what does an open source license for a model look like? The majority of the work is very dry, because licenses are dry, so I'm not going to walk through the license step by step, but I'm just going to pick out one aspect that is very good, and then one aspect that personally feels like it needs improvement. On the good side, this open source AI license is actually very intuitive.
If you ever build open source software and you have some expectation of what open source looks like for software, for AI it sort of matches your intuition. So, the weights need to be freely available, the code must be released with an open source license, and there shouldn't be license clauses that block specific use cases.

[00:08:39] Luca Soldaini: So, under this definition, for example, Llama or some of the Qwen models are not open source, because the license says you can't use this model for this, or it says if you use this model you have to name the output this way, or a derivative needs to be named that way. Those clauses don't meet the open source definition, and so they will not be covered. The Llama license will not be covered under the open source definition.

[00:09:02] Luca Soldaini: It's not perfect. One of the things that, internally, you know, in discussion with OSI, we were sort of disappointed by is the language for data. You might imagine that an open source AI model means a model where the data is freely available. There were discussions around that, but at the end of the day, they decided to go with a softened stance, where they say a model is open source if you provide sufficiently detailed information on how to replicate the data pipeline, so you have an equivalent system. "Sufficiently detailed" is very fuzzy; I don't like that. "An equivalent system" is also very fuzzy. And this doesn't take into account the accessibility of the process, right? It might be that you provide enough information, but this process costs, I don't know, 10 million dollars to do. Now, the open source definition, like any open source license, has never been about accessibility; that's never a factor in open source software, how accessible software is. I can make a piece of open source software, put it on my hard drive, and never access it. That software is still open source; the fact that it's not widely distributed doesn't change the license. But practically, there are expectations of what we want good open source to be. So it's kind of sad to see that the data component in this license is not as open as some of us would like it to be.

[00:10:40] Challenges for Open Models

[00:10:40] Luca Soldaini: And I linked a blog post that Nathan wrote on the topic that is less rambly and easier to follow. One thing that, in general, I think it's fair to say about the state of open models in 2024 is that we know a lot more than what we knew in 2023, both on the pre-training data you curate and on how to do all the post-training, especially on the RL side. You know, 2023 was a lot of throwing random darts at the board. In 2024, we have clear recipes that, okay, don't get the same results as a closed lab, because there is a cost in actually matching what they do, but at least we have a good sense of, okay, this is the path to get a state-of-the-art language model.

[00:11:31] Luca Soldaini: I think one thing that is a downside of 2024 is that we are more research-constrained than in 2023. It feels that, you know, the barrier of compute that you need to move innovation along has just been rising and rising.
So if you go back to this slide, there is now this cluster of models that are sort of released by the compute-rich club. Membership is hotly debated. You know, some people don't want to be called rich because it comes with expectations; some people want to be called rich. I don't know, there's debate. But these are players that have, you know, 10,000, 50,000 GPUs at minimum, and so they can do a lot of work and a lot of exploration in improving models, and that is not very accessible.

[00:12:21] Luca Soldaini: To give you a sense of how I personally think about the research budget for each part of the language model pipeline: on the pre-training side, you can maybe do something with a thousand GPUs; really, you want 10,000. And if you want real state of the art, you know, your DeepSeek minimum is like 50,000, and you can scale to infinity. The more you have, the better it gets. Everyone on that side still complains that they don't have enough GPUs. Post-training is a super wide spectrum; you can do as little as eight GPUs. As long as you're able to run, you know, a good version of, say, a Llama model, you can do a lot of work there. A lot of the methodology just scales with compute, right? If you're interested in, you know, your open replication of what OpenAI's o1 is, you're going to be on the 10K end of the GPU spectrum. Inference, you can do a lot with very few resources. Evaluation, you can do a lot with, well, I should say, at least one GPU if you want to evaluate open models. But in general, if you care a lot about interventions to do on these models, which is my preferred area of research, then, you know, the resources that you need are quite significant.

[00:13:46] Luca Soldaini: Yeah. One other trend that has emerged in 2024 is this cluster of fully open models, Olmo, the model that we built at AI2, being one of them. And, you know, it's nice that it's not just us; there's a cluster of other, mostly research, efforts working on this. So it's good to give you a primer of what fully open means. The easy way to think about it is: instead of just releasing a model checkpoint that you run, you release a full recipe, so that other people working in that space can pick and choose whatever they want from your recipe and create their own model, or improve on top of your model. You're giving out the full pipeline and all the details there, instead of just the end output.

[00:14:24] Luca Soldaini: I pulled up the screenshot from our recent MoE model. For this model, for example, we released the model itself, the data it was trained on, the code for both training and inference, all the logs that we got through the training run, as well as every intermediate checkpoint. And the fact that you release different parts of the pipeline allows others to do really cool things.

[00:15:02] Luca Soldaini: So for example, this tweet from early this year from folks at Nous Research: they used our pre-training data to do a replication of the BitNet paper in the open. So they took just the initial part of the pipeline and then built their thing on top of it.
It goes both ways.

[00:15:21] Luca Soldaini: So for example, for the Olmo 2 model, a lot of our pre-training data for the first stage of pre-training was from the DCLM initiative, which was led by folks at a variety of institutions. It was a really nice group effort. And, you know, it was nice to be able to say, okay, the state of the art in terms of what is done in the open has improved.

[00:15:46] AI2 Models - Olmo, Molmo, Pixmo etc

[00:15:46] Luca Soldaini: We don't have to do all this work from scratch to catch up to the state of the art; we can just take it directly, integrate it, and do our own improvements on top of that. I'm going to spend a few minutes doing a shameless plug for some of our fully open recipes, so indulge me in this. A few things that we released this year: as I was mentioning, there's the OLMoE model, which I think is still the state-of-the-art MoE model in its size class, and it's also fully open, so every component of this model is available. We released a multimodal model called Molmo. Molmo is not just a model, but a full recipe of how you go from a text-only model to a multimodal model, and we applied this recipe on top of Qwen checkpoints, on top of Olmo checkpoints, as well as on top of OLMoE. And I think there's been a replication doing that on top of Mistral as well.

[00:16:37] Luca Soldaini: On the post-training side, we recently released Tulu 3. Same story: this is a recipe for how you go from a base model to a state-of-the-art post-trained model. We used the Tulu recipe on top of Olmo, on top of Llama, and then there's been an open replication effort to do that on top of Qwen as well. It's really nice to see when your recipe is kind of turnkey: you can apply it to different models and it kind of just works. And finally, the last thing we released this year was Olmo 2, which so far is the best state-of-the-art fully open language model. It sort of combines aspects from all three of these previous projects: what we learned on the data side from OLMoE, and what we learned about making models that are easy to adapt from the Molmo project and the Tulu project.

[00:17:22] Luca Soldaini: I will close with a little bit of reflection on the ways this ecosystem of open models is not all roses. It's not all happy; it feels like day to day it's always in peril. And, you know, I talked a little bit about the compute issues that come with it, but it's really not just compute. One thing that is on top of my mind is that, due to the environment and, you know, growing feelings about how AI is treated, it's actually harder to get access to a lot of the data that was used to train a lot of the models up to last year.

[00:18:06] Luca Soldaini: This is a screenshot from really fabulous work from Shane Longpre, who I think is in Europe, about diminishing access to data for language model pre-training. What they did is they went through every snapshot of Common Crawl. Common Crawl is this publicly available scrape of a subset of the internet. And they looked at, for any given website, whether a website that was accessible in, say, 2017 was still accessible in 2024.
And what they found is that, as a reaction to the existence of closed models like ChatGPT or Claude, a lot of content owners have blanket-blocked any type of crawling of their websites.

[00:18:57] Luca Soldaini: And this is something that we see also internally at AI2. One project that we started this year is, we wanted to understand: if you're a good citizen of the internet, and you crawl following the sort of norms and policies that have been established in the last 25 years, what can you crawl? And we found that there are a lot of websites where the norms of how you express a preference of whether to crawl your data or not are broken. A lot of people block a lot of crawling but do not advertise that in robots.txt; you can only tell that they're blocking you from crawling when you try doing it. Sometimes you can't even crawl the robots.txt to check whether you're allowed or not.

[00:19:37] Luca Soldaini: And then on a lot of websites there are all these technologies that historically have existed to make serving websites easier, such as Cloudflare or DNS. They're now being repurposed for blocking AI, or any type of crawling, in a way that is very opaque to the content owners themselves. So, you know, you go to these websites, you try to access them, and they're not available, and you get the feeling, oh, something changed on the DNS side that is blocking this, and likely the content owner has no idea; they're just using Cloudflare for better, you know, load balancing. And this is something that was sort of sprung on them with very little notice.

[00:20:25] Luca Soldaini: And I think the problem is that this blocking of AI crawling really impacts people in different ways. It disproportionately helps companies that have a head start, which are usually the closed labs, and it hurts incoming newcomer players, who either have to do things in a sketchy way or are never going to get the content that the closed labs might have. There was a lot of coverage; I'm going to plug Nathan's blog post again. I think the title of this one is very succinct, which is: before thinking about running out of training data, we're actually running out of open training data. And so, if we want better open models, this should be on top of our mind.

[00:21:13] Regulation and Lobbying

[00:21:13] Luca Soldaini: The other thing that has emerged is that there are strong lobbying efforts to define any kind of AI as new and extremely risky. And I want to be precise here: the problem is not considering the risks of this technology; every technology has risks that should always be considered. The thing that to me is, sorry, disingenuous, is just putting this AI on a pedestal and calling it an unknown alien technology that has new and undiscovered potential to destroy humanity, when in reality all the dangers, I think, are rooted in dangers that we know from the existing software industry, or existing issues that come with using software in a lot of sensitive domains, like medical areas.

[00:22:13] Luca Soldaini: And I've also noticed a lot of efforts that have actually been going on in trying to make these open models safe.
I pasted one here from AI2, but there's actually a lot of work that has been going on on, okay, if you're distributing this model openly, how do you make it safe? What's the right balance between accessibility of open models and safety? And then there's also the annoying brushing under the rug of concerns that are then proved to be unfounded. You know, if you remember the beginning of this year, it was all about the bio-risk of these open models.

[00:22:48] Luca Soldaini: The whole thing fizzled because, finally, there's been rigorous research, not just this paper from the Cohere folks, but rigorous research showing that this is really not a concern that we should be worried about. Again, there are a lot of dangerous uses of AI applications, but this one was just a lobbying ploy to make things sound scarier than they actually are.

[00:23:15] Luca Soldaini: So, I've got to preface this part: this is my personal opinion, not my employer's. But I look at things like SB 1047 from California, and I think we kind of dodged a bullet on this legislation. The open source community, a lot of the community, came together at sort of the last minute and made a very good effort to explain all the negative impacts of this bill.

[00:23:43] Luca Soldaini: There's a lot of excitement about building these open models, or researching these open models. And lobbying is not sexy, it's kind of boring, but it's sort of necessary to make sure that this ecosystem can really thrive. This is the end of the presentation. I have some links and emails, the sort of standard thing, in case anyone wants to reach out, and if folks have questions or anything they wanted to discuss. Is there an open floor? I think we have Sophia.

[00:24:16] swyx: One very important open model that we haven't covered is Mistral, so it's nice to have the Mistral person recap the year in Mistral. But while Sophia gets set up, does anyone have thoughts or questions about the progress in this space?

[00:24:32] Questions - Incentive Alignment

[00:24:32] swyx: You always have questions.

[00:24:34] Question: I'm very curious how we should build incentives to build open models, things like Francois Chollet's ARC Prize, and other initiatives like that. What is your opinion on how we should better align incentives in the community so that open models stay open?

[00:24:49] Luca Soldaini: The incentive bit is, like, really hard. It's something that we actually think a lot about internally, because building open models is risky. It's very expensive, and so people don't want to take risky bets. I think challenges like those are very valid approaches for it. And then, in general, for any kind of effort to participate in those challenges, if we can promote doing that on top of open models and really lean into this multiplier effect, I think that is a good way to go. It would help if there were more money for that, for efforts like research efforts around open models.
There's a lot of investment in companies that at the moment are releasing their models in the open, which is really cool, but it's usually more because of commercial interest than a desire to support open models in the long term. It's a really hard problem, because everyone is operating sort of at their local maximum, right? In ways that really optimize their position in the market. The global maximum is harder to achieve.

[00:26:11] Question 2: Can I ask one question?

[00:26:12] Luca Soldaini: Yeah.

[00:26:13] Question 2: So I think one of the gaps between the closed and open source models is multilinguality. The closed source models like ChatGPT work pretty well on low-resource languages, which is not the same for the open source models, right? So is it in your plan to improve on that?

[00:26:32] Luca Soldaini: I think, in general, yes. I think we'll see a lot of improvements there in, like, 2025. There are groups on the smaller side that are already working on better crawl support, multilingual support. I think what I'm trying to say here is that you really want experts who are actually in those countries, who speak those languages, to participate in the international community. To give you a very easy example: I'm originally from Italy, and I think I'm terribly equipped to build a model that works well in Italian, because one of the things you need is knowledge of, okay, how do I access libraries or content that is from this region, that covers this language? I've been in the US long enough that I no longer know. So I think that's the effort that folks in Central Europe, for example, are doing: okay, let's tap into regional communities to get access, you know, to bring in collaborators from those areas. I think that's going to be very crucial for getting products there.

[00:27:46] Mistral intro

[00:27:46] Sophia Yang: Hi everyone. Yeah, I'm super excited to be here to talk to you guys about Mistral. A really short and quick recap of what we have done, what kind of models and products we have released in the past year and a half. Most of you already know that we are a small startup, founded about a year and a half ago in Paris, in May 2023, by three of our co-founders. And in September 2023, we released our first open source model, Mistral 7B. Yeah, how many of you have used or heard about Mistral 7B? Hey, pretty much everyone. Thank you.

[00:28:24] Sophia Yang: Yeah, it's pretty popular, and our community really loved this model. And in December 2023, we released another popular model, with the MoE architecture, Mixtral 8x7B. Going into this year, you can see we have released a lot of things. First of all, in February 2024, we released Mistral Small, Mistral Large, and Le Chat, which is our chat interface; I will show you in a little bit. We released an embedding model for, you know, converting your text into embedding vectors, and all of our models are available on the big cloud providers, so you can use our models on Google Cloud, AWS, Azure, Snowflake, IBM. Very useful for enterprises who want to use our models through the cloud.

[00:29:16] Sophia Yang: And in April and May this year, we released another powerful open source MoE model, Mixtral 8x22B. And we also released our first code
And in April and May this year, we released another powerful open source MOE model, AX22B. And we also released our first code. Code Model Coastal, which is amazing at 80 plus languages. And then we provided another fine tuning service for customization.[00:29:41] Sophia Yang: So because we know the community love to fine tune our models, so we provide you a very nice and easy option for you to fine tune our model on our platform. And also we released our fine tuning code base called Menstrual finetune. It's open source, so feel free to take it. Take a look and.[00:29:58] Sophia Yang: More models. [00:30:00] On July 2, November this year, we released many, many other models. First of all is the two new small, best small models. We have Minestra 3B great for Deploying on edge devices we have Minstrel 8B if you used to use Minstrel 7B, Minstrel 8B is a great replacement with much stronger performance than Minstrel 7B.[00:30:25] Sophia Yang: We also collaborated with NVIDIA and open sourced another model, Nemo 12B another great model. And Just a few weeks ago, we updated Mistral Large with the version 2 with the updated, updated state of the art features and really great function calling capabilities. It's supporting function calling in LatentNate.[00:30:45] Sophia Yang: And we released two multimodal models Pixtral 12b. It's this open source and Pixtral Large just amazing model for, models for not understanding images, but also great at text understanding. So. Yeah, a [00:31:00] lot of the image models are not so good at textual understanding, but pixel large and pixel 12b are good at both image understanding and textual understanding.[00:31:09] Sophia Yang: And of course, we have models for research. Coastal Mamba is built on Mamba architecture and MathRoll, great with working with math problems. So yeah, that's another model.[00:31:29] Sophia Yang: Here's another view of our model reference. We have several premier models, which means these models are mostly available through our API. I mean, all of the models are available throughout our API, except for Ministry 3B. But for the premier model, they have a special license. Minstrel research license, you can use it for free for exploration, but if you want to use it for enterprise for production use, you will need to purchase a license [00:32:00] from us.[00:32:00] Sophia Yang: So on the top row here, we have Minstrel 3b and 8b as our premier model. Minstrel small for best, best low latency use cases, MrLarge is great for your most sophisticated use cases. PixelLarge is the frontier class multimodal model. And, and we have Coastral for great for coding and then again, MrEmbedding model.[00:32:22] Sophia Yang: And The bottom, the bottom of the slides here, we have several Apache 2. 0 licensed open way models. Free for the community to use, and also if you want to fine tune it, use it for customization, production, feel free to do so. The latest, we have Pixtros 3 12b. We also have Mr. Nemo mum, Coastal Mamba and Mastro, as I mentioned, and we have three legacy models that we don't update anymore.[00:32:49] Sophia Yang: So we recommend you to move to our newer models if you are still using them. And then, just a few weeks ago, [00:33:00] we did a lot of, uh, improvements to our code interface, Lachette. How many of you have used Lachette? Oh, no. Only a few. Okay. I highly recommend Lachette. It's chat. mistral. ai. It's free to use.[00:33:16] Sophia Yang: It has all the amazing capabilities I'm going to show you right now. 
But before that: "le chat" in French means cat, so this is actually a cat logo; you can tell these are the cat eyes. Yeah. So first of all, I want to show you something. Maybe let's take a look at image understanding. So here I have a receipt, and I want to ask, I'm just going to get the prompt. Cool. So basically I have a receipt, and I said: I ordered, I don't know, coffee and the sausage. How much do I owe? Add an 18 percent tip.

[00:34:00] Sophia Yang: So hopefully it was able to get the cost of the coffee and the sausage and ignore the other things. And yeah, I don't really understand this, but I think this is the coffee, it's nine-eight, and then the cost of the sausage, we have 22 here. And then it was able to add the costs, calculate the tip, and all that. Great. So it's great at image understanding, it's great at OCR tasks. If you have OCR tasks, please use it. It's free on the chat, and it's also available through our API.

[00:34:28] Sophia Yang: And also I want to show you a canvas example. A lot of you may have used canvas with other tools before, but with Le Chat it's completely free. Here, I'm asking it to create a canvas that uses PyScript to execute Python in my browser. Let's see if it works. Import this. Okay, so basically it's executing Python here, exactly what we wanted.

[00:35:00] Sophia Yang: And the other day, I was trying to ask Le Chat to create a game for me. Let's see if we can make it work. Yeah, the Tetris game. Yep. Let's just get one row. Maybe. Oh no. Okay. All right. You get the idea. I failed my mission. Okay. Here we go. Yay! Cool. So as you can see, Le Chat can write the code for a simple game pretty easily, and you can ask Le Chat to explain the code or make updates however you like. Another example: there is a bar here I want to move. Okay, great.

[00:35:48] Sophia Yang: And let's go to another one. Yeah, we also have web search capabilities: you can ask what's the latest AI news. Image generation is pretty cool: generate an image about researchers. Okay, in Vancouver? Yeah, it's Black Forest Labs' Flux Pro. Again, this is free. Oh, cool. I guess researchers here are mostly from the University of British Columbia. That's smart.

[00:36:19] Sophia Yang: Yeah. So this is Le Chat. Please feel free to use it, and let me know if you have any feedback. We're always looking for improvements, and we're going to release a lot more powerful features in the coming years. Thank you.

Get full access to Latent Space at www.latent.space/subscribe
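To make the robots.txt mechanism Luca describes in the talk concrete: this is a minimal sketch, not anything from the talk or from AI2, of the polite-crawler check he outlines, using Python's standard urllib.robotparser. The bot name and URL are hypothetical placeholders.

import urllib.robotparser
from urllib.parse import urlparse

def may_crawl(page_url: str, user_agent: str = "ExampleResearchBot") -> bool:
    # Locate the site's robots.txt, the 25-year-old convention for
    # publishing crawl preferences, and ask whether this agent may fetch.
    parts = urlparse(page_url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()  # as noted in the talk, sometimes even this fetch is blocked
    except OSError:
        return False  # opaque CDN/DNS-level blocking surfaces as a fetch error
    return rp.can_fetch(user_agent, page_url)

print(may_crawl("https://example.com/some/page"))  # hypothetical URL

The catch Luca points out is exactly what this sketch cannot handle: many sites allow the crawl in robots.txt (or publish none at all) yet block the request at the CDN or DNS layer, so a compliant crawler only discovers the refusal when the fetch itself fails.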

The co-lab career stories
Emily Li Mandri - Founder & Designer of MLE

Dec 17, 2024 · 12:07


Emily Li Mandri is the founder and designer of MLE, a consciously created women's accessories brand based in upstate NY. After spending over 15 years in fashion and digital marketing with a focus on innovative design and emerging brand growth, Emily decided it was time to launch her eponymous label, MLE. She now brings the first-hand experience she gained in both industries to MLE, with a focus on sustainability and quality. In this episode, Alexis Carey interviews Emily, an accessories designer based in New York. Emily discusses her background, including her education at Johns Hopkins, and her journey from creating silkscreen t-shirts in college to founding her successful accessories line, MLE. She shares insights into the challenges and rewards of being an entrepreneur, her career evolution, and her future plans for expanding the business.

L’Heure du Monde
The mysterious disappearance of the Fingers

Dec 16, 2024 · 15:25


So where have the Fingers gone? For some months now, these very sweet chocolate biscuits, sold by the Mondelez group, have vanished from French shelves without the slightest explanation. Other biscuits have disappeared before, like the Figolu in 2015, only to reappear after consumers mobilized. This time, the story seems to be different.

So how can such a well-known product disappear without explanation? Could it be a publicity stunt? And what does it say about our relationship to food when our favorite products have become massively industrialized?

In this episode of the podcast "L'Heure du Monde", journalist Coline Clavaud-Mégevand revisits the investigation she carried out on the subject for "M Le magazine du Monde".

An episode produced and presented by Adèle Ponticelli, with the help of Marion Bothorel. Production: Quentin Bresson. Music: Amandine Robillard. Episode published December 16, 2024.

Hosted by Audion. Visit https://www.audion.fm/fr/privacy-policy for more information.

L’Heure du Monde
Algerian boxer Imane Khelif, a queer icon in spite of herself

L’Heure du Monde

Play Episode Listen Later Nov 25, 2024 21:17


On August 1, 2024, Imane Khelif went from the shadows into the spotlight. The 25-year-old Algerian boxer entered the competition at the Paris Olympic Games in the under-66 kg category. But what was remembered that day was not so much her performance as the tearful withdrawal of her opponent, the Italian Angela Carini, who deemed her defeat "unfair." That was all it took to revive a rumor that has clung to Imane Khelif since the 2023 world boxing championships in New Delhi: that she is not really a woman. In the minutes following the bout, numerous public figures seized on it, from the president of the Italian Council, Giorgia Meloni, to the British author J. K. Rowling, to Donald Trump, then in the middle of his presidential campaign. Except that this rumor rests on "femininity" tests carried out by the IBA, a controversial boxing federation close to the Kremlin, whose results have never been published. Imane Khelif was born a woman, has always considered herself one, and has always fought in women's categories. Since this controversy, Imane Khelif has become, in spite of herself, a paradoxical icon: a symbol of the right to be different in the West even though she claims no difference, a gender-fluid muse for fashion designers, a source of national pride for Algeria, and a target for anti-trans activists. How is Imane Khelif coping with this ambiguous and burdensome new fame? How did she become the catalyst for all the passions and fantasies around gender? Answers in this episode of the podcast "L'Heure du Monde," with Gaspard Dhellemmes, who wrote her profile for "M Le magazine du Monde." An episode by Adélaïde Tenaglia. Presented and edited by Jean-Guillaume Santi. Production: Florentin Baume. Music: Amandine Robillard and Epidemic Sounds. This episode was published on November 25, 2024. --- Subscribe to Le Monde's WhatsApp channel: https://lemde.fr/4eMPTJd Hosted by Audion. Visit https://www.audion.fm/fr/privacy-policy for more information.

Revue de presse française
À la Une: la France se prépare à un hiver de grèves

Revue de presse française

Play Episode Listen Later Nov 17, 2024 4:54


Winter is coming. Winter is arriving in France, and the country is heading "toward a Christmas under strain," as La Tribune Dimanche puts it. The specter of strikes is resurfacing: among farmers, against the free-trade agreement still being negotiated between the European Union and the Mercosur countries; among civil servants, against the government's plan to increase unpaid sick-leave waiting days from one to three, as in the private sector; and among railway workers, against the disappearance of Fret SNCF, scheduled for January 1, 2025. "Are we going to relive," La Tribune Dimanche asks, "a year-end like 2022, with a paralyzed France, canceled trains, and thousands of travelers unable to join their loved ones for the holidays?" In the weekly, the president of the SNCF group, Jean-Pierre Farrandou, appeals "to the railway workers' sense of responsibility" at a moment "when France is in a complicated economic situation." Marianne draws the same conclusion, seeing "the specter of mass unemployment" reappear. The magazine counts 183 business failures per day, and "national flagships" such as Auchan and Michelin are laying people off. And with a deficit at 6% of GDP, "the government finds itself unable," according to Marianne, "to intervene massively, as in the past, to plug the breach with public money."

Also read: The EU-Mercosur agreement is "no longer at all in step with the ecological imperatives of the era"

Donald Trump, from Queens to the White House
He cast an additional chill after his victory: Donald Trump, back in power in the United States. Since countless analyses have already been written to explain this victory, the Americans' vote, the Democrats' defeat, and the innumerable consequences of Donald Trump's return for the United States and the world, why not simply go back to the origin of it all? Paris Match retraces the career of "this kid, born in Queens" in New York just after the Second World War, in 1946, "who dreamed of glory while gazing at the tall towers of Manhattan in the distance." Donald Trump "at one point toyed with the idea of studying film," but was reportedly prevented by his father, Fred, whom he was nonetheless unafraid to talk back to. His father "made his fortune building public housing in Brooklyn," Paris Match recalls, but Donald Trump "aims much higher." "He wants to be part of the high society that lives in a closed circle and looks down on him." He therefore became a real-estate magnate, flirted with bankruptcy, and then finally reached "the heights of celebrity" by appearing on the reality-TV show "The Apprentice." The rest we know: elected president of the United States in 2016, defeated in 2020, followed by the storming of the Capitol, then his recent victory at the end of an electoral campaign in which he knew how to "play against the system," "attack constantly," "never admit his wrongs," "lie." Methods learned, Paris Match recounts, alongside a sulfurous lawyer, Roy Cohn, whom he met as a young man upon entering a private Manhattan club.

Also read: Middle East: will Donald Trump give Benyamin Netanyahu carte blanche?

Donald Trump, Vladimir Putin, and Ukraine
Back at the White House, "he knows what awaits him," "he is prepared," Le Nouvel Obs assures.
Everything is compiled, according to the weekly, in the more than 900 pages of "Project 2025," a "road map prepared by some hundred conservative think tanks." On the menu, per Le Nouvel Obs: "dismantle the administrative state, defend sovereignty and the borders, put the family back at the center of American life, and guarantee individual rights to live freely." To this is added the will to end the war in Ukraine, and on this point, "Donald Trump will be better than you think": that, at least, is what Boris Johnson wants to believe. In L'Express, the former British prime minister asks: "Will Donald Trump, with all his ego, all his pride, his determination to make America great again, let Russia humiliate his country? Will he inaugurate his term by letting Vladimir Putin restore the greatness of the Soviet empire?" "I don't think so," Boris Johnson answers. Yet Le Point looks at how the Russian president "will try to exploit Donald Trump's return to the White House to extend his global influence." Washington is working on a peace deal that could notably "validate the Russian conquests, that is, 20% of Ukraine's territory," and keep Kyiv from joining NATO for 20 years. "One obstacle remains," Le Point adds: "Vladimir Putin's demands," which go "well beyond" that.

Also read: After Donald Trump's election, are American women's reproductive rights in peril?

Already 100 days since the Paris Games
Donald Trump, incidentally, went after a very different personality during his campaign: the Algerian boxer Imane Khelif. At the heart of a controversy this summer during the Paris Games, accused of not really being a woman, the Olympic champion is on the cover of M Le magazine du Monde. The weekly looks back at the "harassment" she has endured since she was "very small," which did not stop Imane Khelif "from becoming a national idol in Algeria" and a fashion icon. The Paris Olympic and Paralympic Games were already 100 days ago. The magazine L'Équipe has therefore chosen to celebrate this reverse countdown and recall the good memories: the Phryges, those "mascots that sent Footix to the locker room," the magazine notes; the world record holder in the pole vault, Sweden's Armand Duplantis, slowly coming down from his 6.26 m; and an article on the good losers, those who finished at the foot of the podium, in "the fool's place." They too were received by the president, in Italy, and saluted by the Olympic Committee in Belgium. "Fourth-place finishers enjoyed increased visibility during the last Games," L'Équipe notes, explaining that "commiseration is tending to give way to dedramatization, an approach characteristic of a generation of athletes attentive to its well-being." For some athletes, it is still hard to know whether it is better to laugh or cry about it. But when it comes to the end of the Games, the little tear of nostalgia is never far away. It was summer, and even the railway workers had decided to respect the Olympic truce.

Also read: Rugby: France claims a third straight win over New Zealand

Let's Talk AI
#186 - Adobe AI Tools, Tesla's Cybercab, Nobel Prizes

Let's Talk AI

Play Episode Listen Later Oct 20, 2024 93:54 Transcription Available


Our 186th episode with a summary and discussion of last week's big AI news! With hosts Andrey Kurenkov and guest host Jon Krohn from the SuperDataScience Podcast. Check out Jon's upcoming agent-focused event here - AI Catalyst: Agentic Artificial Intelligence Read out our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai Timestamps + Links: (00:00:00) Intro / Banter (00:04:14) News Preview (00:05:28) Response to listener comments / corrections Tools & Apps (00:07:10) Adobe's AI video model is here, and it's already inside Premiere Pro (00:11:52) Adobe teases AI tools that build 3D scenes, animate text, and make distractions disappear (00:15:43) Adobe's Project Super Sonic uses AI to generate sound effects for your videos (00:17:05) YouTube expands AI audio generation tool to all U.S. creators (00:20:29) All Gemini users can now generate images with Imagen 3 (00:22:27) Meta AI will launch in six more countries today, including the UK (00:24:27) OpenAI Unveils Secret Meta Prompt—And It's Very Different From Anthropic's Approach Applications & Business (00:27:46) Tesla's big ‘We, Robot' event criticized for ‘parlor tricks' and vague timelines for robots, Cybercab, Robovan (00:37:25) OpenAI announces content deal with Hearst, including content from Cosmopolitan, Esquire and the San Francisco Chronicle Projects & Open Source (00:47:59) OpenR: An Open-Source AI Framework Enhancing Reasoning in Large Language Models (00:49:54) MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering (00:56:29) OpenAI Releases Swarm: An Experimental AI Framework for Building, Orchestrating, and Deploying Multi-Agent Systems Research & Advancements (00:59:23) Nobel Physics Prize Awarded for Pioneering A.I. Research by 2 Scientists (01:05:22) Nobel Prize in Chemistry Goes to 3 Scientists for Predicting and Creating Proteins (01:09:09) LLMs can't perform “genuine logical reasoning,” Apple researchers suggest (01:13:05) GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models Policy & Safety (01:14:34) Anthropic CEO goes full techno-optimist in 15,000-word paean to AI (01:23:04) Google will help build seven nuclear reactors to power its AI systems (01:24:11) LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations Synthetic Media & Art (01:26:26) Adobe Pushes Content Authenticity Forward With a Free Web App Designed for Creators (01:29:13) Outro

Tertulia De Tebeos -TDT-
TDT Podcast #220: Vengadores Costa Oeste #1

Tertulia De Tebeos -TDT-

Play Episode Listen Later Oct 7, 2024 154:55


The monographic episodes are back at TDT! Fernando, Rafa, and Jotaele got together to chat about the West Coast Avengers, or as we knew them: The New Avengers, in this first of two monographic episodes covering the material published in the first two MLE volumes. Don't miss it. 🎼 - Kenny Loggins - Danger Zone 🎼 - Tears for Fears - Everybody Wants to Rule the World 🎼 - Duran Duran - A View to a Kill You can find us at: Facebook: https://www.facebook.com/tdtpodcast Twitter: @PodcastTDT tertuliadetebeos@gmail.com tertuliadetebeos.blogspot.com On Instagram: @tdtpodcast_ YouTube: https://www.youtube.com/tdtpodcast

The Knicks Recap: A New York Knicks Podcast
Insider Provides MASSIVE Update On Knicks Plans For Final Roster Spot... (TKR Live) | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Sep 12, 2024 8:25


In this clip from The Knicks Recap live stream, we talk about which player could take the Knicks' final roster spot. According to SNY's Ian Begley, the NY Knicks may give the final roster spot to someone who earns it in training camp and hold on to the MLE. They could still use it during the buyout market later in the season if a trade doesn't materialize the way they had hoped… Troy Mahabir breaks all of this down! If you enjoy these clips from the LIVE shows and want to see more of them, make sure you subscribe to the channel and leave a comment below! SHOW CHAPTERS: 00:00 - Intro 00:45 - Knicks Filling Last Roster Spot With Training Camp Player 02:15 - Knicks Unlikely To Use Final Roster Spot On Ryan Arcidiacono 04:25 - Knicks Could Make Major Trade & Add Player From Buyout Market 06:17 - NY Must Give Mitch As Much Time To Heal Before Returning 07:25 - Subscriber Comment: Knicks First 10 Games Very Important LISTEN NOW TO GET YOUR KNICKS FIX! Catch the latest special interviews, shorts, fan interactions, and more by following the show! Don't forget to turn on notifications so you don't miss another episode! Rather Watch the latest Knicks Recap episode? Catch us on YouTube here: https://www.youtube.com/@TheKnicksRecap Follow The Knicks Recap on all social media platforms! Twitter: https://twitter.com/TheKnicksRecap Instagram: https://www.instagram.com/TheKnicksRecap/ Reddit: https://www.reddit.com/u/TheKnicksRecap?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button Facebook: https://www.facebook.com/TheKnicksRecap/ Rather Listen to The Knicks Recap on a different platform? Catch us on ALL of your favorite streaming platforms: Apple Podcast: https://apple.co/3SKSl8o Spotify: https://spoti.fi/3QrEfr6 iHeart Radio: https://www.iheart.com/podcast/269-the-knicks-recap-a-new-yor-100895112/ Amazon Music: https://amzn.to/3QoZrOd Other Pod Channels: https://anchor.fm/the-knicks-recap Grab our MERCH featuring some of the graphics you've seen us create to take your Knicks fandom to the NEXT LEVEL: MAIN STORE: https://theknicksrecap.myspreadshop.com/ CashApp: $TheKnicksRecap Have a comment about the show, an interview, or a graphic idea? Reach out to The Knicks Recap on ALL SOCIAL MEDIA PLATFORMS!

The Knicks Recap: A New York Knicks Podcast
Knicks Next MOVE! Top Free Agent Centers STILL Available For NY's FINAL Roster Spot... | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Aug 28, 2024 12:37


In this clip from The Knicks Recap live stream, we identify the top free agent centers still available that NY can sign with their final roster spot. With Mitchell Robinson's status for opening night in jeopardy, NY must use their MLE to sign an impactful FA center for this team… Troy Mahabir breaks all of this down! If you enjoy these clips from the LIVE shows and want to see more of them, make sure you subscribe to the channel and leave a comment below! SHOW CHAPTERS: 00:00 - Intro 01:15 - Top Free Agent Centers STILL Available 02:35 - Free Agent Target: JaVale McGee 04:00 - McGee Isn't On A Roster For Good Reason 05:45 - Free Agent Target: Tristan Thompson 08:38 - Free Agent Target: Bismack Biyombo 10:45 - Biyombo Is A Tom Thibodeau Style Player 11:30 - Bismack Could Thrive Under A Defensive Minded Head Coach

Kaatscast
Catskill Couture: MLE's Sustainable Fashion

Kaatscast

Play Episode Listen Later Aug 27, 2024 20:06


In this episode of Kaatscast, we explore the journey of Emily Li Mandri, founder of the women's accessories brand MLE, based in Saugerties, New York. Emily shares insights into the challenges and rewards of running a fashion brand in Upstate New York, her commitment to eco-conscious materials and sustainable fashion, and the influence of her family's background in apparel. We also hear from her assistant, New Paltz theater grad Kiana Duggan-Haas, about the importance of sustainability in the fashion industry. Tune in for an inspiring discussion on ethical fashion practices, local craftsmanship, and a life/work balance in the Catskills. --- Thanks to this week's sponsors: Briars & Brambles Books, Hanford Mills Museum, and The Mountain Eagle. Kaatscast is made possible through a grant from the Nicholas J. Juried Family Foundation, and through the support of listeners like you! --- 00:00 Introduction to MLE 01:40 Meet the Founder: Emily Li Mandri 03:20 Sustainability in Fashion 05:58 Challenges and Innovations in Sustainable Fashion 12:51 Living and Working in the Catskills 14:44 Building a Local and National Brand 17:42 Conclusion and Final Thoughts

RTL2 : Made In France
L'intégrale - Santa, Mika, Olivia Ruiz dans RTL2 Made In France (25/08/24)

RTL2 : Made In France

Play Episode Listen Later Aug 25, 2024 108:53


Santa - Recommence-moi Charlotte Cardin - Un peu trop Etienne Daho - Saudade Zaoui - Ctrl + Z Robbie Williams - Supreme Cobalt - Trop Tôt Luna Parker - Tes états d'âme... Éric Joseph Kamel & Julien Doré - Beau Mika - Jane Birkin Matmatah - Lambe An Dro Eddy de Pretto - être biennn Manu Larrouy - Carla (Roue Tourne) Aliocha Schneider - Ensemble Superbus - Travel The World Elisa Tovati ft. Tom Dice - Il nous faut Indochine - Le chant des cygnes Eskobar et Emma Daumas - You Got Me Louise Attaque - Ton invitation -M- - Le soldat rose Renaud - Morgane de toi Noé Preszow - Comment Fais-Tu Pour Vivre Axelle Red - Ma prière Pierre Garnier - Nous on sait Raphaël - Caravane Léman - On est plein Jacques Dutronc - Les Cactus (Live) Sylvain Duthu - Les jours qui restent Vivien Savage - La petite Lady Margaux Avril - L'air de rien Marc Lavoine - Je me sens si seul Louane - Les étoiles Noir Désir - Aux sombres héros de l'amer Olivia Ruiz - Elle Panique

The Knicks Recap: A New York Knicks Podcast
Knicks Make MASSIVE Free Agency Decision… | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Aug 18, 2024 9:46


According to a number of reports, the New York Knicks are currently working out a number of free agents in an attempt to improve this roster for next season. The main players they have been looking at have been backup centers they can add by using the mid-level exception, or MLE. One of the players consistently linked to NY over the past few weeks has been free agent center Omer Yurtseven. The Knicks recently worked out Omer but elected not to add him to the roster. But it's clear, NY is planning to make another move… Troy Mahabir breaks all of this down! SHOW CHAPTERS: 00:00 - Intro 01:00 - Knicks Make Major Decision On Omer Yurtseven In FA 02:07 - Knicks Elected Not To Add Yurtseven To Team After Workout 02:53 - NY Working Out A Number Of Free Agents 04:01 - Omer Clearly Failed Audition If NY Didn't Sign Him 05:16 - Achiuwa & Sims are Critical Pieces But NOT Backup Centers 06:05 - 76ers Sign Key Big Man Free Agent 07:45 - Mitchell Robinson Could Miss Start Of Regular Season 08:21 - Knicks Center Have Huge Opportunity In NY Next Season

The Knicks Recap: A New York Knicks Podcast
Knicks Targeting Omer Yurtseven For FINAL Roster Spot… | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Aug 15, 2024 10:15


The New York Knicks have 1 final roster spot remaining. This offseason, NY has been one of the most successful teams, adding all of the necessary pieces to contend for a championship. However, the one area where they have taken criticism has been how they've handled the center position. Isaiah Hartenstein left the Knicks in free agency to join a powerhouse in the West in OKC. That left NY without a starting center they could trust to play 65+ games. NY can't solve its situation at center now, but it can add insurance at the position in case another injury occurs. Using the MLE to sign Omer Yurtseven would be a great move to improve this roster for next season and give confidence to a front office that clearly has little faith in the current center depth… Troy Mahabir breaks all of this down! SHOW CHAPTERS: 00:00 - Intro 00:41 - Knicks Interested In Adding Omer Yurtseven To The Roster 01:28 - Greece Publication States Knicks Are Targeting Yurtseven 03:02 - Ian Begley Reports That NY Still Looking For A Backup Big 04:34 - Knicks Have Other Options They Can Look At With MLE 06:45 - NY Will Not Use Rookie Centers As Backups 08:23 - Yurtseven Fits A Knicks Need Now & Helps Later 09:20 - Adding Omer Would Ease Concerns Of Fans Worried About 5 Spot

The Knicks Recap: A New York Knicks Podcast
Knicks Make SURPRISING Move! NY Adds More Depth With FINAL Two-Way Roster Spot… | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Aug 14, 2024 11:18


The Knicks have added more depth to their roster for next season, recently announcing that Jacob Toppin has been signed as the final two-way player for NY. During Summer League, before getting injured, Toppin was one of the players to watch, looking like every facet of his game had improved. However, even though NY has made this addition to the roster, they still have 1 remaining roster spot open that they can use to add another impactful player. The Knicks still hold their MLE and can make additional moves with the free agent centers available… Troy Mahabir breaks all of this down! SHOW CHAPTERS: 00:00 - Intro 00:36 - Knicks Add Jacob Toppin With Final Two-Way Roster Spot 01:28 - Knicks Only Have 1 Roster Spot Remaining After Toppin Signing 02:41 - Jacob Toppin Has Improved His Game Tremendously 04:42 - Knicks Current Roster For Next Season 06:03 - Knicks Depth For Next Season 07:45 - Knicks Core 9 Are Already Set 09:25 - Knicks Need To Use MLE For Another Tradeable Asset

Speaking Of Reliability: Friends Discussing Reliability Engineering Topics | Warranty | Plant Maintenance

What is MLE?

Abstract

Chris and Fred discuss what the three-letter acronym ‘MLE' stands for. Well, it stands for ‘maximum likelihood estimate.' Ever heard of it? Do you know what it means?

Key Points

Join Chris and Fred as they discuss what the MLE or ‘maximum likelihood estimate' means … usually when using software to conduct […]

The post SOR 991 What is MLE? appeared first on Accendo Reliability.
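Since this entry uses MLE in its statistics sense, a short worked example may help. The following is a minimal sketch, not from the episode: it fits the rate parameter of an exponential failure-time model, for which the maximum likelihood estimate has the closed form rate = 1 / sample mean.

```python
# A minimal sketch (not from the episode) of a maximum likelihood
# estimate: fitting the rate of an exponential failure-time model.
import numpy as np

rng = np.random.default_rng(0)
times = rng.exponential(scale=2.0, size=1000)  # simulated failure times, true rate = 0.5

def neg_log_likelihood(rate, x):
    # Exponential model: log L(rate) = n*log(rate) - rate*sum(x)
    return -(len(x) * np.log(rate) - rate * x.sum())

# For the exponential distribution the MLE has a closed form: 1 / sample mean.
rate_hat = 1.0 / times.mean()
print(f"estimated rate: {rate_hat:.3f}")  # close to the true 0.5

# Sanity check: nearby candidate rates give a worse (higher) negative log-likelihood.
for r in (0.8 * rate_hat, rate_hat, 1.2 * rate_hat):
    print(f"rate={r:.3f}  -logL={neg_log_likelihood(r, times):.1f}")
```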

Monocle 24: The Stack
‘Italy Segreta', ‘M Le magazine du Monde', ‘BSKT' and the Olympic Broadcasting Services

Monocle 24: The Stack

Play Episode Listen Later Aug 10, 2024 47:23


We speak with the founder of ‘Italy Segreta', a title about all things Italy. Plus: Marie-Pierre Lannelongue from ‘M Le magazine du Monde'; ‘BSKT', which is all about basketball culture; and Yiannis Exarchos, the CEO of the Olympic Broadcasting Services.See omnystudio.com/listener for privacy information.

The Knicks Recap: A New York Knicks Podcast
Free Agents Knicks Can Target With Mid-Level Exception... (TKR Live) | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Aug 6, 2024 12:29


In this clip from The Knicks Recap live stream, we discuss which players in free agency NY can target with their MLE and who that player should be. We also take a look at some players NY shouldn't go after, but could if they feel they are secure at the center position… Troy Mahabir breaks all of this down! If you enjoy these clips from the LIVE shows and want to see more of them, make sure you subscribe to the channel and leave a comment below! SHOW CHAPTERS: 00:00 - Intro 00:39 - Players NY Can Target With MLE 01:32 - Top FA Centers Available NOW 03:17 - NY Would Be Wise To Avoid JaVale McGee 04:15 - Bismack Biyombo Would Be Great Option Under Thibs 06:15 - Harry Giles Is Another Prospect To Keep An Eye On 09:13 - Non-Center FA NY May Look To Add With MLE 10:50 - Marcus Morris Returning Would Be Great For Locker Room

The Knicks Recap: A New York Knicks Podcast
NBA Insider Provides SHOCKING Update On Knicks Potential TRADE Targets… | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Aug 6, 2024 12:19


Headed into the regular season, the Knicks roster is mostly complete. NY still has its MLE, or Mid-Level Exception, available to sign another player to help this roster. However, NY may already have its eyes on a move to make near the trade deadline. According to reports, NY will be active at the deadline, looking to make a change at the center position. NBA insider John Hollinger of The Athletic provided some shocking trade targets NY could possibly pursue at the deadline to help improve the roster, including one player who is a known Knick hater… Troy Mahabir breaks all of this down! SHOW CHAPTERS: 00:00 - Intro 00:44 - Insider Provides Shocking Potential Trade Targets For NY 01:48 - NY Looking To Make A Trade At The Deadline For Center 02:35 - Knicks Ideal Trade Target May Be Revealed During Season 03:07 - Draymond Green To The Knicks?! 04:43 - Jusuf Nurkic A Name To Watch For NY... 05:54 - Robert Williams III A Great Trade Option But Has Injury Concerns 07:05 - Too Big Of A Risk For NY To Have Williams & Robinson As Center Duo 08:11 - Larry Nance Jr Could Be Last Option For NY At Trade Deadline 09:17 - Jonas Valanciunas Is The Only Trade Target NY Should Focus On 10:58 - Knicks Have Multiple Plans Under Leon Rose Depending On The Situation

The Knicks Recap: A New York Knicks Podcast
Josh Hart Fine-Tuning 3-Point Shot, New Details On Chuma Okeke's Deal... | Knicks News | The Knicks Recap Podcast

The Knicks Recap: A New York Knicks Podcast

Play Episode Listen Later Aug 3, 2024 9:50


Josh Hart, often seen across the league as a troll/jokester, is normally the one having fun poking at others. But when it comes to basketball and working on his game, there is nothing he takes more seriously. That is evident with Hart scheduled to visit a shooting specialist on August 5th. Hart is determined to fix his jump shot and have more confidence when taking it. We will also review new details of Chuma Okeke's deal with the Knicks and how it affects their MLE & open roster spot... Troy Mahabir breaks all of this down! SHOW CHAPTERS: 00:00 - Intro 00:56 - Josh Hart Fine-Tuning Jump Shot 02:18 - Hart Working With Mark Kamljak (Shooting Specialist) 02:57 - Mark & Hart Worked On 3 Pointer Before 76ers Series 04:38 - Hart Could Make Knicks Bench SCARY 05:29 - New Details On Chuma Okeke's Deal 05:49 - Okeke's Signed To Exhibit 10 Deal - What Does That Mean? 06:15 - Knicks STILL Have MLE To Use After Chuma Okeke Signing 08:00 - NY Must Use MLE On Backup Center 08:40 - Okeke Will Likely Never See The Court For NY

MLOps.community
AI in Healthcare // Eric Landry // #249

MLOps.community

Play Episode Listen Later Jul 19, 2024 51:05


Eric Landry is a seasoned AI and Machine Learning leader with extensive expertise in software engineering and practical applications in NLP, document classification, and conversational AI. With technical proficiency in Java, Python, and key ML tools, he leads the Expedia Machine Learning Engineering Guild and has spoken at major conferences like Applied Intelligence 2023 and KDD 2020. AI in Healthcare // MLOps Podcast #249 with Eric Landry, CTO/CAIO @ Zeteo Health. // Abstract Eric Landry discusses the integration of AI in healthcare, highlighting use cases like patient engagement through chatbots and managing medical data. He addresses benchmarking and limiting hallucinations in LLMs, emphasizing privacy concerns and data localization. Landry maintains a hands-on approach to developing AI solutions and navigating the complexities of healthcare innovation. Despite necessary constraints, he underscores the potential for AI to proactively engage patients and improve health outcomes. // Bio Eric Landry is a technology veteran with 25+ years of experience in the healthcare, travel, and computer industries, specializing in machine learning engineering and AI-based solutions. He holds a master's in SWE (NLP thesis topic) from the University of Texas at Austin (2005). He has showcased his expertise and leadership in the field with three US patents, published articles on machine learning engineering, and speaking engagements at Applied Intelligence Live 2023, the 2020 KDD conference, and Data Science Salon 2024; he also formerly led Expedia's MLE guild. Formerly, Eric was the director of AI Engineering and Conversation Platform at Babylon Health and Expedia. Currently CTO/CAIO at Zeteo Health. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.zeteo.health/ Building Threat Detection Systems: An MLE's Perspective // Jeremy Jordan // MLOps Podcast #134: https://youtu.be/13nOmMJuiAo --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Eric on LinkedIn: https://www.linkedin.com/in/jeric-landry/ Timestamps: [00:00] Eric's preferred coffee [00:16] Takeaways [01:16] Please like, share, leave a review, and subscribe to our MLOps channels! [01:32] ML and AI in 2005 [04:43] Last job at Babylon Health [10:57] Data access solutions [14:35] Prioritize AI ML Team Success [16:39] Eric's current work [20:36] Engage in holistic help [22:13] High-stakes chatbots [27:30] Navigating Communication Across Diverse Communities [31:49] When Bots Go Wrong [34:15] Health care challenges ahead [36:05] Behavioral health tech challenges [39:45] Stress from Apps Notifications [41:11] Combining different guardrails tools [47:16] Navigating Privacy AI [50:12] Wrap up

Chubstep
#469: Hot Diggity Chubstep feat. Patrick Bertoletti

Chubstep

Play Episode Listen Later Jul 18, 2024 33:48


The 2024 Nathan's Famous Hot Dog Eating Champion, MLE #2 ranked eater in the world, and recent blueberry eating world record holder Patrick Bertoletti joins Jrad and Steed on this week's Chubstep. The guys discuss his recent accomplishments, what percentage of blueberry Patrick's body is, Chubstep's experience with hot dogs, how he got started in competitive eating, being the King Hippo and Gerald Ford of competitive eating, the most difficult food competitions so far, getting internet hate for winning while Joey Chestnut wasn't in attendance, how to prepare for a competition, the 4th of July Nathan's after party, the worst foods to bring to a party, the wildest chicken wing competition, and predictions on Joey Chestnut vs Kobayashi.

A Couple of Squares
FOURTH OF JULY RECAP W/ JOE! + TOP 5 GUMS!

A Couple of Squares

Play Episode Listen Later Jul 8, 2024 98:17


The boys are back to recap a wild weekend. Tom picks up Joe from the airport and Joe gives us the rundown of his last 7 days in the Minnesota wilderness. Greeny helps piece together a long weekend in Erie. What's the best day for the 4th? Did Joey Jaws leave MLE on purpose? Copa. Euros. Wimbledon. Tour de France. Summer League. Top 5 Gums.

Valley Girls Podcast
Tour de Saugerties: Immersion in Art

Valley Girls Podcast

Play Episode Listen Later Jul 5, 2024 52:08


Join the Valley Girls as we explore Saugerties, NY, through the lens of art and design. In this episode, we declare our love for Saugerties and discuss creativity and different facets of sustainability. First we talk to Barbara Bravo, who fills us in on what to expect from the 22nd annual Saugerties Artists Studio Tour, scheduled for August 10-11, 2024. Learn more at www.saugertiesarttour.org. We also chat with jewelry and accessories designer Emily Li Mandri of MLE, whose statement pieces help to inspire and empower, and whose new brick & mortar store is bringing the bling to Main Street. Check out her gorgeous collection at www.madebyMLE.com and instagram.com/madebymle. ~~~~~ Help support Valley Girls by rating us and leaving a review. Follow us from our show page, visit us at valleygirlspodcast.com, and at instagram.com/valleygirlspodny. The episode music, "Painting a Vast Blue Sky" by Robert Burke Warren, can be found at robertburkewarren.bandcamp.com/track/paintng-a-vast-blue-sky.

Davey Mac Sports Program
July 4th Hot Dog Special! (07/01/2024)

Davey Mac Sports Program

Play Episode Listen Later Jul 1, 2024 72:33


It's a brand new Davey Mac Sports Program as we celebrate July 4th and sports itself!   With special guest George Chiger of Major League Eating as he prepares to compete in the Nathan's Famous Hot Dog Eating Contest in Coney Island on Independence Day!   What does Chiger think of the controversy surrounding Joey Chestnut and his suspension from the MLE?   Is Chiger ready to win the competition now that Chestnut is gone?   We'll get all the answers!   Plus, Dave discusses ESPN legend and Dave's forever rival Chris Berman possibly pissing his pants at a celebrity golf tournament!   Also, we chat about the weird situation with Duke's Kyle Filipowski and his potentially evil fiancée!   The DMSP also talks about LeBron James getting his son Bronny drafted to the Lakers!   And we look at Aaron Judge's insane season and today being Bobby Bonilla Day for the Mets!   It's an action-packed and fun DMSP that you need to experience today!   And happy July 4th to ya!   BOOM! 

We Say What They Can't Radio
The Arena! Podcast - Feat MLE & Young Sim

We Say What They Can't Radio

Play Episode Listen Later Jun 24, 2024 46:00


Harmonizing Stories: Jump into the Musical Journey of artist MLE & producer Young Sim T2R. #thearenapodcast #podcastinterviews #thearena #newmusicpromotions

JMO with Josh and Joe Podcast
S3E39 Celtics Are Champs, Crazy US Open, Joey Chestnut Ban, NHL & NFL

JMO with Josh and Joe Podcast

Play Episode Listen Later Jun 20, 2024 68:33


And for the 18th time, the Boston Celtics are crowned NBA Champs, like everyone predicted this year (including Josh). The boys go over all the games and the MVP discussion. Next, they could not leave out such a thrilling US Open and how Rory choked again. The boys make a decision live about the Joey Chestnut ban by the MLE and Nathan's. And to wrap it up is a little bit of the NFL, NHL, MCWS, and a lawsuit.

Major League Eventing Podcast
Mia Farley Returns as a 2x 5* Rider!

Major League Eventing Podcast

Play Episode Listen Later Jun 19, 2024 52:11


Karen and Robby welcome back Mia Farley! Mia was on the MLE podcast almost 2 years ago to the day, but this time she comes on not just as a 5* rider BUT as a 5* rider who has gone cross country double clear at Maryland and Kentucky. Mia is now living and training out of a farm in Kentucky and has announced that she plans on taking Phelps to Burghley. Mia will start a Phelps membership to help offset the costs of getting them to Burghley. Besides her big Burghley news, Mia has a new horse named Pina Colada, for which she has syndication shares available. We wish Mia and Phelps all the best, and let's get them to Burghley!!
PC: Shannon Brinkman
To follow Mia's journey:
https://www.instagram.com/_miafarley/?hl=en
https://www.facebook.com/p/Mia-Farley-Eventing-100064203605315/
If interested in helping Mia get to Burghley or in syndication, email her at miafarley6@gmail.com
Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/
Save 10% off your Redingote purchase, use "MLE10" at checkout! https://landing.redingoteequestrian.com/mle
Patricia Scott Insurance (484) 319-8923
Sign up for our mailing list! https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE
Check out the Major League Eventing store! https://www.majorleagueeventing.com/shop

Knicks Film School
KFS POD | PART 1 - Cap Or No Cap? (2.0) - Adding Talent & Finding Upgrades

Knicks Film School

Play Episode Listen Later Jun 17, 2024 67:03


In part 1 of 2 of this episode, Jeremy takes Jon & all of us through another edition of Cap Or No Cap? looking at all the different talent acquisition or roster upgrade opportunities that could present themselves this offseason. Here in part 1, they specifically look at the Knicks recent history with rookies, lessons to be learned from recent MLE signings, backup point guard options, LIVE reaction to the Maurice Cheeks news and much more! Watch the video version of this podcast on our YouTube channel! FOLLOW MACRI - @JCMacriNBA FOLLOW JEREMY - @TheCohencidence FOLLOW GMAC - @AndrewJClaudio_ CHECK OUT THE KFS MERCH STORE! Learn more about your ad choices. Visit podcastchoices.com/adchoices

Clotheshorse
Episode 204: The SHEIN-sodes, part 1: IPO WTF, Empty Airplanes, & Duty Free

Clotheshorse

Play Episode Listen Later Jun 17, 2024 105:37


SHEIN has changed, and is still changing, what it means to buy and sell clothing on planet Earth. And it's not a change for the better. It's a change we should all care about, no matter where WE buy our clothing. Because SHEIN and what it means for the future of making and selling just about any category of stuff WILL impact every one of us: no matter what we wear, where we live, the kind of job we have, or how much money we have. The SHEIN-ification is such a big deal, so impactful for every one of us, that this episode is part 1 in a short series about SHEIN: where it's been, where it's going, and how it is changing everything.

In this part of the series, we will be tackling:
SHEIN's impending IPO. And WTF is an IPO?
How SHEIN grew and grew and grew (blame 2020 and sweatpants).
What in the heck is the de minimis loophole and how is this benefiting SHEIN?
And, are there really empty airplanes flying back to China every day so they can be loaded back up with SHEIN and Temu parcels?

Also, an update on the Fashion Act and how/why we are still in the early stages of the fight to end fast fashion!

Thanks to this episode's sponsor, Made by MLE, @madebymle on Instagram. Use code CLOTHESHORSE to receive 10% off your first order!

Additional reading (lots of sources this week):
Maxine's statement about the Fashion Act
What is an IPO?
"NEW REPORT FINDS SHEIN EMITS MORE POLLUTION THAN THE COUNTRY OF PARAGUAY," Janelle Sessoms, Fashionista.
"What's 'Really Scary' About Shein's Breakneck Growth," Jasmin Malik Chua, Sourcing Journal.
"NRF rejects Shein membership as retailer pursues U.S. IPO," Gabrielle Fonrouge, CNBC. Financial Times.
"Fast fashion retailer Shein hikes prices ahead of IPO," Helen Reid, Reuters.
"Synthetics Anonymous 2.0: Fashion's persistent plastic problem," Changing Markets Foundation.
"You're Buying So Much From Temu And Shein The Air Cargo Industry Can't Keep Up," Cyrus Farivar, Forbes.
"The Time Has Come to Address the De Minimis Loophole," Timothy Lyons, Vermont Law Review.
"Labor unions, domestic manufacturing groups launch coalition to reform de minimis import loophole," Chelsea Cox, CNBC.

And HEY! BUY YOUR TICKETS TO THE CLOTHESHORSE JAMBOREE ASAP!
Want to take advantage of the payment plan? Each payment is $50, spread over 4 payments. The first one happens when you buy your ticket. You will use promo code INSTALLMENT1 at checkout (when you enter your payment info). You will be charged $50 and you will receive your actual ticket via email immediately. Amanda will send you a link to pay the remaining payments on 6/25, 7/25, and the week of the jamboree.

If you want to share your opinion/additional thoughts on the subjects we cover in each episode, feel free to email, whether it's a typed out message or an audio recording: amanda@clotheshorse.world

Did you enjoy this episode? Consider "buying me a coffee" via Ko-fi: ko-fi.com/clotheshorse

Find this episode's transcript (and so much more) at clotheshorsepodcast.com

Clotheshorse is brought to you with support from the following sustainable small businesses:

The Pewter Thimble: Is there a little bit of Italy in your soul? Are you an enthusiast of pre-loved decor and accessories? Bring vintage Italian style — and history — into your space with The Pewter Thimble (@thepewterthimble). We source useful and beautiful things, and mend them where needed. We also find gorgeous illustrations, and make them print-worthy. Tarot cards, tea towels and handpicked treasures, available to you from the comfort of your own home.
Responsibly sourced from across Rome, lovingly renewed by fairly paid artists and artisans, with something for every budget. Discover more at thepewterthimble.com

St. Evens is an NYC-based vintage shop that is dedicated to bringing you those special pieces you'll reach for again and again. More than just a store, St. Evens is dedicated to sharing the stories and history behind the garments. 10% of all sales are donated to a different charitable organization each month. New vintage is released every Thursday at wearStEvens.com, with previews of new pieces and more brought to you on Instagram at @wear_st.evens.

Deco Denim is a startup based out of San Francisco, selling clothing and accessories that are sustainable, gender fluid, size inclusive and high quality--made to last for years to come. Deco Denim is trying to change the way you think about buying clothes. Founder Sarah Mattes wants to empower people to ask important questions like, "Where was this made? Was this garment made ethically? Is this fabric made of plastic? Can this garment be upcycled and if not, can it be recycled?" Sign up at decodenim.com to receive $20 off your first purchase. They promise not to spam you and send out no more than 3 emails a month, with 2 of them surrounding education or a personal note from the Founder. Find them on Instagram as @deco.denim.

Vagabond Vintage DTLV is a vintage clothing, accessories & decor reselling business based in Downtown Las Vegas. Not only do we sell in Las Vegas, but we are also located throughout resale markets in San Francisco as well as at a curated boutique called Lux and Ivy located in Indianapolis, Indiana. Jessica, the founder & owner of Vagabond Vintage DTLV, recently opened the first IRL location located in the Arts District of Downtown Las Vegas on August 5th. The shop has a strong emphasis on 60s & 70s garments, single stitch tee shirts & dreamy loungewear....

AWadd Radio
Sebastian Salazar, Drive Down Richmond Highway, NetClix & GAMEDAY

AWadd Radio

Play Episode Listen Later Jun 13, 2024 37:25


AWadd brings us into the final hour of the show as we welcome guest Sebastian Salazar for some soccer talk, as it's the Summer of Soccer. Gary Hess joins us next as we go all around the sports world and cover local sports as we Drive Down Richmond Highway. Adam and Stub talk about a big update coming from the MLE as the Joey Chestnut situation continues to develop. AWadd closes out the show as always with GAMEDAY as we pick the sporting events we are most excited for tonight. 

Fescoe in the Morning
Joey Chestnut banned from 4th of July tradition

Fescoe in the Morning

Play Episode Listen Later Jun 12, 2024 38:20


Joey Chestnut banned from 4th of July tradition

Kreckman & Lindahl
6/11/24 Hour 2 - Dan Fouts' 73rd birthday was yesterday, Broncos WRs, Kristaps Porzingis' injury, Beef Tweets: Joey Chestnut vs. MLE

Kreckman & Lindahl

Play Episode Listen Later Jun 12, 2024 46:54


00:00 Dan Fouts' 73rd birthday was yesterday.
12:50 Broncos WRs.
22:35 Kristaps Porzingis' injury.
33:30 Beef Tweets: Joey Chestnut vs. MLE.

BSN Denver Nuggets Podcast
Kris Dunn, Dario Saric, and potential free agents for the Denver Nuggets | DNVR Nuggets Podcast

BSN Denver Nuggets Podcast

Play Episode Listen Later Jun 6, 2024 63:45


Can the Denver Nuggets make a splash in free agency this summer? Probably not. But they could add some helpful pieces that could complete their rotation. The DNVR Nuggets podcast team looks at names that might be available for the MLE or vet minimums. Plus, who will win the NBA Finals? Start - 0:00 Could the Nuggets have beaten these teams? - 2:30 Previewing the finals - 6:45 Quick hater's ball - 19:50 MLE Free Agents - 27:20 More MLE targets - 36:00 Mr. Nugget(s) - 40:50 Minimum free agents - 44:00 Kyshawn George - 51:40 Superchats - 59:45 An ALLCITY Network Production PARTY WITH US: https://thednvr.com/events ALL THINGS DNVR: https://linktr.ee/dnvrsports SUBSCRIBE: https://www.youtube.com/c/DNVR_Sports BUY GOLDEN ERA: https://www.triumphbooks.com/golden-era-products-9781637273692.php?page_id=21 Visit Your Front Range Toyota Stores at a location near you - Toyota is the official vehicle of DNVR. Go to https://millerlite.com/dnvr to find delivery options near you. Or you can pick up some Miller Lite pretty much anywhere they sell beer. Tastes like Miller Time. Celebrate Responsibly. Miller Brewing Company, Milwaukee, Wisconsin.  WATCH THE NUGGETS ON ALTITUDE: https://www.fubotv.com/dnvr - Start your free 14-day trial and receive 15% off your first month! Manscaped: Get 20% Off and Free Shipping with code NUGGETS20 at https://www.Manscaped.com Download the Circle K app and join the Inner Circle or visit https://www.circlek.com/inner-circle!  Download the Gametime app, create an account, and use code DNVR for $20 off your first purchase. Terms apply. Check out FOCO merch and collectibles here https://foco.vegb.net/DNVRNugs and use promo code “DNVR10” for 10% off your order on all non Pre Order items. Sign up on the Volo app using code DNVR3 to get Volo Pass for only $10/month for the first 3 months.  Download PubPass now in the App Store or Google Play store and use code DNVR when you sign up for 50% off a 1 year subscription. Exclusively for our listeners, Shady Rays is giving out their best deal of the season. Head to https://shadyrays.com and use code: DNVR for 35% off polarized sunglasses. Try for yourself the shades rated 5 stars by over 300,000 people. When you shop through links in the description, we may earn affiliate commissions. Copyright Disclaimer under section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Learn more about your ad choices. Visit podcastchoices.com/adchoices

FOX Sports Knoxville
The Morning Show HR2 5.17.24 Russel Biven joins the show

FOX Sports Knoxville

Play Episode Listen Later May 17, 2024 48:23


EA announces release date for College Football 25. Russel Biven joins the show to discuss the MLE bologna eating contest. Bob calls out The Drive.

The Arena! Podcast
"MLE & YOUNG SIM T2R"

The Arena! Podcast

Play Episode Listen Later May 10, 2024 46:00


Harmonizing Stories: Jump into the Musical Journey of artist MLE & producer Young Sim T2R!

Major League Eventing Podcast
MLE Recap - What Have We Been Up To and What Is Coming Up!

Major League Eventing Podcast

Play Episode Listen Later Apr 24, 2024 44:54


This week on the Major League Eventing Podcast, Karen and Robby sit down and talk about all that MLE has been up to and what to look forward to. We get asked all the time to give everyone an update, and we finally got around to doing it! Stay tuned for an exciting year, which will be mentioned... hint: it has to do with Corgis and the Baltimore Ravens. We are also very close to a million downloads and can't thank you, our listeners, and of course our sponsors enough for everything.

Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/

Save 10% off your Redingote purchase, use "MLE10" at checkout!
https://landing.redingoteequestrian.com/mle

Patricia Scott Insurance: (484) 319-8923

Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE

Check out the Major League Eventing store!
https://www.majorleagueeventing.com/shop

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

We are reuniting for the 2nd AI UX demo day in SF on Apr 28. Sign up to demo here! And don't forget tickets for the AI Engineer World's Fair — for early birds who join before keynote announcements!

About a year ago there was a lot of buzz around prompt engineering techniques to force structured output. Our friend Simon Willison tweeted a bunch of tips and tricks, but the most iconic one is Riley Goodside making it a matter of life or death:

Guardrails (friend of the pod and AI Engineer speaker), Marvin (AI Engineer speaker), and jsonformer had also come out at the time. In June 2023, Jason Liu (today's guest!) open sourced his "OpenAI Function Call and Pydantic Integration Module", now known as Instructor, which quickly turned prompt engineering black magic into a clean, developer-friendly SDK. A few months later, model providers started to add function calling capabilities to their APIs, as well as structured output support like "JSON Mode", which was announced at OpenAI Dev Day (see recap here). In just a handful of months, we went from threatening to kill grandmas to first-class support from the research labs. And yet, Instructor was still downloaded 150,000 times last month. Why?

What Instructor looks like

Instructor patches your LLM provider SDKs to offer a new response_model option to which you can pass a structure defined in Pydantic (a minimal sketch follows at the end of this summary). It currently supports OpenAI, Anthropic, Cohere, and a long tail of models through LiteLLM.

What Instructor is for

There are three core use cases for Instructor:

* Extracting structured data: taking an input like an image of a receipt and extracting structured data from it, such as a list of checkout items with their prices, fees, and coupon codes.

* Extracting graphs: identifying nodes and edges in a given input to extract complex entities and their relationships. For example, extracting relationships between characters in a story or dependencies between tasks.

* Query understanding: defining a schema for an API call and using a language model to resolve a request into a more complex one that an embedding could not handle. For example, creating date intervals from queries like "what was the latest thing that happened this week?" to then pass on to a RAG system or similar.

Jason called all these different ways of getting data from LLMs "typed responses": taking strings and turning them into data structures.

Structured outputs as a planning tool

The first wave of agents was all about open-ended iteration and planning, with projects like AutoGPT and BabyAGI. Models would come up with a possible list of steps, and start going down the list one by one. It's really easy for them to go down the wrong branch, or get stuck on a single step with no way to intervene.

What if these planning steps were returned to us as DAGs using structured output, and then managed as workflows? This also makes it easier to train a model on how to create these plans, as they are much more structured than a bullet-point list. Once you have this structure, each piece can be modified individually by different specialized models. You can read some of Jason's experiments here:

While LLMs will keep improving (Llama 3 just got released as we write this), having a consistent structure for the output will make it a lot easier to swap models in and out. Jason's overall message on how we can move from ReAct loops to more controllable agent workflows mirrors the "Process" discussion from our Elicit episode:

Watch the talk

As a bonus, here's Jason's talk from last year's AI Engineer Summit.
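To make the response_model idea above concrete, here is a minimal sketch of the pattern; the Receipt schema, field names, retry count, and model string are our illustrative assumptions, not code from the episode:

```python
# A minimal sketch of Instructor's response_model pattern, assuming
# `pip install instructor openai pydantic` and an OPENAI_API_KEY in the
# environment. The schema here is hypothetical.
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field, field_validator


class CheckoutItem(BaseModel):
    name: str
    price: float = Field(description="Price in dollars, before fees")

    @field_validator("price")
    @classmethod
    def price_is_positive(cls, v: float) -> float:
        # With max_retries set, a validation failure is sent back to the
        # model so it can correct its own output.
        if v <= 0:
            raise ValueError("price must be positive")
        return v


class Receipt(BaseModel):
    items: list[CheckoutItem]
    coupon_code: str | None = None


# Instructor patches the provider SDK so create() accepts response_model.
client = instructor.from_openai(OpenAI())

receipt = client.chat.completions.create(
    model="gpt-4-turbo",
    response_model=Receipt,  # a typed response instead of a raw string
    max_retries=2,
    messages=[{"role": "user", "content": "Two lattes at $5.50 each, code SAVE10"}],
)
print(receipt.items[0].price)  # a plain Pydantic object from here on
```

The point of the design is that the caller never touches raw JSON: a bad generation either gets retried or surfaces as an ordinary Pydantic validation error.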
He'll also be a speaker at this year's AI Engineer World's Fair!

Timestamps

* [00:00:00] Introductions
* [00:02:23] Early experiments with Generative AI at StitchFix
* [00:08:11] Design philosophy behind the Instructor library
* [00:11:12] JSON Mode vs Function Calling
* [00:12:30] Single vs parallel function calling
* [00:14:00] How many functions is too many?
* [00:17:39] How to evaluate function calling
* [00:20:23] What is Instructor good for?
* [00:22:42] The Evolution from Looping to Workflow in AI Engineering
* [00:27:03] State of the AI Engineering Stack
* [00:28:26] Why Instructor isn't VC backed
* [00:31:15] Advice on Pursuing Open Source Projects and Consulting
* [00:36:00] The Concept of High Agency and Its Importance
* [00:42:44] Prompts as Code and the Structure of AI Inputs and Outputs
* [00:44:20] The Emergence of AI Engineering as a Distinct Field

Show notes

* Jason on the UWaterloo mafia
* Jason on Twitter, LinkedIn, website
* Instructor docs
* Max Woolf on the potential of Structured Output
* swyx on Elo vs Cost
* Jason on Anthropic Function Calling
* Jason on Rejections, Advice to Young People
* Jason on Bad Startup Ideas
* Jason on Prompts as Code
* Rysana's inversion models
* Bryan Bischof's episode
* Hamel Husain

Transcript

Alessio [00:00:00]: Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI. Swyx [00:00:16]: Hello, we're back in the remote studio with Jason Liu from Instructor. Welcome Jason. Jason [00:00:21]: Hey there. Thanks for having me. Swyx [00:00:23]: Jason, you are extremely famous, so I don't know what I'm going to do introducing you, but you're one of the Waterloo clan. There's like this small cadre of you that's just completely dominating machine learning. Actually, can you list like Waterloo alums that you're like, you know, are just dominating and crushing it right now? Jason [00:00:39]: So like John from like Rysana is doing his inversion models, right? I know like Clive Chen from Waterloo. When I started the data science club, he was one of the guys who were like joining in and just like hanging out in the room. And now he was at Tesla working with Karpathy, now he's at OpenAI, you know. Swyx [00:00:56]: He's in my climbing club. Jason [00:00:58]: Oh, hell yeah. I haven't seen him in like six years now. Swyx [00:01:01]: To get in the social scene in San Francisco, you have to climb. So both in career and in rocks. So you started a data science club at Waterloo, we can talk about that, but then also spent five years at Stitch Fix as an MLE. You pioneered the use of OpenAI's LLMs to increase stylist efficiency. So you must have been like a very, very early user. This was like pretty early on. Jason [00:01:20]: Yeah, I mean, this was like GPT-3, okay. So we actually were using transformers at Stitch Fix before the GPT-3 model. So we were just using transformers for recommendation systems. At that time, I was very skeptical of transformers. I was like, why do we need all this infrastructure? We can just use like matrix factorization. When GPT-2 came out, I fine tuned my own GPT-2 to write like rap lyrics and I was like, okay, this is cute. Okay, I got to go back to my real job, right? Like who cares if I can write a rap lyric? When GPT-3 came out, again, I was very much like, why are we using like a post request to review every comment a person leaves? Like we can just use classical models. So I was very against language models for like the longest time.
And then when ChatGPT came out, I basically just wrote a long apology letter to everyone at the company. I was like, hey guys, you know, I was very dismissive of some of this technology. I didn't think it would scale well, and I am wrong. This is incredible. And I immediately just transitioned to go from computer vision recommendation systems to LLMs. But funny enough, now that we have RAG, we're kind of going back to recommendation systems. Swyx [00:02:21]: Yeah, speaking of that, I think Alessio is going to bring up the next one. Alessio [00:02:23]: Yeah, I was going to say, we had Bryan Bischof from Hex on the podcast. Did you overlap at Stitch Fix? Jason [00:02:28]: Yeah, he was like one of my main users of the recommendation frameworks that I had built out at Stitch Fix. Alessio [00:02:32]: Yeah, we talked a lot about RecSys, so it makes sense. Swyx [00:02:36]: So now I have adopted that line, RAG is RecSys. And you know, if you're trying to reinvent new concepts, you should study RecSys first, because you're going to independently reinvent a lot of concepts. So your system was called Flight. It's a recommendation framework with over 80% adoption, servicing 350 million requests every day. Wasn't there something existing at Stitch Fix? Why did you have to write one from scratch? Jason [00:02:56]: No, so I think because at Stitch Fix, a lot of the machine learning engineers and data scientists were writing production code, sort of every team's systems were very bespoke. It's like, this team only needs to do like real time recommendations with small data. So they just have like a FastAPI app with some like pandas code. This other team has to do a lot more data. So they have some kind of like Spark job that does some batch ETL that does a recommendation. And so what happens is each team writes their code differently. And I have to come in and refactor their code. And I was like, oh man, I'm refactoring four different code bases, four different times. Wouldn't it be better if all the code quality was my fault? Let me just write this framework, force everyone else to use it. And now one person can maintain five different systems, rather than five teams having their own bespoke system. And so it was really a need of just sort of standardizing everything. And then once you do that, you can do observability across the entire pipeline and make large sweeping improvements in this infrastructure, right? If we notice that something is slow, we can detect it on the operator layer. Just, hey, like this team, this operation you guys are doing is lowering our latency by like 30%. If you just optimize your Python code here, we can probably make an extra million dollars. So let's jump on a call and figure this out. And then a lot of it was doing all this observability work to figure out what the heck is going on and optimize this system from not only just a code perspective, sort of like harassingly or against saying like, we need to add caching here. We're doing duplicated work here. Let's go clean up the systems. Yep. Swyx [00:04:22]: Got it. One more system that I'm interested in finding out more about is your similarity search system using CLIP and GPT-3 embeddings and FAISS, where you saved over $50 million in annual revenue. So of course they all gave all that to you, right? Jason [00:04:34]: No, no, no. I mean, it's not going up and down, but you know, I got a little bit, so I'm pretty happy about that.
But there, you know, that was when we were doing fine tuning like ResNets to do image classification. And so a lot of it was given an image, if we could predict the different attributes we have in the merchandising and we can predict the text embeddings of the comments, then we can kind of build an image vector or image embedding that can capture both descriptions of the clothing and sales of the clothing. And then we would use these additional vectors to augment our recommendation system. And so the recommendation system really was just around like, what are similar items? What are complementary items? What are items that you would wear in a single outfit? And being able to say on a product page, let me show you like 15, 20 more things. And then what we found was like, hey, when you turn that on, you make a bunch of money. Swyx [00:05:23]: Yeah. So, okay. So you didn't actually use GPT-3 embeddings. You fine tuned your own? Because I was surprised that GPT-3 worked off the shelf. Jason [00:05:30]: Because I mean, at this point we would have 3 million pieces of inventory over like a billion interactions between users and clothes. So any kind of fine tuning would definitely outperform like some off the shelf model. Swyx [00:05:41]: Cool. I'm about to move on from Stitch Fix, but you know, any other like fun stories from the Stitch Fix days that you want to cover? Jason [00:05:46]: No, I think that's basically it. I mean, the biggest one really was the fact that I think for just four years, I was so bearish on language models and just NLP in general. I'm just like, none of this really works. Like, why would I spend time focusing on this? I got to go do the thing that makes money, recommendations, bounding boxes, image classification. Yeah. Now I'm like prompting an image model. I was like, oh man, I was wrong. Swyx [00:06:06]: So my Stitch Fix question would be, you know, I think you have a bit of a drip and I don't, you know, my primary wardrobe is free startup conference t-shirts. Should more technology brothers be using Stitch Fix? What's your fashion advice? Jason [00:06:19]: Oh man, I mean, I'm not a user of Stitch Fix, right? It's like, I enjoy going out and like touching things and putting things on and trying them on. Right. I think Stitch Fix is a place where you kind of go because you want the work offloaded. I really love the clothing I buy where I have to like, when I land in Japan, I'm doing like a 45 minute walk up a giant hill to find this weird denim shop. That's the stuff that really excites me. But I think the bigger thing that's really captured is this idea that narrative matters a lot to human beings. Okay. And I think the recommendation system, that's really hard to capture. It's easy to use AI to sell like a $20 shirt, but it's really hard for AI to sell like a $500 shirt. But people are buying $500 shirts, you know what I mean? There's definitely something that we can't really capture just yet that we probably will figure out how to in the future. Swyx [00:07:07]: Well, it'll probably output in JSON, which is what we're going to turn to next. Then you went on a sabbatical to South Park Commons in New York, which is unusual because it's based in SF. Jason [00:07:17]: Yeah. So basically in 2020, really, I was enjoying working a lot as I was like building a lot of stuff. This is where we were making like the tens of millions of dollars doing stuff. And then I had a hand injury. And so I really couldn't code anymore for like a year, two years.
And so I kind of took sort of half of it as medical leave, the other half I became more of like a tech lead, just like making sure the systems were like lights were on. And then when I went to New York, I spent some time there and kind of just like wound down the tech work, you know, did some pottery, did some jujitsu. And after ChatGPT came out, I was like, oh, I clearly need to figure out what is going on here because something feels very magical. I don't understand it. So I spent basically like five months just prompting and playing around with stuff. And then afterwards, it was just my startup friends going like, hey, Jason, you know, my investors want us to have an AI strategy. Can you help us out? And it just snowballed and grew more and more until I was making this my full time job. Yeah, got it. Swyx [00:08:11]: You know, you had YouTube University and a journaling app, you know, a bunch of other explorations. But it seems like the most productive or the best known thing that came out of your time there was Instructor. Yeah. Jason [00:08:22]: Written on the bullet train in Japan. I think at some point, you know, tools like Guardrails and Marvin came out. Those are kind of tools that used XML and Pydantic to get structured data out. But they really were doing things sort of in the prompt. And these are built with sort of the instruct models in mind. Like I'd already done that in the past. Right. At Stitch Fix, you know, one of the things we did was we would take a request note and turn that into a JSON object that we would use to send it to our search engine. Right. So if you said like, I want to, you know, skinny jeans that were this size, that would turn into JSON that we would send to our internal search APIs. But it always felt kind of gross. A lot of it is just like you read the JSON, you like parse it, you make sure the names are strings and ages are numbers and you do all this like messy stuff. But when function calling came out, it was very much sort of a new way of doing things. Right. Function calling lets you define the schema separate from the data and the instructions. And what this meant was you can kind of have a lot more complex schemas and just map them in Pydantic. And then you can just keep those very separate. And then once you add like methods, you can add validators and all that kind of stuff. The one issue I really had with a lot of these libraries, though, was that they were doing a lot of the string formatting themselves, which was fine when it was the instruction to models. You just have a string. But when you have these new chat models, you have these chat messages. And I just didn't really feel like not being able to access that for the developer was sort of a good benefit that they would get. And so I just said, let me write like the most simple SDK around the OpenAI SDK, a simple wrapper on the SDK, just handle the response model a bit and kind of think of myself more like requests than actual framework that people can use. And so the goal is like, hey, like this is something that you can use to build your own framework. But let me just do all the boring stuff that nobody really wants to do. People want to build their own frameworks, but people don't want to build like JSON parsing. Swyx [00:10:08]: And the retrying and all that other stuff. Jason [00:10:10]: Yeah. Swyx [00:10:11]: Right. We had this a little bit of this discussion before the show, but like that design principle of going for being requests rather than being Django. Yeah. So what inspires you there?
This has come from a lot of prior pain. Are there other open source projects that inspired your philosophy here? Yeah.Jason [00:10:25]: I mean, I think it would be requests, right? Like, I think it is just the obvious thing you install. If you were going to go make HTTP requests in Python, you would obviously import requests. Maybe if you want to do more async work, there's like future tools, but you don't really even think about installing it. And when you do install it, you don't think of it as like, oh, this is a requests app. Right? Like, no, this is just Python. The bigger question is, like, a lot of people ask questions like, oh, why isn't requests like in the standard library? Yeah. That's how I want my library to feel, right? It's like, oh, if you're going to use the LLM SDKs, you're obviously going to install instructor. And then I think the second question would be like, oh, like, how come instructor doesn't just go into OpenAI, go into Anthropic? Like, if that's the conversation we're having, like, that's where I feel like I've succeeded. Yeah. It's like, yeah, so standard, you may as well just have it in the base libraries.Alessio [00:11:12]: And the shape of the request stayed the same, but initially function calling was maybe equal structure outputs for a lot of people. I think now the models also support like JSON mode and some of these things and, you know, return JSON or my grandma is going to die. All of that stuff is maybe to decide how have you seen that evolution? Like maybe what's the metagame today? Should people just forget about function calling for structure outputs or when is structure output like JSON mode the best versus not? We'd love to get any thoughts given that you do this every day.Jason [00:11:42]: Yeah, I would almost say these are like different implementations of like the real thing we care about is the fact that now we have typed responses to language models. And because we have that type response, my IDE is a little bit happier. I get autocomplete. If I'm using the response wrong, there's a little red squiggly line. Like those are the things I care about in terms of whether or not like JSON mode is better. I usually think it's almost worse unless you want to spend less money on like the prompt tokens that the function call represents, primarily because with JSON mode, you don't actually specify the schema. So sure, like JSON load works, but really, I care a lot more than just the fact that it is JSON, right? I think function calling gives you a tool to specify the fact like, okay, this is a list of objects that I want and each object has a name or an age and I want the age to be above zero and I want to make sure it's parsed correctly. That's where kind of function calling really shines.Alessio [00:12:30]: Any thoughts on single versus parallel function calling? So I did a presentation at our AI in Action Discord channel, and obviously showcase instructor. One of the big things that we have before with single function calling is like when you're trying to extract lists, you have to make these funky like properties that are lists to then actually return all the objects. How do you see the hack being put on the developer's plate versus like more of this stuff just getting better in the model? And I know you tweeted recently about Anthropic, for example, you know, some lists are not lists or strings and there's like all of these discrepancies.Jason [00:13:04]: I almost would prefer it if it was always a single function call. 
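A hypothetical sketch of the single-schema extraction style being described here, reusing the patched client from the sketch in the episode summary above; the Person and People models are ours, not from the episode:

```python
# Single-schema list extraction: one wrapper model with a shared chain of
# thought and a list field, rather than one parallel function call per item.
from pydantic import BaseModel, Field


class Person(BaseModel):
    name: str
    age: int = Field(gt=0, description="Age in years; must be above zero")


class People(BaseModel):
    chain_of_thought: str  # one shared reasoning pass before the whole list
    people: list[Person]


result = client.chat.completions.create(
    model="gpt-4-turbo",
    response_model=People,
    messages=[{"role": "user", "content": "Alice is 31 and her son Bob is 4."}],
)
print([p.name for p in result.people])  # e.g. ['Alice', 'Bob']
```

Because everything comes back as one object, the wrapper can also carry cross-item fields, such as a shared chain of thought or relationships between entities, that parallel function calls cannot express.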
Obviously, there is like the agents workflows that, you know, Instructor doesn't really support that well, but are things that, you know, ought to be done, right? Like you could define, I think maybe like 50 or 60 different functions in a single API call. And, you know, if it was like get the weather or turn the lights on or do something else, it makes a lot of sense to have these parallel function calls. But in terms of an extraction workflow, I definitely think it's probably more helpful to have everything be a single schema, right? Just because you can sort of specify relationships between these entities that you can't do in a parallel function calling, you can have a single chain of thought before you generate a list of results. Like there's like small like API differences, right? Where if it's for parallel function calling, if you do one, like again, really, I really care about how the SDK looks and says, okay, do I always return a list of functions or do you just want to have the actual object back out and you want to have like auto complete over that object? Interesting. Alessio [00:14:00]: What's kind of the cap for like how many function definitions you can put in where it still works well? Do you have any sense on that? Jason [00:14:07]: I mean, for the most part, I haven't really had a need to do anything that's more than six or seven different functions. I think in the documentation, they support way more. I don't even know if there's any good evals that have over like two dozen function calls. I think if you're running into issues where you have like 20 or 50 or 60 function calls, I think you're much better off having those specifications saved in a vector database and then have them be retrieved, right? So if there are 30 tools, like you should basically be like ranking them and then using the top K to do selection a little bit better rather than just like shoving like 60 functions into a single. Yeah. Swyx [00:14:40]: Yeah. Well, I mean, so I think this is relevant now because previously I think context limits prevented you from having more than a dozen tools anyway. And now that we have million token context windows, you know, Claude recently, with their new function calling release, said they can handle over 250 tools, which is insane to me. That's, that's a lot. You're saying like, you know, you don't think there's many people doing that. I think anyone with a sort of agent-like platform where you have a bunch of connectors, they wouldn't run into that problem. Probably you're right that they should use a vector database and kind of RAG their tools. I know Zapier has like a few thousand, like 8,000, 9,000 connectors that, you know, obviously don't fit anywhere. So yeah, I mean, I think that would be it unless you need some kind of intelligence that chains things together, which is, I think what Alessio is coming back to, right? Like there's this trend about parallel function calling. I don't know what I think about that. Anthropic's version was, I think they use multiple tools in sequence, but they're not in parallel. I haven't explored this at all. I'm just like throwing this open to you as to like, what do you think about all these new things? Yeah. Jason [00:15:40]: It's like, you know, do we assume that all function calls could happen in any order? In which case, like we either can assume that, or we can assume that like things need to happen in some kind of sequence as a DAG, right?
But if it's a DAG, really that's just like one JSON object that is the entire DAG rather than going like, okay, the order in which the functions return doesn't matter. That's definitely just not true in practice, right? Like if I have a thing that's like turn the lights on, like unplug the power, and then like turn the toaster on or something like the order doesn't matter. And it's unclear how well you can describe the importance of that reasoning to a language model yet. I mean, I'm sure you can do it with like good enough prompting, but I just haven't found any use cases where the function sequence really matters. Yeah. Alessio [00:16:18]: To me, the most interesting thing is the models are better at picking than your ranking is usually. Like I'm incubating a company around system integration. For example, with one system, there are like 780 endpoints. And if you're actually trying to do vector similarity, it's not that good because the people that wrote the specs didn't have in mind making them like semantically apart. You know, they're kind of like, oh, create this, create this, create this. Versus when you give it to a model, like in Opus, you put them all, it's quite good at picking which ones you should actually run. And I'm curious to see if the model providers actually care about some of those workflows or if the agent companies are actually going to build very good rankers to kind of fill that gap. Jason [00:16:58]: Yeah. My money is on the rankers because you can do those so easily, right? You could just say, well, given the embeddings of my search query and the embeddings of the description, I can just train XGBoost and just make sure that I have very high like MRR, which is like mean reciprocal rank. And so the only objective is to make sure that the tools you use are in the top end filtered. Like that feels super straightforward and you don't have to actually figure out how to fine tune a language model to do tool selection anymore. Yeah. I definitely think that's the case because for the most part, I imagine you either have like less than three tools or more than a thousand. I don't know what kind of company said, oh, thank God we only have like 185 tools and this works perfectly, right? That's right. Alessio [00:17:39]: And before we maybe move on just from this, it was interesting to me, you retweeted this thing about Anthropic function calling and it was Joshua Brown's retweeting some benchmark that it's like, oh my God, Anthropic function calling so good. And then you retweeted it and then you tweeted it later and it's like, it's actually not that good. What's your flow? How do you actually test these things? Because obviously the benchmarks are lying, right? Because the benchmarks say it's good and you said it's bad and I trust you more than the benchmark. How do you think about that? And then how do you evolve it over time? Jason [00:18:09]: It's mostly just client data. I actually have been mostly busy with enough client work that I haven't been able to reproduce public benchmarks. And so I can't even share some of the results in Anthropic. I would just say like in production, we have some pretty interesting schemas where it's like iteratively building lists where we're doing like updates of lists, like we're doing in place updates. So like upserts and inserts. And in those situations we're like, oh yeah, we have a bunch of different parsing errors. Numbers are being returned as strings.
We were expecting lists of objects, but we're getting strings that are like the strings of JSON, right? So we had to call JSON parse on individual elements. Overall, I'm like super happy with the Anthropic models compared to the OpenAI models. Sonnet is very cost effective. Haiku is in function calling, it's actually better, but I think they just had to sort of file down the edges a little bit where like our tests pass, but then we actually deployed a production. We got half a percent of traffic having issues where if you ask for JSON, it'll try to talk to you. Or if you use function calling, you know, we'll have like a parse error. And so I think that definitely gonna be things that are fixed in like the upcoming weeks. But in terms of like the reasoning capabilities, man, it's hard to beat like 70% cost reduction, especially when you're building consumer applications, right? If you're building something for consultants or private equity, like you're charging $400, it doesn't really matter if it's a dollar or $2. But for consumer apps, it makes products viable. If you can go from four to Sonnet, you might actually be able to price it better. Yeah.Swyx [00:19:31]: I had this chart about the ELO versus the cost of all the models. And you could put trend graphs on each of those things about like, you know, higher ELO equals higher cost, except for Haiku. Haiku kind of just broke the lines, or the ISO ELOs, if you want to call it. Cool. Before we go too far into your opinions on just the overall ecosystem, I want to make sure that we map out the surface area of Instructor. I would say that most people would be familiar with Instructor from your talks and your tweets and all that. You had the number one talk from the AI Engineer Summit.Jason [00:20:03]: Two Liu. Jason Liu and Jerry Liu. Yeah.Swyx [00:20:06]: Yeah. Until I actually went through your cookbook, I didn't realize the surface area. How would you categorize the use cases? You have LLM self-critique, you have knowledge graphs in here, you have PII data sanitation. How do you characterize to people what is the surface area of Instructor? Yeah.Jason [00:20:23]: This is the part that feels crazy because really the difference is LLMs give you strings and Instructor gives you data structures. And once you get data structures, again, you can do every lead code problem you ever thought of. Right. And so I think there's a couple of really common applications. The first one obviously is extracting structured data. This is just be, okay, well, like I want to put in an image of a receipt. I want to give it back out a list of checkout items with a price and a fee and a coupon code or whatever. That's one application. Another application really is around extracting graphs out. So one of the things we found out about these language models is that not only can you define nodes, it's really good at figuring out what are nodes and what are edges. And so we have a bunch of examples where, you know, not only do I extract that, you know, this happens after that, but also like, okay, these two are dependencies of another task. And you can do, you know, extracting complex entities that have relationships. Given a story, for example, you could extract relationships of families across different characters. This can all be done by defining a graph. The last really big application really is just around query understanding. 
The idea is that like any API call has some schema and if you can define that schema ahead of time, you can use a language model to resolve a request into a much more complex request. One that an embedding could not do. So for example, I have a really popular post called like rag is more than embeddings. And effectively, you know, if I have a question like this, what was the latest thing that happened this week? That embeds to nothing, right? But really like that query should just be like select all data where the date time is between today and today minus seven days, right? What if I said, how did my writing change between this month and last month? Again, embeddings would do nothing. But really, if you could do like a group by over the month and a summarize, then you could again like do something much more interesting. And so this really just calls out the fact that embeddings really is kind of like the lowest hanging fruit. And using something like instructor can really help produce a data structure. And then you can just use your computer science and reason about the data structure. Maybe you say, okay, well, I'm going to produce a graph where I want to group by each month and then summarize them jointly. You can do that if you know how to define this data structure. Yeah.Swyx [00:22:29]: So you kind of run up against like the LangChains of the world that used to have that. They still do have like the self querying, I think they used to call it when we had Harrison on in our episode. How do you see yourself interacting with the other LLM frameworks in the ecosystem? Yeah.Jason [00:22:42]: I mean, if they use instructor, I think that's totally cool. Again, it's like, it's just Python, right? It's like asking like, oh, how does like Django interact with requests? Well, you just might make a request.get in a Django app, right? But no one would say, I like went off of Django because I'm using requests now. They should be ideally like sort of the wrong comparison in terms of especially like the agent workflows. I think the real goal for me is to go down like the LLM compiler route, which is instead of doing like a react type reasoning loop. I think my belief is that we should be using like workflows. If we do this, then we always have a request and a complete workflow. We can fine tune a model that has a better workflow. Whereas it's hard to think about like, how do you fine tune a better react loop? Yeah. You always train it to have less looping, in which case like you wanted to get the right answer the first time, in which case it was a workflow to begin with, right?Swyx [00:23:31]: Can you define workflow? Because I used to work at a workflow company, but I'm not sure this is a good term for everybody.Jason [00:23:36]: I'm thinking workflow in terms of like the prefect Zapier workflow. Like I want to build a DAG, I want you to tell me what the nodes and edges are. And then maybe the edges are also put in with AI. But the idea is that like, I want to be able to present you the entire plan and then ask you to fix things as I execute it, rather than going like, hey, I couldn't parse the JSON, so I'm going to try again. I couldn't parse the JSON, I'm going to try again. And then next thing you know, you spent like $2 on opening AI credits, right? Yeah. Whereas with the plan, you can just say, oh, the edge between node like X and Y does not run. Let me just iteratively try to fix that, fix the one that sticks, go on to the next component. 
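As a hypothetical sketch of that plan-as-DAG idea, again reusing the patched client from the earlier sketches (the Node, Edge, and Plan models are ours, not Jason's):

```python
# Plan-as-data: ask for the whole workflow up front as one typed DAG, then
# execute and repair individual edges instead of looping and hoping.
from pydantic import BaseModel


class Node(BaseModel):
    id: int
    task: str


class Edge(BaseModel):
    source: int
    target: int  # the target node runs after the source node


class Plan(BaseModel):
    nodes: list[Node]
    edges: list[Edge]


plan = client.chat.completions.create(
    model="gpt-4-turbo",
    response_model=Plan,
    messages=[{"role": "user", "content": "Plan how to research, write, and publish a blog post."}],
)

# A failing edge can now be retried or regenerated in isolation, and good
# few-shot examples for a node can be retrieved from a vector database.
for edge in plan.edges:
    print(f"run node {edge.source} before node {edge.target}")
```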
And obviously you can get into a world where if you have enough examples of the nodes X and Y, maybe you can use like a vector database to find a good few shot examples. You can do a lot if you sort of break down the problem into that workflow and executing that workflow, rather than looping and hoping the reasoning is good enough to generate the correct output. Yeah.Swyx [00:24:35]: You know, I've been hammering on Devon a lot. I got access a couple of weeks ago. And obviously for simple tasks, it does well. For the complicated, like more than 10, 20 hour tasks, I can see- That's a crazy comparison.Jason [00:24:47]: We used to talk about like three, four loops. Only once it gets to like hour tasks, it's hard.Swyx [00:24:54]: Yeah. Less than an hour, there's nothing.Jason [00:24:57]: That's crazy.Swyx [00:24:58]: I mean, okay. Maybe my goalposts have shifted. I don't know. That's incredible.Jason [00:25:02]: Yeah. No, no. I'm like sub one minute executions. Like the fact that you're talking about 10 hours is incredible.Swyx [00:25:08]: I think it's a spectrum. I think I'm going to say this every single time I bring up Devon. Let's not reward them for taking longer to do things. Do you know what I mean? I think that's a metric that is easily abusable.Jason [00:25:18]: Sure. Yeah. You know what I mean? But I think if you can monotonically increase the success probability over an hour, that's winning to me. Right? Like obviously if you run an hour and you've made no progress. Like I think when we were in like auto GBT land, there was that one example where it's like, I wanted it to like buy me a bicycle overnight. I spent $7 on credit and I never found the bicycle. Yeah.Swyx [00:25:41]: Yeah. Right. I wonder if you'll be able to purchase a bicycle. Because it actually can do things in real world. It just needs to suspend to you for off and stuff. The point I was trying to make was that I can see it turning plans. I think one of the agents loopholes or one of the things that is a real barrier for agents is LLMs really like to get stuck into a lane. And you know what you're talking about, what I've seen Devon do is it gets stuck in a lane and it will just kind of change plans based on the performance of the plan itself. And it's kind of cool.Jason [00:26:05]: I feel like we've gone too much in the looping route and I think a lot of more plans and like DAGs and data structures are probably going to come back to help fill in some holes. Yeah.Alessio [00:26:14]: What do you think of the interface to that? Do you see it's like an existing state machine kind of thing that connects to the LLMs, the traditional DAG players? Do you think we need something new for like AI DAGs?Jason [00:26:25]: Yeah. I mean, I think that the hard part is going to be describing visually the fact that this DAG can also change over time and it should still be allowed to be fuzzy. I think in like mathematics, we have like plate diagrams and like Markov chain diagrams and like recurrent states and all that. Some of that might come into this workflow world. But to be honest, I'm not too sure. I think right now, the first steps are just how do we take this DAG idea and break it down to modular components that we can like prompt better, have few shot examples for and ultimately like fine tune against. But in terms of even the UI, it's hard to say what it will likely win. I think, you know, people like Prefect and Zapier have a pretty good shot at doing a good job.Swyx [00:27:03]: Yeah. You seem to use Prefect a lot. 
I actually worked at a Prefect competitor at Temporal and I'm also very familiar with Dagster. What else would you call out as like particularly interesting in the AI engineering stack? Jason [00:27:13]: Man, I almost use nothing. I just use Cursor and like pytest. Okay. I think that's basically it. You know, a lot of the observability companies have... The more observability companies I've tried, the more I just use Postgres. Swyx [00:27:29]: Really? Okay. Postgres for observability? Jason [00:27:32]: But the issue really is the fact that these observability companies aren't actually doing observability for the system. It's just doing the LLM thing. Like I still end up using like Datadog or like, you know, Sentry to do like latency. And so I just have those systems handle it. And then the like prompt in, prompt out, latency, token costs. I just put that in like a Postgres table now. Swyx [00:27:51]: So you don't need like 20 funded startups building LLM ops? Yeah. Jason [00:27:55]: But I'm also like an old, tired guy. You know what I mean? Like I think because of my background, it's like, yeah, like the Python stuff, I'll write myself. But you know, I will also just use Vercel happily. Yeah. Yeah. So I'm not really into that world of tooling, whereas I think, you know, I spent three good years building observability tools for recommendation systems. And I was like, oh, compared to that, Instructor is just one call. I just have to put time start, time end, and then count the prompt tokens, right? Because I'm not doing a very complex looping behavior. I'm doing mostly workflows and extraction. Yeah. Swyx [00:28:26]: I mean, while we're on this topic, we'll just kind of get this out of the way. You famously have decided to not be a venture backed company. You want to do the consulting route. The obvious route for someone as successful as Instructor is like, oh, here's hosted Instructor with all tooling. Yeah. You just said you had a whole bunch of experience building observability tooling. You have the perfect background to do this and you're not. Jason [00:28:43]: Yeah. Isn't that sick? I think that's sick. Swyx [00:28:44]: I mean, I know why, because you want to go free dive. Jason [00:28:47]: Yeah. Yeah. Because I think there's two things. Right. Well, one, if I tell myself I want to build requests, requests is not a venture backed startup. Right. I mean, one could argue whether or not Postman is, but I think for the most part, it's like having worked so much, I'm more interested in looking at how systems are being applied and just having access to the most interesting data. And I think I can do that more through a consulting business where I can come in and go, oh, you want to build perfect memory. You want to build an agent. You want to build like automations over construction or like insurance and supply chain, or like you want to handle writing private equity, mergers and acquisitions reports based off of user interviews. Those things are super fun. Whereas like maintaining the library, I think is mostly just kind of like a utility that I try to keep up, especially because if it's not venture backed, I have no reason to sort of go down the route of like trying to get a thousand integrations. In my mind, I just go like, okay, 98% of the people use OpenAI. I'll support that. And if someone contributes another platform, that's great. I'll merge it in. Yeah. Swyx [00:29:45]: I mean, you only added Anthropic support this year. Yeah. Jason [00:29:47]: Yeah.
You couldn't even get an API key until like this year, right? That's true. Okay. If I added it like last year, I was trying to like double the code base to service, you know, half a percent of all downloads. Swyx [00:29:58]: Do you think the market share will shift a lot now that Anthropic has like a very, very competitive offering? Jason [00:30:02]: I think it's still hard to get API access. I don't know if it's fully GA now, if it's GA, if you can get commercial access really easily. Alessio [00:30:12]: I got commercial after like two weeks to reach out to their sales team. Jason [00:30:14]: Okay. Alessio [00:30:15]: Yeah. Swyx [00:30:16]: Two weeks. It's not too bad. There's a call list here. And then anytime you run into rate limits, just like ping one of the Anthropic staff members. Jason [00:30:21]: Yeah. Then maybe we need to like cut that part out. So I don't need to like, you know, spread false news. Swyx [00:30:25]: No, it's cool. It's cool. Jason [00:30:26]: But it's a common question. Yeah. Surely just from the price perspective, it's going to make a lot of sense. Like if you are a business, you should totally consider like Sonnet, right? Like the cost savings is just going to justify it if you actually are doing things at volume. And yeah, I think the SDK is like pretty good. Back to the instructor thing. I just don't think it's a billion dollar company. And I think if I raise money, the first question is going to be like, how are you going to get a billion dollar company? And I would just go like, man, like if I make a million dollars as a consultant, I'm super happy. I'm like more than ecstatic. I can have like a small staff of like three people. It's fun. And I think a lot of my happiest founder friends are those who like raised a tiny seed round, became profitable. They're making like 60, 70 thousand MRR and they're like, we don't even need to raise the seed round. Let's just keep it like between me and my co-founder, we'll go traveling and it'll be a great time. I think it's a lot of fun. Alessio [00:31:15]: Yeah.
like say LLMs / AI and they build some open source stuff and it's like I should just raise money and do this and I tell people a lot it's like look you can make a lot more money doing something else than doing a startup like most people that do a company could make a lot more money just working somewhere else than the company itself do you have any advice for folks that are maybe in a similar situation they're trying to decide oh should I stay in my like high paid FAANG job and just tweet this on the side and do this on github should I go be a consultant like being a consultant seems like a lot of work so you got to talk to all these people you know there's a lot to unpackJason [00:31:54]: I think the open source thing is just like well I'm just doing it purely for fun and I'm doing it because I think I'm right but part of being right is the fact that it's not a venture backed startup like I think I'm right because this is all you need right so I think a part of the philosophy is the fact that all you need is a very sharp blade to sort of do your work and you don't actually need to build like a big enterprise so that's one thing I think the other thing too that I've kind of been thinking around just because I have a lot of friends at google that want to leave right now it's like man like what we lack is not money or skill like what we lack is courage you should like you just have to do this a hard thing and you have to do it scared anyways right in terms of like whether or not you do want to do a founder I think that's just a matter of optionality but I definitely recognize that the like expected value of being a founder is still quite low it is right I know as many founder breakups and as I know friends who raised a seed round this year right like that is like the reality and like you know even in from that perspective it's been tough where it's like oh man like a lot of incubators want you to have co-founders now you spend half the time like fundraising and then trying to like meet co-founders and find co-founders rather than building the thing this is a lot of time spent out doing uh things I'm not really good at. 
I do think there's a rising trend in solo founding yeah.Swyx [00:33:06]: You know I am a solo I think that something like 30 percent of like I forget what the exact status something like 30 percent of starters that make it to like series B or something actually are solo founder I feel like this must have co-founder idea mostly comes from YC and most everyone else copies it and then plenty of companies break up over co-founderJason [00:33:27]: Yeah and I bet it would be like I wonder how much of it is the people who don't have that much like and I hope this is not a diss to anybody but it's like you sort of you go through the incubator route because you don't have like the social equity you would need is just sort of like send an email to Sequoia and be like hey I'm going on this ride you want a ticket on the rocket ship right like that's very hard to sell my message if I was to raise money is like you've seen my twitter my life is sick I've decided to make it much worse by being a founder because this is something I have to do so do you want to come along otherwise I want to fund it myself like if I can't say that like I don't need the money because I can like handle payroll and like hire an intern and get an assistant like that's all fine but I really don't want to go back to meta I want to like get two years to like try to find a problem we're solving that feels like a bad timeAlessio [00:34:12]: Yeah Jason is like I wear a YSL jacket on stage at AI Engineer Summit I don't need your accelerator moneyJason [00:34:18]: And boots, you don't forget the boots. But I think that is a part of it right I think it is just like optionality and also just like I'm a lot older now I think 22 year old Jason would have been probably too scared and now I'm like too wise but I think it's a matter of like oh if you raise money you have to have a plan of spending it and I'm just not that creative with spending that much money yeah I mean to be clear you just celebrated your 30th birthday happy birthday yeah it's awesome so next week a lot older is relative to some some of the folks I think seeing on the career tipsAlessio [00:34:48]: I think Swix had a great post about are you too old to get into AI I saw one of your tweets in January 23 you applied to like Figma, Notion, Cohere, Anthropic and all of them rejected you because you didn't have enough LLM experience I think at that time it would be easy for a lot of people to say oh I kind of missed the boat you know I'm too late not gonna make it you know any advice for people that feel like thatJason [00:35:14]: Like the biggest learning here is actually from a lot of folks in jiu-jitsu they're like oh man like is it too late to start jiu-jitsu like I'll join jiu-jitsu once I get in more shape right it's like there's a lot of like excuses and then you say oh like why should I start now I'll be like 45 by the time I'm any good and say well you'll be 45 anyways like time is passing like if you don't start now you start tomorrow you're just like one more day behind if you're worried about being behind like today is like the soonest you can start right and so you got to recognize that like maybe you just don't want it and that's fine too like if you wanted you would have started I think a lot of these people again probably think of things on a too short time horizon but again you know you're gonna be old anyways you may as well just start now you knowSwyx [00:35:55]: One more thing on I guess the um career advice slash sort of vlogging you always go viral for 
this post that you wrote on advice to young people and the lies you tell yourself oh yeah yeah you said you were writing it for your sister.Jason [00:36:05]: She was like bummed out about going to college and like stressing about jobs and I was like oh and I really want to hear okay and I just kind of like text-to-sweep the whole thing it's crazy it's got like 50,000 views like I'm mind I mean your average tweet has more but that thing is like a 30-minute read nowSwyx [00:36:26]: So there's lots of stuff here which I agree with I you know I'm also of occasionally indulge in the sort of life reflection phase there's the how to be lucky there's the how to have high agency I feel like the agency thing is always a trend in sf or just in tech circles how do you define having high agencyJason [00:36:42]: I'm almost like past the high agency phase now now my biggest concern is like okay the agency is just like the norm of the vector what also matters is the direction right it's like how pure is the shot yeah I mean I think agency is just a matter of like having courage and doing the thing that's scary right you know if people want to go rock climbing it's like do you decide you want to go rock climbing then you show up to the gym you rent some shoes and you just fall 40 times or do you go like oh like I'm actually more intelligent let me go research the kind of shoes that I want okay like there's flatter shoes and more inclined shoes like which one should I get okay let me go order the shoes on Amazon I'll come back in three days like oh it's a little bit too tight maybe it's too aggressive I'm only a beginner let me go change no I think the higher agent person just like goes and like falls down 20 times right yeah I think the higher agency person is more focused on like process metrics versus outcome metrics right like from pottery like one thing I learned was if you want to be good at pottery you shouldn't count like the number of cups or bowls you make you should just weigh the amount of clay you use right like the successful person says oh I went through 100 pounds of clay right the less agency was like oh I've made six cups and then after I made six cups like there's not really what are you what do you do next no just pounds of clay pounds of clay same with the work here right so you just got to write the tweets like make the commits contribute open source like write the documentation there's no real outcome it's just a process and if you love that process you just get really good at the thing you're doingSwyx [00:38:04]: yeah so just to push back on this because obviously I mostly agree how would you design performance review systems because you were effectively saying we can count lines of code for developers rightJason [00:38:15]: I don't think that would be the actual like I think if you make that an outcome like I can just expand a for loop right I think okay so for performance review this is interesting because I've mostly thought of it from the perspective of science and not engineering I've been running a lot of engineering stand-ups primarily because there's not really that many machine learning folks the process outcome is like experiments and ideas right like if you think about outcome is what you might want to think about an outcome is oh I want to improve the revenue or whatnot but that's really hard but if you're someone who is going out like okay like this week I want to come up with like three or four experiments I might move the needle okay nothing worked to them they might 
think oh nothing worked like I suck but to me it's like wow you've closed off all these other possible avenues for like research like you're gonna get to the place that you're gonna figure out that direction really soon there's no way you try 30 different things and none of them work usually like 10 of them work five of them work really well two of them work really really well and one thing was like the nail in the head so agency lets you sort of capture the volume of experiments and like experience lets you figure out like oh that other half it's not worth doing right I think experience is going like half these prompting papers don't make any sense just use chain of thought and just you know use a for loop that's basically right it's like usually performance for me is around like how many experiments are you running how oftentimes are you trying.Alessio [00:39:32]: When do you give up on an experiment because a StitchFix you kind of give up on language models I guess in a way as a tool to use and then maybe the tools got better you were right at the time and then the tool improved I think there are similar paths in my engineering career where I try one approach and at the time it doesn't work and then the thing changes but then I kind of soured on that approach and I don't go back to it soonJason [00:39:51]: I see yeah how do you think about that loop so usually when I'm coaching folks and as they say like oh these things don't work I'm not going to pursue them in the future like one of the big things like hey the negative result is a result and this is something worth documenting like this is an academia like if it's negative you don't just like not publish right but then like what do you actually write down like what you should write down is like here are the conditions this is the inputs and the outputs we tried the experiment on and then one thing that's really valuable is basically writing down under what conditions would I revisit these experiments these things don't work because of what we had at the time if someone is reading this two years from now under what conditions will we try again that's really hard but again that's like another skill you kind of learn right it's like you do go back and you do experiments you figure out why it works now I think a lot of it here is just like scaling worked yeah rap lyrics you know that was because I did not have high enough quality data if we phase shift and say okay you don't even need training data oh great then it might just work a different domainAlessio [00:40:48]: Do you have anything in your list that is like it doesn't work now but I want to try it again later? 
Alessio [00:40:48]: Do you have anything on your list that doesn't work now, but that you want to try again later? Something people should maybe keep in mind. You know, people always ask about AGI, like when will you know AGI is here? Maybe it's less than that, but is there any stuff you tried recently that didn't work, that you think will get there?

Jason [00:41:01]: I mean, I think the personal assistant and the writing; I've shown to myself it's just not good enough yet. So I hired a writer and I hired a personal assistant, and now I'm going to work with these people until I figure out what I can actually automate and what the reproducible steps are. The experiment for me is: I'm going to pay a person a thousand dollars a month to help me improve my life, and then get them to help me figure out what the components are and how I actually modularize something to get it to work. Because it's not just Gmail, Calendar, and Notion; it's a little more complicated than that, and we just don't know what it is yet. Those are two systems where I wish GPT-4 or Opus were actually good enough to just write me an essay, but most of the essays are still pretty bad.

Swyx [00:41:44]: Yeah, I would say, on the personal assistant side, Lindy is probably the one I've seen the most; Flo was a speaker at the summit. I don't know if you've checked it out, or any other sort of agent-assistant startup?

Jason [00:41:54]: Not recently. I haven't tried Lindy; they were not GA last time I was considering it. A lot of it now is like, oh, really what I want you to do is take a look at all of my meetings and write a really good weekly summary email for my clients, to remind them that I'm, you know, thinking of them and working for them. Or it's like, I want you to notice that my Monday is way too packed, block out more time, email the people to reschedule, and get them to opt in to being moved around. And then I want you to say, oh, Jason should have a 15-minute prep break after four back-to-backs. Those are things I now know I can prompt for, but can it do them well? Before, I didn't even know that's what I wanted to prompt for: defragging a calendar and adding breaks so I can eat lunch. Yeah, that's the AGI test. Exactly, compassion, right?

Alessio [00:42:44]: I think one thing that, yeah, we didn't touch on before, but I think was interesting: you had this tweet a while ago about "prompts should be code," and then there were a lot of companies trying to build prompt engineering tooling, trying to turn the prompt into a more structured thing. What's your thought today, now that you want to turn the thinking into DAGs? Should prompts still be code? Any updated ideas?

Jason [00:43:04]: It's the same thing, right? With Instructor, the output model is defined as a code object; that code object is sent to the LLM, and in return you get a data structure. So the outputs of these models, I think, should also be code objects, and the inputs somewhat should be code objects too. The one thing Instructor tries to do is separate the instruction, the data, and the types of the output. Beyond that, I really just think most of it should still be managed pretty closely by the developer, because so much is changing that if you give control of these systems away too early, you end up ultimately wanting it back. Many companies I know reach out and say, oh, we're going off of the frameworks, because now that we know what business outcomes we're trying to optimize for, these frameworks don't work. We do RAG, but we want to do RAG to sell you supplements, or to have you schedule the fitness appointment, and the prompts are too baked into the systems to really pull them back out and start doing upselling or something. It's really funny, but a lot of it ends up being: once you understand the business outcomes, you care way more about the prompt.
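[Editor's note: for readers who haven't seen Instructor, here is a minimal sketch of the pattern Jason describes, where the output model is a code object and the response comes back as a validated data structure. The UserInfo model and the prompt are invented for illustration, the model name is a placeholder, and this assumes Instructor's from_openai entry point.]

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

# The output model is a code object: the types live here,
# separate from the instruction and the data.
class UserInfo(BaseModel):
    name: str
    age: int

# Patch the OpenAI client so responses are parsed and validated.
client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)
print(user.name, user.age)  # a typed data structure, not raw text
```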
Swyx [00:44:07]: Actually, this is fun: in our prep for this call, we were trying to figure out what you, as an independent person, can say that maybe me and Alessio, or someone at a company, cannot. What do you think is the market share of the frameworks: the LangChain, the LlamaIndex, the everything?

Jason [00:44:20]: Oh, massive, because not everyone wants to care about the code. I think that's a different question from what the business model is, and whether they're going to be massively profitable businesses making hundreds of millions of dollars. That part feels straightforward, because not everyone is a prompt engineer; there's so much productivity to be captured in back-office automations. It's not that they care about the prompts, it's that they care about managing these things, and those would be sort of low-code experiences. I think the bigger challenge is: okay, a hundred million dollars is probably pretty easy, it's just time and effort, and they have the manpower and the money to solve those problems. But if you go the VC route, then you're talking about billions, and that's really the goal; that stuff, for me, is pretty unclear. Then again, that is to say that I sort of am building things for developers who want to use infrastructure to build their own tooling, in terms of the number of developers in the world versus downstream consumers of these things. Or just think of how many companies will use the Adobes and the IBMs, because they want something fully managed, something they know will work, and if the incremental 10% requires you to hire another team of 20 people, you might not want to do it. That kind of fully managed offering is really good for the bigger companies.

Swyx [00:45:32]: I just want to capture your thoughts on one more thing. You said you wanted most of the prompts to stay close to the developer, and Hamel Husain wrote this post which I really love, called "F you, show me the prompt"; I think he cites you in part of the blog post. And I think DSPy is kind of the complete antithesis of that, which is interesting, because I also hold the strong view that AI is a better prompt engineer than you are, and I don't know how to square that. Wondering if you have thoughts.

Jason [00:45:58]: I think something like DSPy can work because there are very short-term metrics to measure success: did you find the PII, did you write the multi-hop question the correct way. But in the workflows I've been managing, a lot of it is, are we minimizing churn and maximizing retention? That's a very long loop. It's not really an Optuna-style tuning loop; those signals are much harder to capture, so we don't actually have those metrics. And obviously we can figure out, okay, is the summary good, but how do you measure the quality of a summary? That feedback loop ends up being a lot longer. And then again, when something changes, it's really hard to make sure everything still works across newer models, or across changes to the current process. Like when we migrate from Anthropic to OpenAI, there's just a ton of changes that are infrastructure-related, not necessarily around the prompt itself.
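[Editor's note: to make the contrast concrete, here is a rough sketch of the DSPy pattern being discussed: a program compiled against a short-term, immediately checkable metric like PII detection. The signature, metric, and training example are invented for illustration, and the LM configuration and optimizer assume DSPy's circa-2024 API.]

```python
import dspy

# Configure the language model (model name is a placeholder).
dspy.settings.configure(lm=dspy.OpenAI(model="gpt-4"))

# A declarative signature instead of a hand-written prompt.
find_pii = dspy.ChainOfThought("document -> pii_entities")

# A short-term metric: success is checkable right away,
# unlike long-loop outcomes such as churn or retention.
def pii_metric(example, prediction, trace=None):
    return example.pii_entities.lower() in prediction.pii_entities.lower()

# A tiny, illustrative training set.
trainset = [
    dspy.Example(
        document="Contact Jane Doe at jane.doe@example.com for details.",
        pii_entities="jane.doe@example.com",
    ).with_inputs("document"),
]

# The optimizer rewrites the prompt against the metric; this fast
# feedback loop is exactly what Jason says his workflows lack.
optimizer = dspy.teleprompt.BootstrapFewShot(metric=pii_metric)
compiled_pii = optimizer.compile(find_pii, trainset=trainset)
```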
Yeah, cool. Any other AI engineering startups that you think should not exist, before we wrap up?

Jason: I mean, oh my gosh, a lot of them. Again, every time investors ask, "how does this make a billion dollars?", it doesn't, and I'm going to go back to just tweeting and holding my breath underwater. I don't really pay attention too much to most of this; most of the stuff I'm doing is around the consumer of LLM calls. I think people just want to move really fast, and they will end up picking these vendors, but I don't really know if anything has really blown me out of the water. I only trust myself, but that's also a function of just being an old man. I think many companies are definitely very happy using most of these tools anyway, but I definitely think I occupy a very small space in the engineering ecosystem.

Swyx [00:47:41]: Yeah, I would say one of the challenges here, you know, you talk about dealing in the consumer-of-LLMs space; I think that's where AI engineering differs from ML engineering. And I think a constant disconnect, or cognitive dissonance, in this field, among the AI engineers who have sprung up, is that they are not as good as the ML engineers, not as qualified. You are someone who has credibility in the MLE space, and you are also a very authoritative figure in the AI space, and you've built the de facto leading library. I think Instructor should be part of the standard lib, even though I try not to use it; I basically also end up rebuilding Instructor, and that's a lot of the back and forth we've had over the past two days. I think that's the fundamental thing we're trying to figure out: there's a very small supply of MLEs, and not everyone is going to have the experience you had, but the global demand for AI is going to far outstrip the existing MLEs.

Jason [00:48:36]: So what do we do? Do we force everyone to go through the standard MLE curriculum, or do we make a new one? I'

Clotheshorse
Episode 191: Fast Jewelry, Knockoffs, and Net 60 with Emily Li Mandri of MLE

Clotheshorse

Play Episode Listen Later Feb 12, 2024 139:51


Emily Li Mandri, founder and designer behind MLE, joins Amanda to talk about all things accessories and jewelry, including:
- What is costume jewelry? And why is metal content important?
- The drawbacks of "fast jewelry"
- What are the challenges of running a small, ethical accessories brand?
- How are knockoffs and copycats a big part of the jewelry/accessories industry?
- What happens when bigger brands don't pay their invoices?
And so much more!

Read more about what is happening with Neighborhood Goods and unpaid brands here: "Neighborhood Goods Has Closed--Vendors Want their Money."

Amanda gets things started with thoughts about the "Loneliness Economy," capitalism, and community. It turns out that one of the most revolutionary things we can do is... be active and supportive members of our community!

Find Emily and MLE here: @madebyMLE on Instagram, and madebyMLE.com (use code CLOTHESHORSE to get 10% off your order).

Additional reading:
"The Loneliness Economy: How Capitalism Thrives on Isolation," Piyush Patel, Medium.
"Capitalism starves us of love — we don't have to stand by," Alexandra Kauffman, The Emory Wheel.
"Capitalism Subverts Community," Robert Neuwirth, Noema.
"Capitalism has warped our understanding of community — and it's making us vulnerable to manipulation," Valerie Vande Panne, Salon.

Register for the February Clotheshorse webinar/hang-out session: "Why new clothes are kinda garbage..." February 29, 8pm EST. Free (but please support Clotheshorse via Ko-fi if you enjoy yourself)! Limited to 100 attendees, so register now here.

If you want to share your opinion/additional thoughts on the subjects we cover in each episode, feel free to email, whether it's a typed-out message or an audio recording: amanda@clotheshorse.world
Or call the Clotheshorse hotline: 717.925.7417

Did you enjoy this episode? Consider "buying me a coffee" via Ko-fi: ko-fi.com/clotheshorse

Find this episode's transcript (and so much more) at clotheshorsepodcast.com

Clotheshorse is brought to you with support from the following sustainable small businesses:

High Energy Vintage is a fun and funky vintage shop located in Somerville, MA, just a few minutes away from downtown Boston. They offer a highly curated selection of bright and colorful clothing and accessories from the 1940s-1990s for people of all genders. Husband-and-wife duo Wiley & Jessamy handpick each piece for quality and style, with a focus on pieces that transcend trends and will find a home in your closet for many years to come! In addition to clothing, the shop also features a large selection of vintage vinyl and old school video games. Find them on Instagram @highenergyvintage, online at highenergyvintage.com, and at markets in and around Boston.

The Pewter Thimble: Is there a little bit of Italy in your soul? Are you an enthusiast of pre-loved decor and accessories? Bring vintage Italian style — and history — into your space with The Pewter Thimble (@thepewterthimble). We source useful and beautiful things, and mend them where needed. We also find gorgeous illustrations, and make them print-worthy. Tarot cards, tea towels and handpicked treasures, available to you from the comfort of your own home. Responsibly sourced from across Rome, lovingly renewed by fairly paid artists and artisans, with something for every budget. Discover more at thepewterthimble.com

St. Evens is an NYC-based vintage shop that is dedicated to bringing you those special pieces you'll reach for again and again. More than just a store, St. Evens is dedicated to sharing the stories and history behind the garments. 10% of all sales are donated to a different charitable organization each month. New vintage is released every Thursday at wearStEvens.com, with previews of new pieces and more brought to you on Instagram at @wear_st.evens.

Deco Denim is a startup based out of San Francisco, selling clothing and accessories that are sustainable, gender fluid, size inclusive and high quality, made to last for years to come. Deco Denim is trying to change the way you think about buying clothes. Founder Sarah Mattes wants to empower people to ask important questions like, "Where was this made? Was this garment made ethically? Is this fabric made of plastic? Can this garment be upcycled and if not, can it be recycled?" Sign up at decodenim.com to receive $20 off your first purchase. They promise not to spam you and send out no more than 3 emails a month, with 2 of them surrounding education or a personal note from the Founder. Find them on Instagram as @deco.denim.

Gabriela Antonas is a visual artist, an upcycler, and a fashion designer, but Gabriela Antonas is also a feminist micro business with radical ideals. She's the one-woman band trying to help you understand why slow fashion is what the earth needs. If you find yourself in New Orleans, LA, you may buy her ready-to-wear upcycled garments in person at the store "Slow Down" (2855 Magazine St). Slow Down Nola only sells vintage and slow fashion from local designers. Gabriela's garments are guaranteed to be in stock in person, but they also have a website, so you may support this woman-owned and run business from wherever you are! If you are interested in Gabriela making a one-of-a-kind garment for you, DM her on Instagram at @slowfashiongabriela to book a consultation.

Vagabond Vintage DTLV is a vintage clothing, accessories & decor reselling business based in Downtown Las Vegas. Not only do we sell in Las Vegas, but we are also located throughout resale markets in San Francisco, as well as at a curated boutique called Lux and Ivy in Indianapolis, Indiana. Jessica, the founder & owner of Vagabond Vintage DTLV, recently opened the first IRL location in the Arts District of Downtown Las Vegas on August 5th. The shop has a strong emphasis on 60s & 70s garments, single stitch tee shirts & dreamy loungewear. Follow them on Instagram @vagabondvintage.dtlv and keep an eye out for their website coming fall of 2022....