Podcasts about mle

  • 200 PODCASTS
  • 381 EPISODES
  • 47m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Feb 11, 2026 LATEST

POPULARITY

[popularity chart, 2019–2026]


Best podcasts about mle

Latest podcast episodes about mle

L’Heure du Monde
Bad Bunny: An Anti-Trump Symbol at the Super Bowl

Feb 11, 2026 · 26:56


Bad Bunny's halftime performance at the Super Bowl on Sunday, February 8, was, to say the least, highly anticipated. In September, the announcement that he would take part in the traditional halftime show of the National Football League's championship final delighted the Puerto Rican singer's fans, and infuriated Donald Trump. Trump had been the first sitting American president to attend the Super Bowl, in 2025. He skipped it this year because of the artist's presence.

Bad Bunny is not only one of the most-listened-to musicians in the world; he is also a fervent defender of his native island, Puerto Rico, a "free associated state" of the United States whose residents hold American passports but have no vote in national elections. He is also openly critical of Donald Trump's immigration policy, as he made clear on stage at the Grammy Awards on February 1, when he called for ICE, the American immigration police, to be "thrown out."

On the stage of Levi's Stadium in Santa Clara, California, Bad Bunny delivered a performance true to his image: packed with references to Puerto Rico's history, celebrating the American continent as a whole, and carrying a subtle political message.

How did Bad Bunny go from hit singer to standard-bearer of Puerto Rican culture and of the fight against Donald Trump? Find out in this episode of "L'Heure du Monde" with Lucas Minisini, a journalist at "M Le magazine du Monde" who reported from Puerto Rico.

An episode by Adélaïde Tenaglia. Production: Florentin Baume. Music: Amandine Robillard and Epidemic Sound. Presentation and editorial oversight: Thomas Baumgartner. Editor-in-chief: Adèle Ponticelli.

In this episode: excerpts from the Super Bowl halftime show of February 8, 2026; an excerpt from Bad Bunny's speech at the Grammy Awards on February 1, 2026; an excerpt from NBC's "Tonight Show" of September 27, 2018. This episode was first broadcast on February 11, 2026.

---
Subscribe to Le Monde: https://abo.lemonde.fr/podcast
Book tickets for the live shows marking five years of L'Heure du Monde: https://ateliers.lemonde.fr/lheure-du-monde/174

Hosted by Acast. See acast.com/privacy for more information.

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The First Mechanistic Interpretability Frontier Lab — Myra Deng & Mark Bissell of Goodfire AI

Feb 6, 2026 · 68:01


From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn "peeking inside the model" into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual "SAEs are cool" take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by "slurping supervision through a straw," hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic-to-real transfer, regulated domains, no access to sensitive data). We also get a live window into what "frontier-scale interp" means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and "pixel-space" world models.

We discuss:

* Myra's and Mark's paths: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What "interpretability" actually means in practice: not just post-hoc poking, but a broader "science of deep learning" approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: "surgical edits" for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs. probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about "clean concept spaces"
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic-to-real transfer, English and Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a "Gen-Z slang" feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / "user-pleasing" circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs. prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models and "pixel-space" interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from "data in, weights out" to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps

00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod. We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. And Mochi, the mechanistic interpretability doggo.
We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?

Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the next generation, the next frontier, of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.

Shawn Wang [00:00:55]: Yeah. And there's always the official description. Is there an unofficial one that sort of resonates more with a different audience?

Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, obviously people have a lot in mind when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places it can be applied, and in particular, applying it in production scenarios, in high-stakes industries, and really taking it from the research world into the real world. Which, you know, it's a new field, so that hasn't been done all that much. And we're excited about actually seeing it put into practice.

Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was still putting out toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then, not to bury the lede, today we're also announcing the fundraise, your Series B. $150 million. $150 million at a $1.25B valuation. Congrats, Unicorn.

Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.

Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting health use cases. I don't know how related they are in practice.

Mark Bissell [00:02:22]: Yeah, not super related, but I don't know. It was helpful context to know what it's like just to work with health systems and generally in that domain. Yeah.

Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.

Myra Deng [00:02:37]: Did we overlap at all?

Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your respective roles, just to introduce people to what all gets done at Goodfire?

Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the healthcare team. At Goodfire, I'm a member of the technical staff, and honestly, I think that's about as specific as I could describe myself, because I've worked on a range of things. And, you know, it's a fun time to be at a team that's still reasonably small. When I joined I was one of the first ten employees; now we're above 40, but still, there's always a mix of research and engineering and product and all of the above that needs to get done. And I think everyone across the team is pretty much a switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo.
More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform, which flexes more of the MLE and developer skills as well.

Shawn Wang [00:03:53]: Very generalist. And you also had a very founding-engineer-type role.

Myra Deng [00:03:58]: Yeah, yeah. I also started as, and still am, a member of technical staff, and did a wide range of things from the very beginning, including finding our office space and all of this.

Shawn Wang [00:03:59]: Which we both visited when you had that open house thing. It was really nice.

Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.

Shawn Wang [00:04:15]: It looked like it has room for 200 people. But you guys are like 10.

Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like Mark, I spend a lot of my time as head of product. I think product is a bit of a weird role these days, but a lot of it is thinking about how we take our frontier research and really apply it to the most important real-world problems, how that then translates into a repeatable platform or a product, working across the engineering and research teams to make that happen, and also communicating to the world: what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.

Shawn Wang [00:05:01]: I love the "what is" questions because that's a very crisp starting point for people coming to a field. Vibhu, why don't you try tackling "what is interpretability," and then they can correct us.

Vibhu Sapra [00:05:13]: Okay, great. So one, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, are more of an applied interp lab, right? Which is pretty different from normal interp, with a lot of background research. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay, what is interp? So basically you're trying to have an understanding of what's going on in the model, in the internals. There are different approaches to do that: you can do probing, SAEs, transcoders, all this stuff. But basically you have a hypothesis, something that you want to learn about what's happening in a model's internals, and then you try to solve it from there. You can do activation mapping, you can try to do steering. There's a lot you can do, but the key question is: from input to output, we want a better understanding of what's happening, and how can we adjust what's happening in the model internals? How'd I do?

Mark Bissell [00:06:12]: That was really good. I think that was great. It's also kind of a minefield: if you ask 50 people who quote-unquote work in interp "what is interpretability," you'll probably get 50 different answers. And to some extent, also, where Goodfire sits in the space. I think that we're an AI research company above all else, and interpretability is a set of methods that we think are really useful and worth specializing in, in order to accomplish the goals we want to accomplish. But I think we also see some of the goals as even broader, as almost the science of deep learning, and just taking a not-black-box approach to any part of the AI development life cycle, whether that means using interp for data curation while you're training your model, or for understanding what happened during post-training, or for understanding activations and internal representations, what is in there semantically. And then a lot of exciting updates that are also part of the fundraise, around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is post-hoc poking at models, as opposed to actually using it to intentionally design them.

Shawn Wang [00:07:29]: Is this post-training or pre-training, or is that not a useful distinction?

Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.

Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking rollouts, or, you know, having different variations of a model that you can tweak with your steering. Yeah.

Myra Deng [00:07:50]: And I think in a lot of the news that you've seen on Twitter or wherever, you've seen a lot of unintended side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward-hacking behavior. I think these are extreme examples. There are also more mundane enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise, or it doesn't appropriately learn the target task. And a big question that we've always had is: how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?

Shawn Wang [00:08:26]: Yeah, I mean, you know, just to anchor this for people, one of the biggest controversies of last year was GPT-4o GlazeGate. I've never heard of GlazeGate.
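The probing technique Vibhu mentions above can be sketched in a few lines. This is a toy illustration, not Goodfire's API: the "activations" are random vectors with a planted concept direction, standing in for hidden states collected from a real model, and the probe is a simple difference-of-means direction rather than a trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n = 64, 400  # hidden size and number of labeled prompts (toy values)

# Synthetic stand-in for hidden states: positive examples get a planted
# "concept direction" added to otherwise random activations.
concept = rng.normal(size=d_model)
labels = rng.integers(0, 2, size=n)   # 1 = prompt expresses the concept
acts = rng.normal(size=(n, d_model)) + np.outer(labels, concept)

# Difference-of-means probe: the direction separating positive from negative.
direction = acts[labels == 1].mean(axis=0) - acts[labels == 0].mean(axis=0)
scores = acts @ direction             # one dot product per example
preds = (scores > scores.mean()).astype(int)

print(f"probe accuracy: {(preds == labels).mean():.2f}")
```

At inference time a probe like this is a single dot product per prompt or token, which is why probes add essentially no latency compared to hosting a second judge model.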
I didn't know that was what it was called. The other one, they called it that on the blog post, and I was like, wow, did OpenAI officially use that term? And I'm like, that's funny. But, like, yeah, I guess the pitch is that if they had worked with Goodfire, they would have avoided it. You know what I'm saying?

Myra Deng [00:08:51]: I think so. Yeah. Yeah.

Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think the reason why post-training is a place where this makes a lot of sense is that a lot of what we're talking about is surgical edits. You know, you want to be able to have expert feedback very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. So one of the things that we've been looking at, another common area where you would want to make a somewhat surgical edit, is some of the models that have, say, political bias. Like, you look at Qwen or R1 and they have sort of this CCP bias.

Shawn Wang [00:09:27]: Is there a CCP vector?

Mark Bissell [00:09:29]: Well, there are certainly internal parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.

Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just to make it very negative to see what the opposite of CCP is.

Mark Bissell [00:09:47]: The super-America, bald eagles flying everywhere. But yeah. So in general, lots of post-training tasks where you'd want to be able to do that, whether it's unlearning a certain behavior. Or, you know, one of the other kinds of cases where this comes up: are you familiar with the grokking behavior?
I mean, I know the machine learning term of grokking.

Shawn Wang [00:10:09]: Yeah.

Mark Bissell [00:10:09]: Sort of this double descent idea of having a model that is able to learn a generalizing solution: even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible ways to do that. Can interp solve the double descent problem?

Shawn Wang [00:10:41]: Depends, I guess, on how you... Okay. So I viewed double descent as a problem, because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. But if you actually can interpret what is generalizing, what is still changing even though the loss is not changing, then maybe you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss, and then you have a smooth curve. Yeah.

Mark Bissell [00:11:11]: I think that's certainly the domain of problems that we're looking to get at.

Shawn Wang [00:11:15]: Yeah. To me, double descent is like the biggest threat to ML research, where if you believe in scaling, then you need to know where to scale. But if you believe in double descent, then you don't believe in anything where anything levels off.

Vibhu Sapra [00:11:30]: I mean, also tangentially, there's, okay, when you talk about the China vector, right, there's the subliminal learning work. It was from the Anthropic Fellows program, where basically you can have hidden biases in a model. And as you distill down, or, you know, as you train on distilled data, those biases always show up, even if you explicitly try not to train on them. So, you know, it's just another use case of: okay, if we can interpret what's happening in post-training, can we clear some of this? Can we even determine what's there? Because, yeah, it's just some worrying research that's out there that shows, you know, we really don't know what's going on.

Mark Bissell [00:12:06]: That is, yeah, I think that's the biggest sentiment that we're hoping to tackle. Nobody knows what's going on. Right. Like, subliminal learning is just an insane concept when you think about it. Right? Train a model on, not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. And you see behaviors like that, that just defy intuition. And there are mathematical explanations that you can get into, but, I mean...

Shawn Wang [00:12:34]: It feels so early days. Objectively, there are sequences of numbers that are more owl-like than others. There should be.

Mark Bissell [00:12:40]: According to certain models, right? It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.

Shawn Wang [00:12:49]: But I mean, I think that's a cheat code, because there's not enough compute. But if you believe in platonic representation, probably it will transfer across different models as well. Oh, you think so?

Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed. There's something path-dependent from that seed that might cause certain overlaps in the latent space, and then doing this distillation pushes it towards having certain other tendencies.

Vibhu Sapra [00:13:24]: Got it.
I think there are a bunch of these open-ended questions, right? Like, you can't train in new stuff during the RL phase, right? RL only reorganizes weights, and you can only do stuff that's somewhat there in your base model. You're not learning new stuff; you're just reordering chains and stuff. But okay, my broader question is: when you guys work at an interp lab, how do you decide what to work on, and what's the thought process? Because we can ramble for hours: okay, I want to know this, I want to know that. But how do you concretely, you know, what's the workflow? There are approaches toward solving a problem, right? I can try prompting, I can look at chain of thought, I can train probes, SAEs. But how do you determine, is this going anywhere? Do we have set stuff? Just, you know, if you can help me with all that. Yeah.

Myra Deng [00:14:07]: It's a really good question. From the very beginning of the company, we've thought about it like: let's go and try to learn what isn't working in machine learning today, whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today, and then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what the real-world problems are, then taking that back, applying the current state of the art to those problems, and seeing where it falls down, basically. And then using those failures or shortcomings to understand what hills to climb when it comes to interpretability research. So, on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered some shortcomings in SAEs that we found a little bit surprising, and so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models, and a lot of our team's research is focused on what the next evolution beyond SAEs is, for instance. And then, when it comes to control and design of models, you know, we tried steering with our first API and realized that it still fell short of black-box techniques like prompting or fine-tuning, and so went back to the drawing board: how do we make that not the case, and how do we improve it beyond that? And one of our researchers, Ekdeep, just joined; he and Atticus are steering experts and have spent a lot of time trying to figure out what research enables us to actually do this in a much more powerful, robust way. So yeah, the answer is: look at real-world problems, try to translate that into a research agenda, and then hill-climb on both of those at the same time.

Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double-click when you drop hints like "we found some problems with SAEs." Okay, what are they? And then we can go into the demo. Yeah.

Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But, for instance, when we do things like trying to detect behaviors within models that are harmful, or behaviors that a user might not want to have in their model (hallucinations, for instance, harmful intent, PII, all of these things), we first tried using SAE probes for a lot of these tasks. So, taking the feature activation space from SAEs, training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior.
And we've seen in many cases that probes trained just on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually capturing the concepts that you would want to capture, cleanly and more surgically. And so that is an interesting observation. I'm not down on SAEs at all; I think there are many, many things they're useful for. But we have definitely run into cases where the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual real-world downstream performance metrics.

Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods, where you get to peek into the AI's mind, but sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, didn't an SAE-based approach actually prove to be the most generalizable?

Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reason it worked well was that we had a noisier data set. And so actually the blessing of unsupervised learning is that we got more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases, where we've had good data sets, it hasn't been the case.

Shawn Wang [00:18:14]: And just because you named Rakuten, and I don't know if we'll get another chance: what is Rakuten's overall usage, or production usage?

Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII so that they don't route private user information.

Myra Deng [00:18:41]: And so that's, you know, going through all of their user queries every day. And that's something that we deployed with them a few months ago.
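The token-level requirement discussed here (as opposed to classifying a whole sentence) is what lets the offending spans be scrubbed precisely. A toy sketch of that flagging step follows; the per-token scores are hard-coded stand-ins for what a trained probe over per-token activations would produce.

```python
# Toy token-level PII scrubbing. The scores are hard-coded stand-ins: in a
# real deployment, each score would come from a probe (classifier) applied
# to the model's activations at that token position.
tokens     = ["My", "name", "is", "Alice", ",", "email:", "alice@example.com"]
pii_scores = [0.01, 0.05,  0.02, 0.93,    0.01, 0.10,     0.98]

THRESHOLD = 0.5  # flag any token whose probe score exceeds this
scrubbed = [("[PII]" if score > THRESHOLD else tok)
            for tok, score in zip(tokens, pii_scores)]

print(" ".join(scrubbed))  # -> My name is [PII] , email: [PII]
```

Because the classification is per token, the rest of the text survives intact; a sentence-level guardrail would have to drop or rewrite the whole query.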
And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.Shawn Wang [00:19:03]: And for those who don't know, like it's Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don't always think about when you're doing sort of research tasks. So when you think about some of the stuff that came up there that's more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn't train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you'll see. You might make simplifying assumptions if you're sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you're classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you're sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. 
A problem that seems simple right off the bat ends up being more complex as you keep diving into it.
Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is a lot of these methods are very efficient, right? You're just looking at a model's internals itself, compared to a separate guardrail: an LLM-as-a-judge, a separate model. One, you have to host it. Two, there's a whole latency hit; if you use a big model, you have a second call. Some of the work around self-detection of hallucination is also deployed for efficiency, right? So if you have someone like Rakuten doing it live in production, you know, that's just another thing people should consider.
Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It's no extra latency, really. Excellent.
Shawn Wang [00:21:17]: You have the steering demos lined up, so we'll just kind of see what you got. I don't actually know if this is, like, the latest-latest or like an alpha thing.
Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave. So this will give a sense for the technology; you can see the steering in action. Honestly, I think the biggest thing this highlights is that as we've been growing as a company and taking on more and more ambitious versions of interpretability-related problems, a lot of that comes down to scaling up in various different forms. And so here you're going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it's sort of fun that in addition to the research challenges, there are engineering challenges that we're now tackling. Because for any of this to be useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models, as opposed to sort of toy model organisms.
So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom hosted Kimi model. And then on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back at the office. Well, hopefully. Um, that's too much to run on that Mac. Yeah. I think it takes a full, like, H100 node. You can run it on eight GPUs, eight H100s. So yeah, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang code base that we've been working on. So I'm going to tell it: hey, this SGLang code base is slow, I think there's a bug, can you try to figure it out? It's a big code base, so it'll spend some time doing this. And then on the right here, I'm going to initialize, in real time, some steering. Let's see here.
Mark Bissell [00:23:33]: Searching for any... bugs. Feature ID 43205.
Shawn Wang [00:23:38]: Yeah.
Mark Bissell [00:23:38]: 20, 30, 40. So this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. And so on the left, it's still sort of thinking normally. It might take, I don't know, 15 seconds for this to kick in, but then hopefully we're going to start seeing it go, "this code base is massive, for real." We're going to start seeing Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, in both its chain of thought and its actual outputs.
Mark Bissell [00:24:19]: And interestingly, you can see, you know, it's still able to call tools and stuff. It's purely sort of its demeanor.
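(Editor's aside: mechanically, the steering in this demo amounts to adding a scaled feature direction into a hidden activation during the forward pass. Here is a toy numpy version on a two-layer MLP; the real thing applies a learned SAE decoder direction, like feature 43205, inside Kimi K2.)

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 16))

# Unit-norm "feature direction" standing in for an SAE decoder vector.
feature_dir = rng.normal(size=32)
feature_dir /= np.linalg.norm(feature_dir)

def forward(x, steer_scale=0.0):
    h = x @ W1
    h = h + steer_scale * feature_dir  # the steering intervention
    h = np.maximum(h, 0)               # ReLU
    return h @ W2

x = rng.normal(size=(1, 16))
delta = np.linalg.norm(forward(x, steer_scale=8.0) - forward(x))
print(f"output shift from steering: {delta:.2f}")
```

In a real transformer you would register a forward hook on one or a few layers and add the vector to the residual stream at every token, which is why it can be toggled mid-generation.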
And there are other features that we found for interesting things like concision. So that's more of a practical one: you can make it more concise. Um, the types of programming languages it uses. But yeah, as we're seeing it come in, pretty good outputs.
Shawn Wang [00:24:43]: "Scheduler code is actually wild."
Vibhu Sapra [00:24:46]: "Yo, this code is actually insane, bro."
Vibhu Sapra [00:24:53]: What's the process of training an SAE on this? Or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, this kind of autonomous interp, something about how agents for interp are different than, like, coding agents. I don't know, while this is spewing up: how do we find feature 43205? Yeah.
Mark Bissell [00:25:15]: So in this case, our platform that we've been building out for a long time now supports all the sort of classic out-of-the-box interp techniques that you might want, like SAE training, probing, things of that kind. I'd say the techniques for vanilla SAEs are pretty well established now, where you take the model that you're interpreting, run a whole bunch of data through it, gather activations, and then, yeah, it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties: there are TopK SAEs, BatchTopK SAEs, um, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them to actually understand that this is a Gen Z feature, that's actually where a lot of the magic happens. Yeah. And the most basic standard technique is to look at all of the input data set examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough data set through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang.
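(Editor's aside: the pipeline Mark outlines, gather activations, train an SAE, then label features by their top-activating examples, looks roughly like this. The encoder weights here are random rather than trained, so only the mechanics are meaningful: a TopK sparse encoding plus a top-examples lookup. Feature index 205 is arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae, k = 64, 512, 8

# In a trained SAE these weights come from reconstruction training; random stand-ins here.
W_enc = rng.normal(size=(d_model, d_sae))

def topk_encode(x, k):
    """TopK SAE encoder: keep only the k largest pre-activations per example."""
    pre = x @ W_enc
    drop = np.argsort(pre, axis=-1)[:, :-k]  # indices of all but the top k
    sparse = pre.copy()
    np.put_along_axis(sparse, drop, 0.0, axis=-1)
    return np.maximum(sparse, 0.0)

# "Run a whole bunch of data through it, gather activations":
acts = rng.normal(size=(1000, d_model))
feats = topk_encode(acts, k)

# Label feature j by inspecting its top-activating dataset examples.
j = 205
top_examples = np.argsort(feats[:, j])[::-1][:5]
print(top_examples)
```

In an autointerp pipeline, an LLM agent is then shown the text of those top examples and asked to propose a label like "Gen Z slang."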
You know, the "oh, I'm in this, I'm in this" kind of thing. Um, and so, you know, you could have a human go through all 43,000 concepts and...
Vibhu Sapra [00:26:34]: And I've got to ask the basic question, you know: can we get examples where it hallucinates, pass them through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?
Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem, and something that black box methods really struggle with. Whereas, like, Gen Z, you could always train a simple classifier to detect; hallucination is harder. But we've seen that models internally have some... awareness of, like, uncertainty, or some sort of user-pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately, and then also working on mitigating the hallucinatory behavior in the model itself as well.
Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of, like, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.
Mark Bissell [00:27:51]: Although, part of what I like about that question is, there are SAE-based approaches that might help you get at that. But oftentimes the beauty of SAEs and, like we said, the curse is that they're unsupervised.
So when you have a behavior that you deliberately would like to remove, and that's more of a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing, as opposed to sort of hoping that when you fragment the latent space, one of the vectors that pops out is the one you care about.
Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.
Mark Bissell [00:28:36]: Of course. Right. Yeah. There are known problems, like feature splitting and feature absorption. And then there are the off-target effects, right? Ideally, you would want to be very precise, where it's not: you reduce the hallucination feature and suddenly maybe your model can't write creatively anymore. Maybe you don't like that, but you want to still stop it from hallucinating facts and figures.
Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But yeah, since your demo is done, any other things that you want to highlight, or any other interesting features you want to show?
Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main point here that's exciting is that there's not a whole lot of interp being applied to models quite at this scale. You know, Anthropic certainly has some research, and yeah, other teams as well. But it's nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real-time steering of a trillion parameter model would have sounded...
The fact that it's real time, like you started the thing and then you edited the steering vector.
Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case would be for that, like the real-time editing. That's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. People haven't done that much with, like: how does this work with or without prompting? How does this work with fine-tuning? There's a whole hype of continual learning, right? So there's just so much to see. Is this another parameter? Is it, like, a parameter we just kind of leave as a default and don't use? So I don't know. Maybe someone here wants to put out a guide on how to use this with prompting, when to do what.
Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I just can't say enough amazing things about Ekdeep. He actually has a paper, along with some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive neuroscience, Bayesian framework, but basically you can precisely show how prompting, in-context learning, and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to do to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks and stuff. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.
Myra Deng [00:31:20]: And so, like, formally equivalent, actually, in the limit.
Right.
Mark Bissell [00:31:24]: So one case study of this is for jailbreaks. I don't know, have you seen the stuff where you can do many-shot jailbreaking? You flood the context with examples of the behavior. Anthropic put out that paper.
Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.
Mark Bissell [00:31:40]: Like, yeah, what's in this in-context learning and activation steering equivalence paper is you can predict the number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.
Shawn Wang [00:32:02]: I was going to say, like, you know, I can back-rationalize that this makes sense, because what context is, is basically, it updates the KV cache, kind of, and then every next-token inference is still, you know, a sum over everything up to that point, plus all the context. And you could, I guess, theoretically replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like you did. So it's not exactly equivalent.
Mark Bissell [00:32:33]: Right, right. You need to get precise about, yeah, how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature... Yeah. The title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. So Eric Bigelow and Dan Wurgaft, who are doing fellowships at Goodfire; Ekdeep's the final author there.
Myra Deng [00:32:59]: I think, actually, to your question of, like, what is the production use case of steering?
I think maybe if you just think one level beyond steering as it is today: imagine if you could adapt your model to be, you know, an expert legal reasoner, in almost real time, very quickly and efficiently, using human feedback, or using your semantic understanding of what the model knows and where it knows that behavior. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about what's the next interface for model customization and adaptation is a really interesting problem for us. We have heard from a lot of people actually interested in fine-tuning and RL for open-weight models in production. And so people are using things like Tinker or open source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's something we're looking into.
Shawn Wang [00:34:06]: Yeah. I hadn't thought about this. Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?
Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?
Shawn Wang [00:34:25]: Yeah. You're not touching the base model. You're touching an adapter. It's kind of, yeah.
Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space, then. Maybe it's like: are you modifying the pipes, or are you modifying the water flowing through the pipes to get what you're after? That's maybe one way to put it.
Mark Bissell [00:34:44]: I like that analogy. That's my mental map of it, at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on.
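(Editor's aside: the "pipes versus water" contrast can be written down directly. A rank-one LoRA-style update modifies the weight matrix, while steering leaves the weights alone and shifts the activations. Both are illustrative sketches, not Tinker's or Goodfire's actual implementations.)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W = rng.normal(size=(d, d))
x = rng.normal(size=d)

# Rank-one LoRA: change the pipes. The update's effect depends on the input.
A = rng.normal(size=d)
B = rng.normal(size=d)
out_lora = (W + np.outer(B, A)) @ x   # == W @ x + B * (A @ x)

# Steering: change the water. The same fixed offset is added for every input.
steer = rng.normal(size=d)
out_steered = W @ x + steer

print(np.allclose(out_lora - W @ x, B * (A @ x)))  # True
```

That input dependence is one sense in which a weight update is "in parameter space," while a steering vector is a constant activation-space intervention.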
And just the fact that, like, I hope that we look back at how we're currently training models and post-training models and just think what a primitive way of doing that it was. Like, there's no intentionality really in...
Shawn Wang [00:35:06]: It's just data, right? The only thing in control is what data we feed in.
Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy: you know, he has a couple of young kids, and he talks about, what if I could only teach my kids how to be good people by giving them cookies or, like, giving them a slap on the wrist if they do something wrong? Not telling them why it was wrong or what they should have done differently or anything like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, you know, it's sample inefficient. What do they say? It's like sucking feedback through a straw. Sucking supervision through a straw. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that is internalized, and, you know, steering is an inference-time way of sort of getting at that idea. But ideally you're moving to a world where...
Vibhu Sapra [00:36:04]: ...it is much more intentional design, in perpetuity, for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can watch that. There wasn't too much of a connect there, but it's still something they want to...
Mark Bissell [00:36:33]: ...push for down the line. It can be useful for all of the above. Like, there are certainly post-hoc...
Vibhu Sapra [00:36:39]: ...use cases where it doesn't need to touch that.
I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? I would say, if you're interested in getting into research, MechInterp is one of the most approachable fields. A lot of this, train an SAE, train a probe, this stuff, the budget for it... there's already a lot done. There's a lot of open source work. You guys have done some too.
Shawn Wang [00:37:04]: There are, like, notebooks from the Gemini team, from Neel Nanda: this is how you do it, just step through the notebook.
Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress there; you can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's in the thousands of dollars; it's not that high scale. And then same with applying it, doing it for post-training. All this stuff is fairly cheap on the scale of, okay, I want to get into model training but I don't have compute for, you know, pre-training stuff. So it's a very nice field to get into. And there are also a lot of open questions, right? Some of them have to do with, okay, I want a product, I want to solve this. There's also just a lot of open-ended stuff that people could work on that's interesting. I don't know if you guys have any calls for what the open questions are, what open work you'd either collaborate on or would just like to see solved. You know, for people listening that want to get into MechInterp, because people always talk about it: what are the things they should check out? Start, of course, you know, by joining you guys as well. I'm sure you're hiring.
Myra Deng [00:38:09]: There's a paper, I think from, was it Lee, uh, Sharkey?
It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just a really comprehensive overview of what experts in the field think are the most important problems to be solved. I also think, to your point, it's been really, really inspiring to see a lot of young people getting interested in interpretability. Actually, not just young people: also scientists who have been, you know, experts in physics for many years, or in biology, or things like this, transitioning into interp, because the barrier to entry is, you know, in some ways low, and there's a lot of information out there and ways to get started. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how exciting the field is, how fast it's moving, how quick it is to get started, and things like that.
Mark Bissell [00:39:10]: And it's also just a very welcoming community. You know, there's an open source MechInterp Slack channel where people are always posting questions, and folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the Open Problems paper is a really good one.
Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym? ML Alignment and Theory Scholars? It's like the...
Vibhu Sapra [00:39:40]: Normally summer internship style.
Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. And actually a lot of our full-time staff have come through that program. It's great for anyone who is transitioning into interpretability. There's a couple of other fellows programs.
We do one, as does Anthropic. And so those are great places to get started if anyone is interested.
Mark Bissell [00:40:03]: Also, I think interp has been seen as a research field for a very long time, but engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it does scale up.
Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever MechInterp track at AI Engineer Europe, because I see these industry applications now emerging. And I'm pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry MechInterp conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet. It's not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.
Mark Bissell [00:40:51]: For sure. And I think the field understands this, right? So at ICML, I think the title of the MechInterp workshop this year was Actionable Interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding "pragmatic," "actionable," whatever.
Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.
Vibhu Sapra [00:41:13]: And I mean, just being at conferences in Europe, you see the interp room. At old-school conferences, I think they had a very tiny room till they got lucky and got it doubled. But there's definitely a lot of interest, a lot of niche research. You see a lot of research coming out of universities, students. We covered a paper last week: two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah.
Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet: it's just interp for code. I think it's an abnormally important field.
The conspiracy theory, like two years ago when the first SAE work came out of Anthropic, was that they would just use SAEs to turn the bad code vector down and then turn up the good code one. And isn't that the dream? But, I guess, why is it funny? If it was realistic, it would not be funny; it would be like, no, actually, we should do this. But it's funny because we feel there are some limitations to what steering can do. And I think a lot of the public image of steering is the Gen Z stuff. Like, oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To be a legal reasoner seems like a huge stretch. Yeah. And I don't know if it will get there this way. Yeah.
Myra Deng [00:42:36]: I will say we are announcing something very soon that I will not speak too much about. But yeah, this is what we've run into again and again: we don't want to be in the world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But I think the types of interventions that you need to do to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...
Shawn Wang [00:43:07]: And is this an emergent property of scale as well?
Myra Deng [00:43:10]: I think so. Yeah. I mean, scale definitely helps. Scale allows you to learn a lot of information and reduce noise across, you know, large amounts of data. But we also think that there are ways to do things much more effectively, even at scale. Like actually learning exactly what you want from the data and not learning things that you don't want exhibited in the data.
So we're not anti-scale, but we are also realizing that scale alone is not going to get us to the type of AI development that we want as these models get more powerful and get deployed in all these sorts of mission-critical contexts. The current life cycle of training, deploying, and evaluating is, to us, deeply broken and has opportunities to improve. So, um, more to come on that very, very soon.
Mark Bissell [00:44:02]: And I think that's basically a proof point that these concepts do exist. If you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse-grained sort of peek at what that looks like. But I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.
Myra Deng [00:44:30]: There were, like, bad code features. I've got it pulled up.
Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.
Shawn Wang [00:44:35]: This is, like, exactly it.
Vibhu Sapra [00:44:38]: There's specifically a code error feature that activates, and they show, you know, it's not typo detection. It's typos in code, not typical typos. And you can see it clearly activates where there's something wrong in code. And they have malicious code, code error... a whole bunch of broken-down, fine-grained sub-features. Yeah.
Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that, well, you just have a few different rollouts with all these things turned off and on and whatever, and then that's synthetic data you can kind of post-train on.
Yeah.
Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is. They do the real hard work.
Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was, like...
Vibhu Sapra [00:45:26]: And a lot of this stuff is open, right? You guys opened yours. DeepMind has opened a lot of SAEs on Gemma. Even Anthropic has opened a lot of this. There are a lot of resources that, you know, we can probably share for people that want to get involved.
Shawn Wang [00:45:41]: Yeah. And special shout-out to Neuronpedia as well. Yes. Amazing piece of work to visualize those things.
Myra Deng [00:45:49]: Yeah, exactly.
Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up for AI for science, just because it's such a huge investment category, and also I'm less qualified to do it; we actually have bio PhDs to cover that, which is great. But I want to just kind of recap your work, maybe on the Evo 2 stuff, and then build forward.
Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation, I think another interesting lens on interpretability in general is that a lot of the techniques that we described are ways to solve the AI-human interface problem. And bidirectional communication is sort of the goal there. So what we've been talking about, with intentional design of models and, you know, steering, but also more advanced techniques, is having humans impart our desires and control into and over models.
And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, but down the line, you know, superintelligence of other forms as well. What knowledge can the AIs teach us? That's sort of the other direction. And so some of our life science work to date has been getting at exactly that question. Some of it does look like debugging these various life sciences models: understanding if they're actually performing well on tasks, or if they're picking up on spurious correlations. For instance, with genomics models, you would like to know whether they're focusing on the biologically relevant things that you care about, or using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, maybe they understand elements of the human genome that we don't have names for, or have made specific discoveries that we don't know about. That's a big goal. And we're already seeing that, right? We are partnered with organizations like Mayo Clinic, a leading research health system in the United States, and Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. And in our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg, but that's a flavor of some of the things that we're working on.
Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well. And there's a plethora of these models coming out, because there's so much potential in research.
And it's very interesting how it's basically the same as language models, just with a different underlying data set. It's the same exact techniques. There's no change, basically.
Mark Bissell [00:48:59]: Yeah. Well, and even in other domains, right? Like, you know, robotics. I know a lot of the companies just use Gemma as, like, the backbone, and then they make it into a VLA that takes these actions. It's transformers all the way down. So yeah.
Vibhu Sapra [00:49:15]: Like, we have MedGemma now, right? Even this week there was MedGemma 1.5. And they're training it on this stuff: 3D scans, medical domain knowledge, and all that too. So there's a push from both sides. But I think the thing about MechInterp is you're a little bit more cautious in some domains, right? Healthcare mainly being one. Like guardrails, understanding... you know, we're more risk-averse to something going wrong there. So even just from a basic understanding: if we're trusting these systems to make claims, we want to know why and what's going on.
Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage or things like that. Say you're using a model for rare disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment and things like that, actually is a really, really big unlock for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery. I feel like we actually are doing that through our interp techniques.
And kind of almost by accident. I think we got reached out to very, very early on by these healthcare institutions. And none of us had healthcare backgrounds.Shawn Wang [00:50:49]: How did they even hear of you? A podcast.Myra Deng [00:50:51]: Oh, okay. Yeah, podcast.Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.Myra Deng [00:50:55]: Everyone can call us.Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was a few of us. And we were like, oh my God, we've never used these models. Let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about it.Shawn Wang [00:51:21]: Interp is a machine learning technique, and machine learning skills apply everywhere, right? Yeah. And obviously, it's just a general insight. Yeah. Probably to finance too, I think, which would be fun for our history. I don't know if you have anything to say there.Mark Bissell [00:51:34]: Yeah, well, just across the sciences. We've also done work on materials science. Yeah, it really runs the gamut.Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those that should reach out: you're obviously experts in this, but is there a call out for people that you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play steering stuff? On the research side more so, are there ideal design partners, customers, stuff like that?Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life sciences, and then I'm curious to hear from you on the life sciences side. 
But we're looking for design partners across many domains. On language, anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in, like, pixel space, as we call it. So if you're doing world models, video models, even robotics, where there's not a very clean natural language interface to interact with, we think that interp can really help, and we're looking for a few partners in that space.Shawn Wang [00:52:43]: Just because you mentioned the keyword

Le sept neuf
From the "great replacement" to "remigration": "Renaud Camus is a kind of ventriloquist for our society"

Le sept neuf

Play Episode Listen Later Jan 28, 2026 9:40


Duration: 00:09:40 - L'invité de 7h50 - presented by Benjamin Duhamel - Olivier Faye and Gaspard Dhellemmes, journalists at M Le magazine du Monde and authors of "L'homme par qui la peste arriva" (Flammarion), in which they retrace Renaud Camus's trajectory and his more or less acknowledged influence on the French right and far right. - Guests: Olivier Faye - Olivier Faye: journalist at Le Monde, covering the Front National and the far right. Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.

Les interviews d'Inter
From the "great replacement" to "remigration": "Renaud Camus is a kind of ventriloquist for our society"

Les interviews d'Inter

Play Episode Listen Later Jan 28, 2026 9:40


Duration: 00:09:40 - L'invité de 7h50 - presented by Benjamin Duhamel - Olivier Faye and Gaspard Dhellemmes, journalists at M Le magazine du Monde and authors of "L'homme par qui la peste arriva" (Flammarion), in which they retrace Renaud Camus's trajectory and his more or less acknowledged influence on the French right and far right. - Guests: Olivier Faye - Olivier Faye: journalist at Le Monde, covering the Front National and the far right. Enjoying this podcast? To listen to all the other episodes without limits, go to Radio France.

C à vous
"Great replacement": who is hiding behind Renaud Camus?

C à vous

Play Episode Listen Later Jan 28, 2026 13:50


Who is Renaud Camus, the man behind the "great replacement" theory? Journalists Olivier Faye and Gaspard Dhellemmes of M Le magazine du Monde present their book "L'homme par qui la peste arriva" (Flammarion). Every evening, Monday to Friday, from 18:57 on France 5, Anne-Elisabeth Lemoine and her team welcome the people making the day's news.

C à vous
C à Vous in full - 28/01/26

C à vous

Play Episode Listen Later Jan 28, 2026 52:16


Our guests for Wednesday, January 28: Who is Renaud Camus, the man behind the "great replacement" theory? Journalists Olivier Faye and Gaspard Dhellemmes of M Le magazine du Monde present their book "L'homme par qui la peste arriva" (Flammarion). ADHD: Michel Cymes, physician, and Olivier Revol, child psychiatrist specializing in ADHD, co-authors of the book "Heureux comme des TDAH", will try to answer our questions. Plus, as every evening: Patrick Cohen's editorial, Louis Amar's story, and Lorrain Sénéchal's 5 sur 5. Every evening, Monday to Friday at 18:55 on France 5, Anne-Elisabeth Lemoine and her team welcome the people making the day's news.

Yossi Gozlan
Golden State Warriors Trade Deadline Targets from Giannis to Anthony Davis

Yossi Gozlan

Play Episode Listen Later Dec 30, 2025 81:21


In this episode, Rich Twu of Let's Go Warriors joins me to discuss the Warriors' upcoming trade deadline. They've been the team most often linked to the top rumored players in recent weeks, so we review the main targets and discuss the potential frameworks that could emerge.0:00 Intro and temperature check on the Warriors season11:48 Warriors salary cap situation14:14 Jimmy Butler or Draymond + Kuminga package21:48 Anthony Davis36:00 Giannis Antetokounmpo42:10 Trey Murphy III56:14 Nic Claxton1:07:40 Daniel Gafford and other MLE-level playersYou can follow Yossi on:Twitter: https://twitter.com/YossiGozlanBlueSky: https://bsky.app/profile/yossigozlan.bsky.socialSalary cap sheets: www.capsheets.comYou can follow Rich on:Twitter: https://twitter.com/LetsGoWarriorsBlueSky: https://bsky.app/profile/letsgowarriors.bsky.socialYouTube: https://www.youtube.com/@UCM-HqCzHzjy75wZE5-CwnbwThird Apron is available on all podcast providers. Please subscribe, rate, and share if you enjoyed it: https://linktr.ee/yossigozlanYou can also access Yossi's salary cap analysis on his Substack. Subscribe for $7 per month or $50 annually!Third Apron: https://thirdapron.com

The Writers' Cafe
Derek Owusu - Borderline Fiction

The Writers' Cafe

Play Episode Listen Later Nov 25, 2025 58:13


Welcome to Season 2 of The Writers' Cafe, brought to you from the award-winning indie, Sevenoaks Bookshop! We're wrapping up Season 2 with the awe-inspiring Derek Owusu! Derek is one of Denise's favourite writers, so it was a great honour to have him on for the last episode to chat about his latest novel, Borderline Fiction. Derek is an award-winning writer, poet and podcaster whose work explores identity, masculinity, and the nuances of Black British life. His debut novel That Reminds Me won the Desmond Elliott Prize, and his follow-up, Losing the Plot, blends prose and poetry in a moving portrait of a mother-son relationship. Beyond his books, Derek's openness has made him a vital, reassuring voice in conversations about mental health and the lived experience of BPD. Join us in this episode as we delve into the inner realms of Marcus' life with BPD, flitting between Marcus at 19 and 25 as he navigates love, addiction, performance, and the shifting constancy of the self.Bookseller Review: "Once I started reading, I couldn't stop. When talking with Derek, he said he hopes people notice how the book itself, from its very title, "is a symptom of BPD." There's a fracturing of the self throughout: we move quickly between Marcus at 19 and at 25, and the shifts in how he speaks, behaves, and carries himself are so intriguing. What drew me in most is how much this book is poetry in unstoppable motion. And then there's the written MLE in 19-year-old Marcus' voice! So clever, so familiar, so true to that age and environment! This is a book I'll cherish for a long time. Derek crafts his words with such precision and care that they feel unbreakable; each one feels placed with intention. 
It's firmly on my list of books I hope everyone reads at least once (or x200 times!!)" If you are new to The Writers' Cafe pod: inspired by our own in-shop cafe of the same name and the conversations about books, life, literature, and so much more every single day, as well as the literary salons of old where gossip thrived, this podcast seeks to highlight and celebrate the best writers and voices every episode with a warm, detailed conversation about their work and craft.Derek's work can be purchased with us here:That Reminds Me https://sevenoaksbookshop.co.uk/shop/that-reminds-me-winner-of-the-desmond-elliott-prize-2020-by-derek-owusu/Losing the Plot https://sevenoaksbookshop.co.uk/shop/losing-the-plot-by-derek-owusu/Borderline Fiction https://sevenoaksbookshop.co.uk/shop/borderline-fiction-by-derek-owusu/ Hosted on Acast. See acast.com/privacy for more information.

Platform Chats
Building Better Rail: Lessons from Watford and AREMA's Rising Leaders

Platform Chats

Play Episode Listen Later Nov 21, 2025 36:54


In this episode, host Walt Bleser chats with the two latest Watford Fellowship recipients, Matt Kirby, PE, MLE, M.ASCE, AVP/Director - Rail Public Projects & Programs at Michael Baker International, and Daniel Rappaport, Civil Engineer III – RLE at Chicago Transit Authority. The Watford Fellowship Program gives grants to two young rail or transit professionals for the Watford Conference, an international conference on rail and transit design and operations.  The pair reflect on their experiences from this year's Conference in London, England. They also share their perspective on global rail projects after field trips to major British rail infrastructure projects as part of the Conference.  If you want to learn more about how AREMA helps professionals advance their careers, this is an episode you won't want to miss. 

Tomos y Grapas, Cómics
Silent Jenny | Shin Zero | Dime tu Nombre | Amanecer de X | Es Jeff | Animal Man | NOVEDADES

Tomos y Grapas, Cómics

Play Episode Listen Later Nov 13, 2025 157:20


Here is our weekly roundup of new releases and recommendations to keep your reading pile stocked and up to date. Star Wars; Ahsoka DC Absolute Impenetrable Patrulla-X: Amanecer de X. Marvel Omnibus Animal Man de Grant Morrison Animal Man de Jeff Lemire Ragnarök. El destino de los Dioses Rai – Valiant Orígenes Britania En Vela Silent Jenny Es Jeff El Diablo y Coral Remington 1885 Dime tu Nombre Huracán sobre nuevo Mexico Puño de hierro 01 (MLE). All in Los nuevos dioses Meteoros. Historias de gente que pasa Digimon Adventure V-Tamer TMNT - Aventuras Animadas Shin Zero

Tomos y Grapas, Cómics
TOMOS Y GRAPAS Vol.12 Capítulo #3 - Raven

Tomos y Grapas, Cómics

Play Episode Listen Later Nov 10, 2025 434:50


CHAPTER #395… This week we set sail for pirate adventure with Mathieu Lauffray and his tremendous work on Raven. We break down the keys to this pure-genre story, in which the artist behind Long John Silver strikes out as a complete author to capture the most romantic spirit of the buccaneers. The Charlas Kirby also return with a great guest, El Irra, with whom we talk about his career and works such as No Te Serviré and Perros Atados. As always, we also cover all the news hitting the comics world daily and review the most tempting new releases on the shelves. We review releases such as the Animal Man double bill from Lemire and Morrison, the latest from Alix Garin with Impenetrable, Spanish works such as Dime Tu Nombre, Bablet's double return with Silent Jenny and his collaboration with Singelin on Shin Zero, and much more. Thanks for being on the other side, agents. Talk soon! NEWS [00:06:02] The Marshall Bass license finds a new home at Astiberri; Marvel announces the Armageddon event; Grant Morrison returns to Vertigo; Punisher gets a new ongoing series with Benjamin Percy; new Marvel and DC crossovers; upcoming publishing previews. NEW RELEASES AND RE-READS [01:07:07] Star Wars; Ahsoka DC Absolute Impenetrable Patrulla-X: Amanecer de X. Marvel Omnibus Animal Man de Grant Morrison Animal Man de Jeff Lemire Ragnarök. El destino de los Dioses Rai – Valiant Orígenes Britania En Vela Silent Jenny Es Jeff El Diablo y Coral Remington 1885 Dime tu Nombre Huracán sobre nuevo Mexico Puño de hierro 01 (MLE). All in Los nuevos dioses Meteoros.
Historias de gente que pasa Digimon Adventure V-Tamer TMNT - Aventuras Animadas Shin Zero RAVEN [04:44:26] Today we reunite with an old acquaintance, Mathieu Lauffray, the artist behind Long John Silver, who took a liking to pirates and has delighted us with Raven, a work based on an original story by Robert Howard that follows the ill-fated adventures of a pirate embodying the genre's most romantic values, and which stands as one of the year's most notable adventure works. It is also a story that uses the clichés and tropes of the pirate genre as a model to follow, promising a spectacle of pure enjoyment. CHARLAS KIRBY: EL IRRA [05:43:22] The Charlas Kirby are back, and today El Irra joins us to talk about his latest work, Perros Atados. We sit down with an old acquaintance and a master of the Spanish comics page to chat about his career, from his first work with Palos de Ciego, through the smash that was No Te Serviré, to his most recent project, which has already been a hit with readers. AGENT MAIL [06:29:31] We read all your messages left on social media and in our section for the voice of the Agents of Hydra: ¡Habla pueblo, habla! Thank you so much for listening and for all your support and participation! Our PODCAST is now on our SECOND CHANNEL. Fill up on comics content here: https://www.youtube.com/@tomosygrapaspodcast Tomos y Grapas is a transmedia outlet; enjoy our content on our website, YouTube, and social media. ALSO VISIT OUR BOOKSHOP at Calle Alcalá 211, or our ONLINE STORE with the best service and attention: tiendatomosygrapas.com

Monocle 24: The Stack
‘M International': When a French legacy title publishes in English, plus the launch of ‘Equator'

Monocle 24: The Stack

Play Episode Listen Later Nov 1, 2025 34:42


We speak with Marie-Pierre Lannelongue, editorial director of ‘M Le magazine du Monde’, about the third issue of ‘M International’. Plus: Gavin Jacobson and Samanth Subramanian from the newly released ‘Equator’, a title taking a less Anglophone, West-centric look at the world.See omnystudio.com/listener for privacy information.

IEE 475: Simulating Stochastic Systems
Lecture G3 (2025-10-23) Input Modeling, Part 3 (Parameter Estimation and Goodness of Fit)

IEE 475: Simulating Stochastic Systems

Play Episode Listen Later Oct 23, 2025


In this lecture, we (nearly) finish our coverage of Input Modeling, with a focus on parameter estimation and assessing goodness of fit. We review input modeling in general and then briefly review the fundamentals of hypothesis testing. We discuss type-I error, p-values, type-II error, effect sizes, and statistical power. We discuss the dangers of using p-values at very large sample sizes (where small p-values are not meaningful) and at very small sample sizes (where large p-values are not meaningful), with examples applied to best-of-7 sports tournaments and voting. We then discuss different distribution parameters (including location, scale, and rate), and introduce summary statistics (the sample mean and sample variance) and maximum likelihood estimation (MLE), with an example of a point estimate of the rate of an exponential. We introduce the chi-squared (lower power) and Kolmogorov–Smirnov (KS, higher power) tests for goodness of fit, but we will go into them in more detail at the start of the next lecture.
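The exponential-rate example from the lecture can be sketched in a few lines. For samples from an Exponential(rate = lambda) distribution, maximizing the log-likelihood l(lambda) = n*log(lambda) - lambda*sum(x) gives the MLE lambda_hat = n/sum(x) = 1/sample mean. The data below is simulated purely for illustration; the lecture uses its own examples.

```python
import random

def exponential_rate_mle(samples):
    # For X ~ Exponential(rate=lam), the log-likelihood is
    # l(lam) = n*log(lam) - lam*sum(x); setting dl/dlam = 0
    # yields the MLE: lam_hat = n / sum(x) = 1 / sample_mean.
    return len(samples) / sum(samples)

# Simulated "observed" interarrival times (illustration only).
random.seed(42)
true_rate = 2.0
data = [random.expovariate(true_rate) for _ in range(100_000)]

est = exponential_rate_mle(data)
print(f"MLE rate estimate: {est:.3f}")  # should land close to the true rate of 2.0
```

With a large sample the point estimate sits very close to the true rate; goodness-of-fit tests like chi-squared or KS would then check whether the fitted exponential is actually consistent with the data.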

The Arise Podcast
Season 6, Episode 2: Reality and Faith with Rev. Starlette Thomas and Dr. Tamice Spencer Helms

The Arise Podcast

Play Episode Listen Later Sep 16, 2025 54:48


Reality and Faith Prompts1. What are the formations or structures for how you know you are in reality in regards to your faith? Do you have indicators? Internal senses? External resources? 2. Who are you in active dialogue with in regards to your faith? Who that is living and who that is passed on? 3. When you encounter dissonance with your reality of faith, how do you stay grounded in your experience?TranscriptsDanielle (00:00):To my computer. So thank you, Starlette. Thank you, Tamice, for being with me. I've already given full introductions; I've recorded those separately. So the theme of the conversation, and kind of what we're getting into on this podcast this season: the themes have been race, faith, culture, and church in the past on my podcast. But what I really think the question is, where is our reality and where are our touchpoints in those different realms? And so today, there's going to be more info on this in the future, but where do we find reality, and how do we form our reality when we integrate faith? So one of the questions I was asking Tamice and Starlette was: what are the formations or structures for how you know you are in reality in regards to your faith? Do you have indicators? Do you have internal senses? Do you have external resources? And so that's where I want to jump off from, and it's free flow. I don't do a whole lot of editing, but yeah, just curious where your mind goes when you hear that, what comes to mind, and we'll jump from there.Starlette (01:12):I immediately thought of baptism, baptismal waters. My baptismal identity forms and shapes me. It keeps me in touch with my body. It keeps me from being disembodied. Also, it keeps me from being swindled out of authority over my body due to the dangerous irrationalism of white body supremacy. So that's one thing. Protest also keeps me grounded. I have found that acts of defiance, minor personal rebellions, they do well for me. 
They keep me spiritually grounded; I feel like they keep me in step with Jesus. And I always feel like I'm catching up, that I'm almost stepping on his feet. So for me, baptismal identity and protesting, those are the two things that come to me immediately.Tamice (02:04):Whoa, that's so deep. Wow, I never thought about that. But I never thought about protests being a thing that grounds. Because I mean, I've just been, for me I would say I've been working on that, and y'all know me, so I got acronyms for days. But I think that the radical ethical spirituality that's tethered to my tradition, that's a rule of life, but it's also a litmus test. So for me, if you can't tell the truth, we don't have conversations about non-violence and loving enemies. I don't get to ethical spirituality unless you come through the front door of truth telling, and truth telling in that sense of the R in the REST mixtape is radical. Angela Davis says radical, and that's grasping stuff at the root. So before we have conversations about forgiveness, for instance, or Jesus or scripture or what is right and what is moral, it's very important that we first tell the truth about the foundations of those realities, what we even mean by those terms, who those terms serve, and where they come from. I talk about it as asking to see the manager. We need to see the manager.(03:24):For me, what grounds me is: if something comes in and it calls me to move in a different way, or corrects me or checks me in a certain way, I say yes to it if it comes through the door of truth telling, because it means I also got to be true and tell the truth to myself. So that keeps me grounded. That kind of acronym is kind of how I move, but it's also how I keep toxic ways of doing religion out. And I also have come back into relationship with trees and grass and the waters, and that's been really powerful for moving down into different types of intelligence. 
For me, the earth has been pulling me into a different way of knowing and being, and that part brings me to ancestors. Just like you, Starlette, my ancestors, I keep finding them in the trees and in the water and in the wind. So it's like, well, I need them real bad right now. So that's where I'm kind of grounding myself these days.But to your point about grounding and protest, I feel most compelled to show up in spaces where the ground is crying out screaming. I feel like it beckons me there. And we talked about the most recent news of Trey being found, and you talked about truth telling, and what resonated immediately. And it didn't sit right with me, because African-American people, people of African descent, know not to take their lives in that way because of the traumatic history, so when you say things like you don't suspect any foul play, it sounds like what has historically been named as at the hands of persons unknown, where no one is held responsible for the death of African-American people. That's what ties it in for me. And I feel like it's an ancestral pull: that they didn't leave this way, they didn't leave in the way that they were supposed to, that something stinks, and that they're crying out to say, can you hear me? Come over here, tarry a while here. Don't leave him here. Don't let up on it, because we didn't call him here, somebody. So I love that you said that you feel yourself being grounded in and called back to the earth, because I do feel like it speaks to us. But there are telltale signs in it, and the trees will tell us too. And so I didn't have a hand in this. It was forced on me, and I saw it all: come and talk to me, put your hand here, put your head here, and you can hear me scream, you can hear him scream. He was calling out the whole time. That's what I believe in. That's how I test reality. I test it against what the earth is saying, like you said, but I think we have to walk the ground a bit. 
We have to pace the ground a bit. We can't just go off of what people are saying. Back to your point about truth telling: don't trust nobody I don't trust. I don't trust anybody that's going to lie, because you can't fix a lie. So if you're going to come in with deception, there's not much else I can do with you. There's not much I can say to you. And I find that white body supremacy is a supreme deception. So if we can't start there in a conversation, there's nothing that I can say to you.Tamice (06:46):That's facts. It's interesting that you talked about baptism, you talked about grounding, and I had this story pop up, and while you were talking it popped up again. So I'm going to tell it. So we are not going to talk about who and all the things that happened recently, but I had made some comments online around that and around just the choice to be blind. So I've been talking a lot about John nine, and this passage where it is very clear to everyone else what's happening, but the people who refuse to see, refuse to see.So in that, I was kind of pulled into that. I was in Mississippi, I was doing some stuff for the book, and this lady, a chaplain, her name is Sally Bevin, actually Sally Bevel, she walked up to me, she kept calling me, she was like, Tamice, you want to come. I had my whole family there. We were at the Mississippi Book Fair, and she kept saying, Tamice, you want to come join, dah, dah, dah. Then my family walked off and they started to peruse, and then she asked me again and I was like, no, I'm good. And I was scrolling. I mean, I'm looking at the screen, and the third time she did it, it pulled me out, and I was like, this woman is trying to pull me into being present. And she said to me, this is funny, Starlette. I said, I feel like I need to be washed and I need a baptism, because this phone feels like so on right now and the wickedness is pulling me.
So she poured, she got some ice-cold water, it was 95 degrees, poured cold water on my hands, had me wash my hands, and she took the cold water and put a cross on my forehead. And you know what she said to me? She said, remember your baptism?She said, remember your baptism? And when I was baptized, even though it was by a man who will also not be named, when I was baptized there was a whirlwind at my baptism. It was in 2004. That same wind hit in Mississippi, and then I felt like I was supposed to take my shoes off. So I walked around the Mississippi Book Festival with no shoes on, not knowing that the earth was about to receive two people who did not deserve to be hung from trees. And there's something, I feel, real talk, I feel afraid for white supremacy right now in the name of my ancestors, and I feel like I'm calling on everything right now. And that's also grounding me.Starlette (09:36):I was with Mother Moses last week. I went to Dorchester County just to be with her, because the people were here. Take me. I said, I'll leave them all here. I know you said there are a few here, but give me the names, give me the last names of the people, because I don't have time for this. I see why she left people. I see why she was packing. So to your point, I think it's important that we talk to the ancestors faithfully, religiously. We sit down at their feet and listen for a bit about how they got over and how they got through it, and let them bear witness to us. And she does it for me every time, every single time: she grounds me.Danielle (10:23):Listening to you all, I was like, oh wait, it is like Luke 19, where Jesus is coming in, and he didn't ride in on the fanciest animal; he rode in on a donkey. And if you're familiar with that culture, that is not the most elevated animal, not the elevated animal to ride. You don't eat it. Not saying that it isn't eaten at times, but it's not right.
So he rides in on that and then people are saying glory to God in the highest and they're praising him and the Pharisees are like, don't do that because it's shameful and I don't remember the exact words, but he's basically be quiet. The rocks are going to tell the story of what happened here. He's walking his way. It kind of reminds me to me. So what you're saying, he's walking away, he's going to walk and he's going to walk that way and he's going to walk to his death. He's walking it in two scenarios that Jesus goes in to talk about. Your eyes are going to be blind to peace, to the real way to peace. It's going to be a wall put around you and you're going to miss out. People are going to destroy you because you missed your chance.Starlette (11:50):Point again creation. And if you're going to be a rock headed people, then I'll recruit this rock choir. They get ready to rock out on you. If there's nothing you're going to say. So even then he says that creation will bear witness against you. You ain't got to do it. You ain't got to do it. I can call these rock. You can be rock headed if you want to. You can be stony hearted if you want to. I can recruit choir members from the ground,Tamice (12:16):But not even that because y'all know I'm into the quantum and metaphysics. Not even that they actually do speak of course, like words are frequencies. So when you hold a certain type of element in your hand, that thing has a frequency to it. That's alright that they said whatever, I don't need it from you. Everything else is tapped into this.Starlette (12:39):Right. In fact, it's the rocks are tapped into a reality. The same reality that me and this donkey and these people throwing stuff at my feet are tapped into.You are not tapped into reality. And so that's why he makes the left and not the right because typically when a person is coming to Saka city, they head towards the temple. He went the other direction because he is like it was a big fuck. 
I don't use power like this. And actually what I'm about to do is raise you on power. This is a whole different type of power. And that's what I feel like our ancestors, the realities that the alternative intelligence in the world you're talking about ai, the alternative intelligence in the world is what gives me every bit of confidence to look this beast in the face and call it what it is. This isTamice (13:52):And not going to bow to it. And I will go down proclaiming it what it is. I will not call wickedness good.And Jesus said, Jesus was so when he talks about the kingdom of heaven suffering violence and the violence taken it by force, it's that it's like there's something so much more violent about being right and righteous. Y'all have to use violence because you can't tell the truth.Danielle (14:29):Do you see the split two? There's two entirely different realities happening. Two different kingdoms, two entirely different ways of living in this era and they're using quote J, but it's not the same person. It can't be, you cannot mix white Jesus and brown Jesus. They don't go together. TheyStarlette (15:00):Don't, what is it? Michael O. Emerson and Glenn e Bracy. The second they have this new book called The Religion of Whiteness, and they talk about the fact that European Americans who are racialized as white Tahi says those who believe they are white. He says that there's a group of people, the European Americans who are racialized as white, who turn to scripture to enforce their supremacy. And then there's another group of people who turn to scripture to support and affirm our sibling.It is two different kingdoms. It's funny, it came to me the other day because we talk about, I've talked about how for whiteness, the perception of goodness is more important than the possession of it.You know what I mean? So mostly what they do is seek to be absolved. Right? 
So it's just, and usually with the being absolved means I'm less bad than that, so make that thing more bad than me and it's a really terrible way to live a life, but it is how whiteness functions, and I'm thinking about this in the context of all that is happening in the world because it's like you cannot be good and racist period. And that's as clear as you cannot love God and mammon you will end up hating one and loving the other. You cannot love God. You cannotStarlette (16:29):Love God and hate your next of kin your sibling. Dr. Angela Parker says something really important During the Wild Goose Festival, she asked the participants there predominantly European American people, those racialized as white. She said, do you all Terry, do you Terry, do you wait for the Holy Spirit? Do you sit with yourself and wait for God to move? And it talked, it spoke to me about power dynamic. Do you feel like God is doing the moving and you wait for the spirit to anoint you, to fill you, to inspire you, to baptize you with fire? You Terry, do you wait a while or do you just the other end of that that she doesn't say, do you just get up? I gave my life to Jesus and it's done right handed fellowship, give me my certificate and walk out the door. You have to sit with yourself and I don't know what your tradition is.I was raised Pentecostal holiness and I had to tear all night long. I was on my knees calling on the name of Jesus and I swear that Baba couldn't hear me. Which octave do you want me to go in? I lost my voice. You know them people, them mothers circled me with a sheet and told me I didn't get it that night that I had to come back the next day after I sweat out my down, I sweat out my press. Okay. I pressed my way trying to get to that man and they told me he didn't hear me. He not coming to get you today. I don't hear a change. They were looking for an evidence of tongues. They didn't hear an evidence, a change speech. 
You still sound the way that you did when you came in here. And I think that white body supremacy, that's where the problem lies with me. There's no difference. I don't hear a change in speech. You're still talking to people as if you can look down your nose at them. You have not been submerged in the water. You did not go down in the water. White supremacy, white body supremacy has not been drowned out. Tarry, you need to tarry a little while longer. I'll let you know when you've gotten free, when you've been lifted. There's a cloud of witnesses, those mothers rubbing your back, patting your back and saying, call on him. Call him like you want him. Call him like you need him. And they'll tell you when they see evidence; they'll let you know when you've been tied up, tangled up. That's what we would say: wrapped up in Jesus. And I had to come back a second night and call on the Lord, and then they waited a while. They looked, they said, don't touch her, leave her alone. He got her now, leave her alone. But there was an affirmation, there was a process. You couldn't just get up there and confess the ABCs of salvation, nah, nah, nah, nah, nah, nah. They'll let you know when you got it.

Danielle (18:56):
Why do you think that happened? Why? I have a question for y'all. Why do you think that became the reality of the prayer in that moment? And we're talking about Africans that have been brought here and enslaved. Why do you think that happened on our soil that way? Why? That's my question.

Tamice (19:12):
I mean, I'm wondering about it, because when Starlette talks, I keep thinking the tarrying in and of itself is a refusal. It says: what I see is not real. What's in front of me is not right. I'm going to wait for something else. I'm saying, the slave Bible, them taking stuff out of the Bible, and it's like... but I feel like the ground, there was something about the ground that indigenous people were able to help them tap into over here.
It was waiting on that.

Starlette (19:49):
We didn't have punishment. We had a percussion session. So they ring shouted me. I didn't know what it was at the time. We didn't have all the fancy stuff. Everybody had put me in key. We didn't have... we had this, and feet. Them people circled around me. We don't do that no more.

Danielle (20:06):
We don't do that no more. But don't you think, and I believe Africans came here with faith already. Oh yes, there's evidence of that. So put that aside. But don't you think, even if you have that faith, and it's not so different than our time, and you're confronted with slave owners and plantation owners also preaching, quote, the same faith, that you're going to have to test it out on your neighbor when they're getting saved? You're going to have to make sure they didn't catch that bug. Don't you think there's something in there? Block it. Don't you think, if you know faith internally already like we do and run into someone that's white that's preaching the same thing, we have to wait it out with them? Don't you think our ancestors knew that? When they were here, they were waiting it out. I just noticed my spirit match that spirit. We have to wait it out. Yes. Because, and let's say they didn't know Jesus. Some people didn't know Jesus, and they met Jesus here, for whatever reason, and your example is still the white man. You have to wait it out to make sure you're not reflecting that evilness. I mean, that's what I'm thinking. That's the absolute

Starlette (21:20):
Truth. There's a book titled Slave Testimony, and I know this because I just read about it. There's a testimony of an enslaved African American; he's unnamed. It was written on June 26th, 1821. He's talking to Master John. He said, I want permission to speak to you, if you please. He said, where is it? Where is it? A few words. I hope that you will not think me too bold.
Sir, I make my wants known to you because you are, I believe, the oldest and most experienced that I know of. He says, in the first place, I want you to tell me the reason why you always preach to the white folks and keep your back to us. Is it because they sit up on the hill? We have no chance among them there. We must be forgotten because we are not near enough. We are not near enough without getting in the edge of the swamp behind you. He was calling him to account. He said, when you sell me, do you make sure that I'm sold to a Christian or a heathen? He said, we are charged with inattention because of where they position us. He said, it's impossible for us to pay good attention with this chance. In fact, some of us scarce think that we are preached to at all. He says, money appears to be the object. We are carried to market and sold to the highest bidder, never once inquired whether you sold to a heathen or a Christian. If the question was put, did you sell to a Christian, what would the answer be? I can tell you: I can't tell what he was. He gave me my price. That's all I was interested in. So I don't want people to believe that Africans who were enslaved did not talk back, did not speak back. They took him to task. He said, everybody's not literate. There's about one in 50 people who are, and I'm one of them, and I may not be able to speak very well, but this is what I want to tell you. I can tell the difference. I know that you're not preaching to me the same. I know that when you talk about salvation, you're not extending it to me. Yikes. You need to know that our people, these ancestors, not only were they having come-to-Jesus meetings, but they were having come-to-your-senses meetings with their oppressor, and they wrote it down. They wrote it down. I get sick of the narratives that we are not our ancestors. Yes we are. Yes I am. I'm here because of them. I think they called me. I think they call me here.
I think the fussing that I make, the anger that I possess, this need to resist every damn thing. I think they make me do that.

Tamice (23:35):
Indeed, I think. But I didn't get my voice until they took the muzzle off. I had an honor with my ancestors, and they came and they told me it's time. Take that muzzle off. Muzzle off. Shoot. Why, Jesus ain't tell me to take no muzzle off. I'm going to tell you that now.

Danielle (23:52):
That's why, I mean, many indigenous people said, Jesus didn't come back for me, because if that guy's bringing me Jesus, then Jesus didn't come back for me.

Starlette (24:07):
Come on. Make it plain. Danielle, go ahead. Go ahead. Walk heavy today. Yeah, I mean

Danielle (24:17):
I like this conversation. Why Jesus, why Jesus didn't come back for us, the three of us. He didn't come back for us. He didn't come back for my kids. He didn't come back for my husband. Nope. And so therefore we're not going to find a freedom through that. No, there's no desire to be in that.

Tamice (24:33):
None. And that's what I mean in making it very, very plain to people. Like, listen, I actually don't want to be in heaven with your Jesus. Heaven with your Jesus would be hell. I actually have one,

Starlette (24:47):
The one that they had for us. They had an N-word heaven for us where they would continue to be served, and they wrote it down. It's bad for people who are bibliophiles who like to read those testimonies. It is bad for people who like to read white body supremacy for real. Yeah, they had one for us. They had separate creation narratives, known as polygenesis, but they also had separate afterlives, whereby they thought that there was a white heaven and an N-word heaven. I didn't even know that. Starla, I didn't even know that. Because they said they want to make sure their favorite slave was there to serve them. Oh yes, the delusion. When people tell me that they're white, I really do push back for a reason. What do you mean by that? I disagree with all of it.
What part of it do you find agreeable? The relationship of ruling that you maintain over me? The privilege? White power? Which part of it? Which part of it is good for you and for me? How does it help us maintain relationship as Christians?

Danielle (25:47):
I think that's the reality and the dissonance we live in. Right?

Starlette (25:51):
That's it. But I think there needs to be a separation. Are you a white supremacist or not?

Tamice (26:03):
That's what I'm saying. That's why I keep saying, listen, at this point, you can't be good and racist. Let me just say that. Oh no, you got to pick

Starlette (26:12):
And I need to hear it

Tamice (26:13):
Both. Yeah. I need a public confession of it.

Starlette (26:19):
Someone sent me a DM: "I just want to thank you for your work and I completely agree." I quickly turned back around. I said, say it publicly. Get out of my DMs. Say it publicly. Put it on your page. Don't congratulate me. Within two minutes or so: "I'm so sorry. I didn't mean to disturb you. You are right." Okay. Okay. Okay. Did he post anything? No. Say it publicly. Denounce them. Come out from among them. Very, very plain. Are you a white supremacist or nah? Ask the kids, ask the children.

Danielle (26:56):
How hard is it? I think that's what made this moment so real, and it's kind of a reality refresher, actually, for everybody, to be honest, because it's a reality. Certain things have been said. All manner of things have been said by people. This is just one example of many people that have said these things, not the only person that's lived and died and said these things. And then when you say, hey, this was said, someone's like, they didn't say that. You're like, no, some people put all their content on the internet. Receipts. They did it themselves. "That's not true." And I went to a prayer vigil. I didn't go in; I sat outside a prayer vigil this weekend, and I listened in, and they were praying for the resurrection, like Jesus, of certain people that have passed on.
I kid you not, I sat there in the car with a friend of mine, and my youngest daughter had come with me just to hang out. She's like, what are they praying for? I was like, they were praying for a certain person to be resurrected from the dead just like Jesus. And I was so confused. I'm so confused how we got that far, honestly. But I told my kid, I said, this is a moment of reality for you. This is a moment to know: people think like this.

Starlette (28:13):
Also, white body

Danielle (28:14):
Supremacy is heresy. Yes. It's not even related to the Bible. Not at all. This is why I steal away. This is why even the mistranslated Bible, even the Bible that you could take,

Starlette (28:33):
This... the version Danielle started. If you wouldn't have said that, I wouldn't have said that. This is exactly why I steal away. This is exactly why I leave. Because you can't argue with people like that. Now we're resurrecting? All I need... it's like, steal away. This is exactly why. Because I can't hear what Howard Thurman calls the sound of the genuine in that. It's just not going to happen.

Danielle (29:01):
Can you imagine what would've happened if we would've prayed for George Floyd to be resurrected? Listen, what would've happened

Starlette (29:08):
That he called the scumbag.

Danielle (29:10):
Yeah, but what would've happened if we would've prayed for their resurrection? Adam, Adam Toledo. That

Starlette (29:19):
Was profound

Tamice (29:19):
Psychosis.

Starlette (29:21):
Yeah. What would've happened? See, don't push me now. I feel like I need to pack. As soon as I said steal away... It's like, people keep saying, what are you going to do if it gets worse? I'm going to leave. I'll sell all this crap, all this stuff, this booby trap of capitalism. I'll sell it all. Don't care about none of it. What matters most to me is my sense of somebodiness. And when you get to talking, I almost said talking out the side of your neck. Jesus, God, today. Lemme... God, Jesus. "Out the side of your neck." You just need to know that's a cultural thing.
That's going to have to be reevaluated. God. It just came right on out. Oh Lord. When you start saying things that go against my sense of somebodiness, that you think that I have to defend my personhood, that you want to tell me that I don't exist as a person, I don't exist as a human, back to your reality testament. It's time for me to leave. I'm not staying here and fighting a race war or a civil war. You mamas are just violent. It's what you've always been.

Tamice (30:28):
Why would I stand in the middle? Why would I stand in the middle of what I know is a confrontation with yourself?

Starlette (30:36):
Oh, okay. Alright. I'm going to just

Tamice (30:38):
You all. What happened last week, it is a confrontation with a really disturbed self, and they're trying to flip it. Oh yes, they're trying to make it... Yes. But this is like, I'm trying to tell people out here, this is beyond you, Jack. That was a prophetic witness against you, because now you see that what you're fighting is the mirror. Keep me out of it. I won't fight your wars. Keep me out of it. Look, James Baldwin said, y'all have to decide and figure out why you needed a nigger in the first place. I'm not a nigger. I'm a man. But you, the white people, need to figure out why you created the nigger in the first place. Fuck, this is not my problem. This is a y'all problem, and I don't have anything invested in this. All I'm trying to do is raise my kids, man. Come on. Get out of here with that. I'm sorry.

Danielle (31:48):
No, you keep going, and then go back to Starlette. Why do you think, then, they made her tarry? They had to make sure she doesn't buy into that. That's my opinion.

Tamice (32:00):
It's funny too, because, I mean, I wasn't Pentecostal. You know who's coming to mind as soon as you said that, Dee? Y'all know I'm hip hop. Right? So KRS-One.

Starlette (32:12):
Yes. Consciousness.

Tamice (32:14):
The mind. Oh yes, the mind, the imagination. He was, I mean, from day one, trying to embed that in the youth.
Like, hey, the battlefield is the mind. Are you going to internalize this bullshit? Are you going to let them name you?

Starlette (32:34):
This is the word.

Tamice (32:34):
Are you going to let them tell you what is real for the people of God? That's what I'm saying, man. Hip hop, hip hop's refusal has been refusal from day one. That's why I trust it. Because I've seen it. It came from the bottom of this place. It's from the bottom of your shoe. It tells the truth about all of this. So when I listen to hip hop, I know I'm getting the truth.

Starlette (32:57):
Yeah. Public Enemy. What did Public Enemy say? Can't trust it. Can't trust it. No, no, no, no. You got to play it back. We got to run all that back.

Danielle (33:11):
I just think how it's so weaponized, the dirt, the bottom of the shoe, all of that stuff. But that's where we actually... that's what's got it. Our bodies hitting the road, hitting the pavement, hitting the grass, hitting the dirt. That's how we know we're in reality, because we've been forced to in many ways, and we have a mindset that we are familiar with despite socioeconomic changes. We're familiar with that bottom place.

Tamice (33:38):
Yeah. I mean, the bottom place is where God is at. That's what y'all don't understand. God comes from black, dark dirt. Like, God is coming from darkness and hiddenness and mystery. You don't love darkness, you don't love God.

Starlette (33:56):
Talk now. This bottom place is not to be confused with the sunken place that some of y'all are in. I just want to be clear. I just want to be clear, and I'm not coming to get you. Y'all came on the wrong day today.

I think it's good, though, because there's so much intimidation in other communities at times. I'm not saying there's not, through the lynchings, ongoing lynchings and violence too, and the threats against colleges. But it's good for us to be reminded of our different cultural perspectives and hear people talk with power.
Why do you think Martin Luther King and Cesar Chavez wrote letters to each other? They knew something about that. They knew something about it. They knew something about why it's important to maintain the bonds, why we're different, why we're similar. They knew something about it. So I see it as a benefit and a growth in our reality. That is actually what threatens them: that relationship, that bond, that connection, that speaking life into one another. That's what threatens that kingdom that you're talking about. Yeah. You just can't fake an encounter either. When I was tarrying, no matter what I've decolonized and divested from and decentered, I cannot deny that experience. I know that God was present. I know that God touched me. So when Mother, even Sister, even my grandmother would call me when I was in college, first person to go to college in our family, she would say, before she asked about classes or anything else, and she really didn't know what to ask, she only had a sixth grade education, but her first question was always: you yet holding on? Right. She holding on? And I said, yes ma'am. Yes ma'am. Then she would, because if you couldn't keep the faith, there really wasn't nothing else for her to talk to you about. She was going to get ready to evangelize and get you back because you backslid. But that was her first thing. But what I've learned since then is that I can let go. The amazing thing is that the Spirit is guiding me. I didn't let go altogether. You got it. You got it. If it's real, if you're real, prove it. Demonstrate it. I'm getting chills now. Talk to me without me saying anything. Touch me. I shouldn't have to do anything. Eugene Peterson says that prayer is answering speech. In fact, the only reason why I'm praying is because you said something to me first. It's not really on me to do anything. Even with the tarrying, I was already touched. I was already called.
The reason why I was on my knees and pleading is because I'd already been compelled. Something had already touched me first. They call the Holy Spirit the hound of heaven. Damn right. It was already on my heels. I was already filled before I could even refuse. I was like, I don't want this. I'm going to always be like Jonah. Get your people? I prefer fish guts. Throw me overboard. I don't like these people. Certified prophet, because I don't want to do it. I never want to do it. I'm not interested at all. I have... no. Too much history. I've had to deal with too much white body supremacy and prejudice and racism to want anything to do with the church. I see it for what it is. I'll never join one. By the way, are we recording? Is it on? I'm never joining a church, ever, until you all desegregate. You desegregate, then we can talk about your ministry of reconciliation. Until then, you don't have one. Don't talk to me about a community day or a pulpit swap. I don't want to hear it. All your praise, what did he say? A clanging cymbal. Put away from me your conferences, all your multiracial... I don't want to hear none of it. Desegregate. That part. Desegregate, you hypocrites. Woe unto all of you white supremacists, if nobody ever told you. That's not God. It's not of God. So for me, my reality is so above me. I know that, Paul, because when I don't want to say anything, somebody is in my ear. Somebody was talking to me this morning. Somebody was writing a note in my ear. I had to get up. I said, please. I'm not even awake all the way. Stop talking to me. You can't fake that. As much as I push against the Holy Spirit, you can't fake that. I don't want to do it. I don't want to say it. I'm tired of saying it. And yet I get up in the morning and it's like: say this, post that, write that. Somebody else is doing that. That's not me. As the mothers say, my flesh is weak. My flesh is not willing at all. I want to... all of y'all can go on.
I'll pack this up and move somewhere else. Let them fight it to the death. I'm not going to... this is just my flesh speaking. Forgive me. Okay. This raceless gospel is a calling, friends. It's a calling. It's a calling, which means you coming into it. I'm an itinerant prophet. I'm heavy into the Hebrew scriptures. I come up with every excuse. My throat hurts. I got a speech impediment. The people don't like me. I'm not educated. It don't work. You need to know, when people come to you and say y'all need to get together, God is speaking to you. The Pentecost is coming. That's not like an invitation. That's kind of like a threat: whether you want it or not, you're getting together. Everybody up. There's a meal ready, there's a banquet that is set, and the food is getting cold, and you are the reason why the drinks are watered down. That's God: don't you hear me calling you? Come. That's what I keep hearing. You have to know that God is speaking to people and saying that there's an invitation coming, and you better get right. You better get washed up. Tamice said, you better let somebody pour that water over your hands. You better get washed up and get ready for dinner. I'm calling you. Come on in this house. Come on in this house. And this house is for everybody. Martin Luther King called it the world house. Everybody's coming in, and you ain't got to like it. It doesn't matter. Get somewhere and sit down. That's that old church mother coming out of me, and lemme just confess: I didn't even want to be on here this morning. I told God I didn't feel like talking. I told the Lord, and you see what happened. Promise you. I'm a child. I'm full of disobedience. I was not in the mood. I said, I don't want to talk to nobody. I'm an introvert. I don't want to deal with none of this. Get somebody else to do it. And look at it.

Tamice (40:39):
Yeah. It's funny, because I woke up this morning, I was like, I'm not... I forgot.
And then after all of the news today, I was like, I just don't have it in me. But this is... wait a minute. And it was three minutes past the time. Come on. And I was like, oh, well shoot, the house is empty, nobody's here right now. I was like, well, lemme just log on. So this is definitely, it feels like, our calling. I do feel that way. I don't have time to bullshit. So I can't get out of it. I can't go to bed. I might as well say something. It won't let me go. I cannot do deceit. I can't do it. I can't sit idly by while people lie on God. I can't do that. I can't do it. It won't let up. And I'm trying to get in my body, get in this grass, and get a little space. But I'm telling you, it won't let me go. And I feel it's important, Dee. You can't stop doing what you're doing. That's right. I mean, this thing, it is beyond me. It is living out of me. It's coming through me. And there has to be a reason for this. There's got to be a reason for this. And I don't know what it is, because I know my eschatology is different, but I feel like, buddy, we got to manifest this kingdom. We have to manifest it until it pushes all that shit back. Come on. I'm telling you. Till it scurries away or renders it null and void, I'm talking. I mean, I want the type of light and glory on my being that wicked logic disintegrates, wicked people drop dead. I mean, that's just in the Bible. In the Bible, where Herod falls headlong and worms eat him. Y'all celebrate that. Why can't I think about that? It's in your scriptures. Or Dagon, and the thing breaks, and the legs of this false god break. I want that. I'm here for that. I'm going after that.

Danielle (43:14):
You think that this is what the definition of tarrying is? That we're all tarrying? Serious, I'm rocking the whole time. I'm serious. Right. That's what I told my kids. I said, in one sense, this is one person of many that thinks this way. So we can't devote all our conversation in our house to this man.
And I said, in the other sense, because Starlette was asking me before we got on here, how you doing? I said, we got up, and I took calls from this person and that person, and I told my kids, we're still advocating and doing what we can for the neighbors that need papers. And so we're going to continue doing that. That is the right thing to do. No matter what anybody else is doing in the world, we can do this.

Tamice (43:56):
Yeah, that's a good call. I mean, I'm headed to... I ain't going to say where I'm going no more, but I'm headed somewhere, going to be with people who are doing some innovation, right? Thinking, how do we build a different world? How do our skillsets and passions coalesce and become something other than this? So I'm excited about that. And it's like, that fire, it doesn't just drive me to want to rebuke. It does drive me to want to rebuild and rethink how we do everything. And I'm willing. I mean, I don't know about y'all, but I feel like this: I'm getting out of Dodge, but also I'm seeking the peace of the city. I feel both. I feel like I'm not holding hands with ridiculousness, and I'm not moving in foolishness, but also I'm finna seek the peace of the city, my G. I'm not running from delusion. Why would I? I'm in the truth. So I don't know how that maps onto a practical life, but we're finna figure it out, out in it. I mean, the response of leadership to what has happened is a very clear sign of where we are in terms of fascism. That's a very clear sign. What else are y'all looking for to tell you what it is?

Danielle (45:36):
But also, we're the leaders. We are. We're the leaders. They're a leader of something, but they're not the leader of us. We're the leaders. So no matter what they say, no matter what hate they spew... I really love Cesar Chavez. He's like, I still go out and feed the farm worker, and I don't make them get on the boycott line, because if they're pushed under the dirt, then they can't see hope.
So people that have more economic power, a little more privilege than the other guy, we're the leaders. We're the ones that keep showing up in love. And love is a dangerous thing for these folks. They can't understand it. They can't grasp it. It is violent for them to feel love. Bodies actually reject it. And the more we show up... You're innovating. You're speaking, Starla. You're preaching. We're the leaders. They're leaders of something. They're not leaders of us. We're leaders of freedom.

Tamice (46:31):
Come on now, D. We're leaders of "give us this"

Starlette (46:34):
Balm. We're leaders of compassion. You coming in here with the Holy Ghost, acting like one of them church mothers. We were in the room together. She put her hand on us. You

Danielle (46:43):
We're the ones that can remember Trayvon. We're the ones that can call for justice. We don't need them to do it. They've never done it right anyway. They have never showed up for a Mexican kid. They've never showed up for a black kid. They've never done it right anyway. We're the ones that can do it now. We have access to technology. We have access to our neighbors. We can bring a meal to a friend. We can give dollars to someone that needs gas. We're the ones doing it. We're the ones that's doing it.

Tamice (47:11):
Fill us

Danielle (47:12):
Up. They cannot take away our love.

Starlette (47:15):
Receive the benediction.

Danielle:
Yeah. They can't take it away. I'm telling you, if I saw someone shooting someone I hate, I would try to save that person. I don't own guns. I don't believe in guns, period. My family, that's my personal family's belief. And I would do that. I've thought about it many times. I thought, would I do it? And I think I would, because I actually believe that. I believe that people should not be shot dead. I believe that for the white kid. I believe that for the Mexican kid. I believe that for the black kid. We're the people that can show up. They're not going to come out here.
They're inviting us to a different kind of war. We're not in that war. That's right. We have love on our side, and you cannot defeat love, kill love. You can't

Tamice (48:04):
Kill love, and you can't kill life. That's the only reason somebody would ask you to be nonviolent. That's the only way somebody would have the audacity to ask that of you, especially if you're oppressed. If the truth is that you can't kill love or life... damn, man. It's hard out here for a pimp.

Starlette (48:38):
Really. Really? Yeah. Because what I really want to say is

Tamice (49:27):
I can't. Your testimony... a lie? No. Your testimony, that would be a lie. And like I said, truth telling is important. But there are days where I could be that, I could go there. But I witnessed what happened that day. I watched the video. It's just not normal to watch that happen to anybody, and I don't care who you are. And the fact that we're there is just, objectively, just wow. And the fact that all of the spin... do y'all not realize what just happened? Just as an actual event. Right? What? You know what I'm saying? How has this turned into diatribes? Right? "We need reform." I... which

Danielle (50:29):
Okay, so I have to cut us off. I have a client coming, but I want to hear from you: given all the nuance and complexity, how are you going to take care of your body this week, or even just today? It doesn't have to be genius. Just one or two things you're going to do.

Tamice (50:51):
Oh, I'm going to take a nap. Yeah, you taking a nap? Y'all be so proud of me. I literally just said no to five things. I was like, I'm not coming to this. I'm not doing that. I won't be at this. I'm grieving. I'm going to go sit in the grass. Yeah, that's what I'm doing today. And I have stuff coming up. I'm like, nope, I'm not available.

Starlette (51:14):
What about you, Danielle? What are you going to do?

Danielle (51:16):
I'm going to eat scrambled eggs with no salt. I love that. I've grown my liver back, so I have to have no salt.
But I do love scrambled eggs. Scrambled eggs. That's the truth. Four. Four scrambled eggs.

Starlette (51:31):
And we thank you for your truth.

BIO:

The Reverend Dr. Starlette Thomas is a poet, practical theologian, and itinerant prophet for a coming undivided "kin-dom." She is the director of The Raceless Gospel Initiative, named for her work and witness, and an associate editor at Good Faith Media. Starlette regularly writes on the sociopolitical construct of race and its longstanding membership in the North American church. Her writings have been featured in Sojourners, Red Letter Christians, Free Black Thought, Word & Way, Plough, Baptist News Global, and Nurturing Faith Journal, among others. She is a frequent guest on podcasts and has her own: The Raceless Gospel podcast takes her listeners to a virtual church service where she and her guests tackle that taboo trinity of race, religion, and politics. Starlette is also an activist who bears witness against police brutality and, most recently, the cultural erasure of the Black Lives Matter Plaza in Washington, D.C. It was erected in memory of the 2020 protests that brought the world together through a shared declaration of somebodiness after the gruesome murder of George Perry Floyd, Jr. Her act of resistance caught the attention of the Associated Press. An image of her reclaiming the rubble went viral, and in May she was featured in a CNN article. Starlette has spoken before the World Council of Churches North America and the United Methodist Church's Council of Bishops on the color-coded caste system of race and its abolition. She has also authored and presented papers to the members of the Baptist World Alliance in Zurich, Switzerland and Nassau, Bahamas to this end.
She has cast a vision for the future of religion at the National Museum of African American History and Culture's "Forward Conference: Religions Envisioning Change." Her paper was titled "Press Forward: A Raceless Gospel for Ex-Colored People Who Have Lost Faith in White Supremacy." She has lectured at The Queen's Foundation in Birmingham, U.K., on a baptismal pedagogy for antiracist theological education, leadership, and ministries. Starlette's research interests have been supported by the Louisville Institute and the Lilly Foundation. Examining the work of the Reverend Dr. Clarence Jordan, whose farm turned "demonstration plot" in Americus, Georgia refused to agree to the social arrangements of segregation because of his Christian convictions, Starlette now takes this dirt to the church. Her thesis is titled "Afraid of Koinonia: How life on this farm reveals the fear of Christian community." A full-circle moment: she was recently invited to write the introduction to Jordan's newest collection of writings, The Inconvenient Gospel: A Southern Prophet Tackles War, Wealth, Race and Religion.

Starlette is a member of the Christian Community Development Association, the Peace & Justice Studies Association, and the Koinonia Advisory Council. A womanist in ministry, she has served as a pastor as well as a denominational leader. An unrepentant academician and bibliophile, Starlette holds degrees from Buffalo State College, Colgate Rochester Crozer Divinity School, and Wesley Theological Seminary. Last year, she was awarded an honorary doctorate in Sacred Theology for her work and witness as a public theologian from Wayland Baptist Theological Seminary. She is the author of "Take Me to the Water": The Raceless Gospel as Baptismal Pedagogy for a Desegregated Church and a contributing author of the book Faith Forward: A Dialogue on Children, Youth & a New Kind of Christianity.

Dr. Tamice Spencer-Helms

God is not a weapon.
Authenticity is not a phase. Meet Tamice Spencer-Helms (they/she). Tamice is a nonprofit leader, scholar-practitioner, pastor, and theoactivist based in Richmond, Virginia. For decades, Tamice has been guided by a singular purpose: to confront and heal what they call “diseased imagination,” the spiritual and social dis-ease that stifles agency, creativity, and collective flourishing. As a pastor for spiritual fugitives, Tamice grounds their work at the intersection of social transformation, soulful leadership, womanist and queer liberation theologies, and cultural critique. A recognized voice in theoactivism, Tamice's work bridges the intellectual and the embodied, infusing rigorous scholarship with lived experience and spiritual practice. They hold two master's degrees (theology and leadership) and a doctorate in Social Transformation. Their frameworks, such as the R.E.S.T. Mixtape and Soulful Leadership, are research- and evidence-based interventions that invite others into courageous truth-telling, radical belonging, and the kind of liberating leadership our times demand. Whether facilitating retreats, speaking from the stage, consulting for organizations, or curating digital sanctuaries, Tamice's presence is both refuge and revolution. Their commitment is to help individuals and communities heal, reimagine, and build spaces where every person is seen, known, and liberated, where diseased imagination gives way to new possibilities. 
Kitsap County & Washington State Crisis and Mental Health Resources

If you or someone else is in immediate danger, please call 911. This resource list provides crisis and mental health contacts for Kitsap County and across Washington State.

Kitsap County / Local Resources
• Salish Regional Crisis Line / Kitsap Mental Health 24/7 Crisis Call Line. Phone: 1-888-910-0416. Website: https://www.kitsapmentalhealth.org/crisis-24-7-services/ Offers 24/7 emotional support for suicide or mental health crises, mobile crisis outreach, and connection to services.
• KMHS Youth Mobile Crisis Outreach Team. Emergencies via the Salish Crisis Line: 1-888-910-0416. Website: https://sync.salishbehavioralhealth.org/youth-mobile-crisis-outreach-team/ Crisis outreach for minors and youth experiencing behavioral health emergencies.
• Kitsap Mental Health Services (KMHS). Main: 360-373-5031; Toll-free: 888-816-0488; TDD: 360-478-2715. Website: https://www.kitsapmentalhealth.org/crisis-24-7-services/ Outpatient, inpatient, crisis triage, substance use treatment, stabilization, and behavioral health services.
• Kitsap County Suicide Prevention / “Need Help Now”. Call the Salish Regional Crisis Line at 1-888-910-0416. Website: https://www.kitsap.gov/hs/Pages/Suicide-Prevention-Website.aspx 24/7/365 emotional support; connects people to resources; suicide prevention assistance.
• Crisis Clinic of the Peninsulas. Phone: 360-479-3033 or 1-800-843-4793. Website: https://www.bainbridgewa.gov/607/Mental-Health-Resources Local crisis intervention services, referrals, and emotional support.
• NAMI Kitsap County. Website: https://namikitsap.org/ Peer support groups, education, and resources for individuals and families affected by mental illness.

Statewide & National Crisis Resources
• 988 Suicide & Crisis Lifeline (WA-988). Call or text 988. Website: https://wa988.org/ Free, 24/7 support for suicidal thoughts, emotional distress, relationship problems, and substance concerns.
• Washington Recovery Help Line. Phone: 1-866-789-1511. Website: https://doh.wa.gov/you-and-your-family/injury-and-violence-prevention/suicide-prevention/hotline-text-and-chat-resources Help for mental health, substance use, and problem gambling; 24/7 statewide support.
• WA Warm Line. Phone: 877-500-9276. Website: https://www.crisisconnections.org/wa-warm-line/ Peer-support line for emotional or mental health distress; support outside of crisis moments.
• Native & Strong Crisis Lifeline. Dial 988, then press 4. Website: https://doh.wa.gov/you-and-your-family/injury-and-violence-prevention/suicide-prevention/hotline-text-and-chat-resources Culturally relevant crisis counseling by Indigenous counselors.

Additional Helpful Tools & Tips
• Behavioral Health Services Access: Request assessments and access to outpatient, residential, or inpatient care through the Salish Behavioral Health Organization. Website: https://www.kitsap.gov/hs/Pages/SBHO-Get-Behaviroal-Health-Services.aspx
• Deaf / Hard of Hearing: Use your preferred relay service (for example, dial 711 then the appropriate number) to access crisis services.
• Warning Signs & Risk Factors: If someone is talking about harming themselves, giving away possessions, expressing hopelessness, or showing extreme behavior changes, contact crisis resources immediately.

Well, first I guess I would have to believe that there was or is an actual political dialogue taking place that I could potentially be a part of. And honestly, I'm not sure that I believe that.

L’Heure du Monde
The Mysterious Disappearance of the Fingers [RERUN]

L’Heure du Monde

Play Episode Listen Later Aug 15, 2025 15:25


So where have the Fingers gone? For several months, these very sweet chocolate biscuits sold by the Mondelez group have vanished from French shelves without the slightest explanation. Like the Figolu in 2015, other biscuits have disappeared before, only to reappear after consumers mobilized. This time, the story seems to be different. So how can such a well-known product disappear without explanation? Could it be a publicity stunt? And what does it say about our relationship with food when our favorite products have become massively industrialized? In this episode of the podcast “L'Heure du Monde,” journalist Coline Clavaud-Mégevand looks back on the investigation she carried out on the subject for “M Le magazine du Monde.”
An episode produced and presented by Adèle Ponticelli with the help of Marion Bothorel. Production: Quentin Bresson. Music: Amandine Robillard. This episode was originally published on December 16, 2024.
---
To support “L'Heure du Monde” and our newsroom, subscribe at abopodcast.lemonde.fr
Hosted by Audion. Visit https://www.audion.fm/fr/privacy-policy for more information.

Nate's Daily Wager
Nate's Nathan's Hot Dog Eating Championship Wager 0704

Nate's Daily Wager

Play Episode Listen Later Jul 4, 2025 3:10


Nate's Nathan's Hot Dog Eating Championship Wager 0704. Like our show ☝️

The Connor Happer Show
George Shea - Major League Eating (Mon 6/30 - Seg 10)

The Connor Happer Show

Play Episode Listen Later Jun 30, 2025 19:01


The commissioner of MLE and the emcee of the Nathan's Hot Dog Eating Contest on the 4th of July joins us for one of our favorite segments each and every year.

Vipcast.hu powered by Media1
The AI Guide is complete – an interview with Tibor Kovács, president of the MLE, about the artificial intelligence guide

Vipcast.hu powered by Media1

Play Episode Listen Later Jun 24, 2025 44:21


The first comprehensive Hungarian AI handbook has been completed through an exemplary collaboration among the most significant Hungarian professional communications organizations, announced Tibor Kovács, president of the Hungarian Publishers' Association (Magyar Lapkiadók Egyesülete, MLE), on Media1's program. What is worth knowing about the new guide on artificial intelligence, and what dilemmas came up while the AI guide was being prepared? How should AI be used, and how should it not be used? About all this ... Read more. The post “The AI Guide is complete – an interview with Tibor Kovács, president of the MLE, about the artificial intelligence guide” first appeared on Vipcast.hu powered by Media1.

Lynch and Taco
7:15 Idiotology June 12, 2025

Lynch and Taco

Play Episode Listen Later Jun 12, 2025 11:53 Transcription Available


Court dismisses father's lawsuit against a newspaper over lack of high school basketball coverage (specifically, HIS son's team); Florida Man showed up at Sam's Club in Lady Lake and peed all over the Spam and Vienna Sausages; Joey Chestnut is trying to negotiate a return to MLE in time for the Nathan's Hot Dog Eating Contest on the 4th of July.

The Power's Point Podcast
Stomach Stretchers

The Power's Point Podcast

Play Episode Listen Later Apr 18, 2025 42:45 Transcription Available


What drives someone to consume 76 hot dogs in 10 minutes or choke down eight pounds of pure mayonnaise? The fascinating and sometimes disturbing world of competitive eating exists at the intersection of sport, spectacle, and sheer human determination. We dive deep into this peculiar subculture that's evolved dramatically over the years. From the famous Nathan's Hot Dog Eating Contest to obscure challenges involving beef tongue and jalapeno peppers, these competitions push human bodies to their absolute limits. Joey Chestnut and Takeru Kobayashi emerge as the titans of the industry, with Chestnut holding an astonishing number of world records across various food categories. The science behind competitive eating proves surprisingly complex. We explore how these gastronomic athletes train their bodies through stomach-stretching techniques, drinking gallons of water before events, and even learning to partially dislocate their jaws. These aren't just people with big appetites: they're dedicated competitors who approach eating with strategic precision. What began as casual county fair entertainment has transformed into a global phenomenon with significant cash prizes. The Wing Bowl offers $50,000 to its champion, while most competitions range between $2,500 and $10,000 for first place. For those at the top of the field, competitive eating can become a legitimate career path, though one that raises serious questions about long-term health consequences. As we debate which food challenges we might personally attempt, from White Castle sliders to deviled eggs, we're left wondering: is competitive eating an impressive display of human potential, or simply a grotesque spectacle? 
Whatever your take, one thing's certain: it's impossible to look away. Join us for this eye-opening exploration of what happens when eating becomes sport, and discover why these food warriors continue to push the boundaries of what we thought humanly possible. Thank you for giving us a go, and we hope you stick with us; we have some really amazing guests coming on, and we hope you have a laugh or two, but no more than three. Support the show. Thank you for joining us on today's show; as always, we appreciate each and every one of you! Talk to you soon. X: @PodcastScott | IG: Powers31911

The Power's Point Podcast
The 27 Club

The Power's Point Podcast

Play Episode Listen Later Apr 8, 2025 35:41 Transcription Available


What dark forces connect legendary musicians who died at exactly 27? The mysterious "27 Club" includes some of music's most brilliant minds: Jimi Hendrix, Janis Joplin, Jim Morrison, Kurt Cobain, and Amy Winehouse, all claimed at the same haunting age. Our hosts dive deep into the origins of this phenomenon, tracing it back to blues legend Robert Johnson, whose supernatural guitar skills spawned myths of a deal with the devil at the crossroads. When Johnson mysteriously died at 27 in 1938, it began what would become a disturbing pattern. The conversation takes particularly fascinating turns when examining recent revelations about Kurt Cobain's death. A new witness claims to have been present when Cobain was murdered, contrary to the official suicide ruling. We explore the compelling evidence suggesting Cobain's suicide note may have been partially forged, and the suspicious timing of Hole bassist Kristen Pfaff's death shortly afterward, also at 27. Beyond the sensational theories, we examine what makes this phenomenon so captivating. Is it merely confirmation bias focusing on famous people who happened to die at the same age? Or does the intense pressure of fame, coupled with substance abuse and the "live fast, die young" lifestyle, create a perfect storm for vulnerable young artists? We even discuss the bizarre "white lighter curse," the superstition that white lighters were found at multiple 27 Club death scenes. Whether you believe in cosmic connections or statistical coincidences, this episode offers a thoughtful exploration of creativity, fame's dark side, and our need to find meaning in tragedy. Email us at powerspointpodcast@yahoo.com with your own theories or suggestions for future topics! Thank you for giving us a go, and we hope you stick with us; we have some really amazing guests coming on, and we hope you have a laugh or two, but no more than three. Support the show. Thank you for joining us on today's show; as always, we appreciate each and every one of you! 
Talk to you soon. X: @PodcastScott | IG: Powers31911

Major League Eventing Podcast
Valerie Pride 5* Eventer & FEI Level 3 Eventing Dressage Judge

Major League Eventing Podcast

Play Episode Listen Later Mar 19, 2025 57:05


Karen and Robby welcome back 5* Eventer and FEI Level 3 Eventing Dressage judge Valerie Pride. Valerie was on the MLE podcast back on October 21, 2020, and we had a lot of catching up to do. Valerie most recently judged at the Maryland 5* and explains everything that the Ground Jury has to do - it's not just judging Dressage. She also has some exciting news that will take her on a European tour this fall to judge the European Championships at Blenheim, the 7-year-old Championships at Cornbury and then Burghley. Valerie also competes and runs her business, Blue Clover Eventing, and she explains how she is able to compete, run her business and judge. We also ask her some Dressage questions that we hope everyone is able to learn something from.
PC: Shannon Brinkman
To follow Valerie:
https://www.blueclovereventing.com/about-valerie/
https://www.instagram.com/blueclovereventing/?hl=en
https://www.facebook.com/blueclovereventing
Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/
Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE
Check out the Major League Eventing store!
https://www.majorleagueeventing.com/shop 

Major League Eventing Podcast
Catching Up with Ariel Grald

Major League Eventing Podcast

Play Episode Listen Later Mar 12, 2025 42:01


Karen and Robby catch up with previous guest, 5* eventer Ariel Grald. Since Ariel was last on the MLE podcast in 2019, she has placed 3rd at Luhmühlen and placed 11th individually at the World Championships in Pratoni with her longtime partner Leamore Master Plan. At the Kentucky 5* in 2024, Simon suffered an injury, and he is now living the semi-retired life - fuzzy and chunky - with hopes to slowly bring him back to some sort of competing, maybe in the show jumping ring. Even with Simon sidelined, Ariel is busy with several top horses that she has big plans for this spring and fall. We hope you enjoy hearing everything Ariel has going on!
PC: Shannon Brinkman
Follow Ariel's journey:
https://www.instagram.com/arielmgrald/?hl=en
https://www.facebook.com/amgequestrian/
Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/
Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE
Check out the Major League Eventing store!
https://www.majorleagueeventing.com/shop 

Bax of All Trades
How To Break Into Tech (Economics to NVIDIA Engineer) | BoaT #24

Bax of All Trades

Play Episode Listen Later Feb 14, 2025 75:18


In this episode of BoaT, I interview my good friend Ishan Dhanani, who is an MLE for Inference at NVIDIA. Less than 2 years ago, Ishan graduated with an Economics degree from Texas A&M. Since then, he has dropped out of Columbia, been acquired twice (once by Brev.dev, once by NVIDIA), and moved across the country. We discuss how you can become technical, the future of AI, and much more. Enjoy!
FOLLOW ISHAN: https://x.com/0xishand
CONNECT WITH ME

The Best Advice Show
Try These Witchy, Water-based Maneuvers to Improve Your Life with Dr. MLE

The Best Advice Show

Play Episode Listen Later Feb 12, 2025 8:36


Emily Carr/Dr. MLE is a Wellness Witch, a professional poet, amateur Tarot reader, joyous revolutionary arts educator, and fitness coach. Subscribe to her newsletter @ https://soulfloss.substack.com/
LISTEN: You Should Buy Mice and Other Transformative Advice from Lucy Anderton
Fill out the first-ever TBAS listener survey to help Zak get to know you better and to enter the drawing to win a custom designed shirt by Zak and his daughter @ https://forms.gle/f1HxJ45Df4V3m2Dg9
---
Help Zak continue making this show by becoming a Best Advice Show Patron @ https://www.patreon.com/bestadviceshow
---
Call Zak on the advice show hotline @ 844-935-BEST
---
Share this episode on IG @BestAdviceShow

The Dane Moore NBA Podcast
The Julius Randle Player Option Conundrum + A Win In Orlando w/ Kyle Theige

The Dane Moore NBA Podcast

Play Episode Listen Later Jan 10, 2025 98:35


On today's show, Dane is joined by Kyle Theige to discuss the looming question of what is going to happen with Julius Randle's player option. Dane and Kyle take a look at the Wolves salary cap sheet and, in that, detail how much they have to spend if Randle opts in and how much added financial flexibility they'll receive if he opts out. After salary cap talk, Dane and Kyle also discuss themes from the win in Orlando and ripple effects of the starting lineup change. Specific topics and timestamps below...
- Wolves cap sheet update + the two paths of the Randle player option (2:00)
- What position to use the MLE on in free agency, if Randle opts out/isn't on the team (15:00)
- Big games for all 3 bigs in an ideal Orlando matchup (36:00)
- Finch's applause for Ant finding consistent effectiveness in games following his comments (45:00)
- Ripples of the starting lineup change and how they impact DiVincenzo and McDaniels specifically (65:00)
If you'd like to support our partners...
-- Try out our new sponsor WtrMln Wtr at Whole Foods or Target: https://drinkwtrmln.com/
-- Contact Adrianna Lonick with Coldwell Banker Realty for a free consultation at https://www.thedancingrealtor.com/ or call/text 715-304-9920
-- For more information on Treasure Island Watch Parties, visit https://www.ticasino.com
-- Get yourself a pair of Duer jeans for 20% off by going to https://www.shopduer.com/danemoore
-- Contact Your Home Improvement Company: https://www.yourhomeimprovementco.com/
-- Sign up for Prize Picks, promo code "DANE" for a signup bonus: https://www.prizepicks.com/
-- Want to advertise on the show? Reach out to DaneMooreProductions@gmail.com
-- Support the show by subscribing for $5 a month: https://www.patreon.com/DaneMooreNBA
#BlueWireVideo
Learn more about your ad choices. Visit podcastchoices.com/adchoices

Major League Eventing Podcast
2024 Year In Review

Major League Eventing Podcast

Play Episode Listen Later Jan 1, 2025 26:45


Happy New Year! Karen and Robby sit down with a pour of a 15-year Pappy Van Winkle to discuss all the things that happened with MLE in 2024. We go over how we are listened to in 68 countries, our top 5 US cities, and the top 5 most-listened-to episodes. You'll even get a little insight into what we have going on personally, as well as all things Corgi related. We hope you enjoy!
Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/
Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE
Check out the Major League Eventing store!
https://www.majorleagueeventing.com/shop 

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Happy holidays! We'll be sharing snippets from Latent Space LIVE! through the break, bringing you the best of 2024! We want to express our deepest appreciation to event sponsors AWS, Daylight Computer, Thoth.ai, StrongCompute, Notable Capital, and most of all our LS supporters who helped fund the venue and A/V production!
For NeurIPS last year we did our standard conference podcast coverage interviewing selected papers (that we have now also done for ICLR and ICML); however, we felt that we could be doing more to help AI Engineers 1) get more industry-relevant content, and 2) recap 2024 year in review from experts. As a result, we organized the first Latent Space LIVE!, our first in-person miniconference, at NeurIPS 2024 in Vancouver.
Since Nathan Lambert (Interconnects) joined us for the hit RLHF 201 episode at the start of this year, it is hard to overstate how much Open Models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course Meta's Llama 1 and 2. This year a whole cast of new open models have burst on the scene, from Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course to the Allen Institute's OLMo, OLMoE, Pixmo, Molmo, and OLMo 2 models. We were honored to host Luca Soldaini, one of the research leads on the OLMo series of models at AI2.
Pursuing Open Model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe, California and the White House. 
We were also honored to hear from Sophia Yang, head of devrel at Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track!
Full Talk on YouTube. Please like and subscribe!
Timestamps
* 00:00 Welcome to Latent Space Live
* 00:12 Recap of 2024: Best Moments and Keynotes
* 01:22 Explosive Growth of Open Models in 2024
* 02:04 Challenges in Open Model Research
* 02:38 Keynote by Luca Soldaini: State of Open Models
* 07:23 Significance of Open Source AI Licenses
* 11:31 Research Constraints and Compute Challenges
* 13:46 Fully Open Models: A New Trend
* 27:46 Mistral's Journey and Innovations
* 32:57 Interactive Demo: Le Chat Capabilities
* 36:50 Closing Remarks and Networking
Transcript (Session 3 Audio)
[00:00:00] AI Charlie: Welcome to Latent Space Live, our first mini conference held at NeurIPS 2024 in Vancouver. This is Charlie, your AI co-host. As a special treat this week, we're recapping the best of 2024 going domain by domain. We sent out a survey to the over 900 of you who told us what you wanted, and then invited the best speakers in the Latent Space network to cover each field.
[00:00:28] AI Charlie: 200 of you joined us in person throughout the day, with over 2,200 watching live online. Our next keynote covers the state of open models in 2024, with Luca Soldaini and Nathan Lambert of the Allen Institute for AI, with a special appearance from Dr. Sophia Yang of Mistral. Our first hit episode of 2024 was with Nathan Lambert on RLHF 201 back in January.
[00:00:57] AI Charlie: Where he discussed both reinforcement learning for language [00:01:00] models and the growing post training and mid training stack with hot takes on everything from constitutional AI to DPO to rejection sampling and also previewed the sea change coming to the Allen Institute. 
And to Interconnects, his incredible substack on the technical aspects of state-of-the-art AI training.
[00:01:18] AI Charlie: We highly recommend subscribing to get access to his Discord as well. It is hard to overstate how much open models have exploded this past year. In 2023, only five names were playing in the top LLM ranks: Mistral, Mosaic's MPT, TII UAE's Falcon, Yi from Kai-Fu Lee's 01.ai, and of course, Meta's Llama 1 and 2.
[00:01:43] AI Charlie: This year, a whole cast of new open models have burst on the scene. From Google's Gemma and Cohere's Command R, to Alibaba's Qwen and DeepSeek models, to LLM360 and DCLM, and of course, to the Allen Institute's OLMo, [00:02:00] OLMoE, Pixmo, Molmo, and OLMo 2 models. Pursuing open model research comes with a lot of challenges beyond just funding and access to GPUs and datasets, particularly the regulatory debates this year across Europe.
[00:02:14] AI Charlie: California and the White House. We also were honored to hear from Mistral, who also presented a great session at the AI Engineer World's Fair Open Models track. As always, don't forget to check the show notes for the YouTube link to their talk, as well as their slides. Watch out and take care.
[00:02:35] Luca Intro
[00:02:35] Luca Soldaini: Cool. Yeah, thanks for having me over. I'm Luca. I'm a research scientist at the Allen Institute for AI. I threw together a few slides on sort of like a recap of like interesting themes in open models for, for 2024. Have about maybe 20, 25 minutes of slides, and then we can chat if there are any questions.
[00:02:57] Luca Soldaini: If I can advance to the next slide. [00:03:00] Okay, cool. So I did the quick check of like, to sort of get a sense of like, how much 2024 was different from 2023. 
So I went on Hugging Face and sort of get, tried to get a picture of what kind of models were released in 2023 and like, what do we get in 2024?
[00:03:16] Luca Soldaini: 2023 we get, we got things like both Llama 1 and 2, we got Mistral, we got MPT, Falcon models, I think the Yi model came in at the end. Tail end of the year. It was a pretty good year. But then I did the same for 2024. And it's actually quite stark difference. You have models that are, you know, rivaling frontier level.
[00:03:38] Luca Soldaini: Performance of what you can get from closed models from like Qwen, from DeepSeek. We got Llama 3. We got all sorts of different models. I added our own OLMo at the bottom. There's this growing group of like, fully open models that I'm going to touch on a little bit later. But you know, just looking at the slides, it feels like 2024 [00:04:00] was just smooth sailing, happy knees, much better than previous year.
[00:04:04] Luca Soldaini: And you know, you can plot you can pick your favorite benchmark Or least favorite, I don't know, depending on what point you're trying to make. And plot, you know, your closed model, your open model and sort of spin it in ways that show that, oh, you know open models are much closer to where closed models are today versus to Versus last year where the gap was fairly significant.
[00:04:29] Luca Soldaini: So one thing that I think I don't know if I have to convince people in this room, but usually when I give this talks about like open models, there is always like this background question in, in, in people's mind of like, why should we use open models? APIs argument, you know, it's, it's. Just an HTTP request to get output from a, from one of the best model out there.
[00:04:53] Luca Soldaini: Why do I have to set up infra and use local models? And there are really like two answer. There is the more [00:05:00] researchy answer for this, which is where it might be. Background lays, which is just research. 
If you want to do research on language models, research thrives on, on open models, there is like large swath of research on modeling, on how these models behave on evaluation and inference on mechanistic interpretability that could not happen at all if you didn't have open models they're also for AI builders, they're also like.
[00:05:30] Luca Soldaini: Good use cases for using local models. You know, you have some, this is like a very not comprehensive slides, but you have things like there are some application where local models just blow closed models out of the water. So like retrieval, it's a very clear example. We might have like constraints like Edge AI applications where it makes sense.
[00:05:51] Luca Soldaini: But even just like in terms of like stability, being able to say this model is not changing under the hood. It's, there's plenty of good cases for, [00:06:00] for open models. And the community is just not models. Is I stole this slide from one of the Qwen2 announcement blog posts. But it's super cool to see like how much tech exists around open models and serving them on making them efficient and hosting them.
[00:06:18] Luca Soldaini: It's pretty cool. And so. It's if you think about like where the term opens come from, comes from like the open source really open models meet the core tenants of, of open, of open source specifically when it comes around collaboration, there is truly a spirit, like through these open models, you can build on top of other people.
No, the first step is like, okay, what are the cool data sources and datasets people have put [00:07:00] together for language model for training?[00:07:01] Luca Soldaini: Or when it comes to like our post training pipeline We one of the steps is you want to do some DPO and you use a lot of outputs of other models to improve your, your preference model. So it's really having like an open sort of ecosystem benefits and accelerates the development of open models.[00:07:23] The Definition of Open Models[00:07:23] Luca Soldaini: One thing that we got in 2024, which is not a specific model, but I thought it was really significant, is we first got we got our first open source AI definition. So this is from the open source initiative they've been generally the steward of a lot of the open source licenses when it comes to software and so they embarked on this journey in trying to figure out, okay, How does a license, an open source license for a model look like?[00:07:52] Luca Soldaini: Majority of the work is very dry because licenses are dry. So I'm not going to walk through the license step by [00:08:00] step, but I'm just going to pick out one aspect that is very good and then one aspect that personally feels like it needs improvement on the good side. This this open source AI license actually.[00:08:13] Luca Soldaini: This is very intuitive. If you ever build open source software and you have some expectation around like what open source looks like for software for, for AI, sort of matches your intuition. So, the weights need to be fairly available the code must be released with an open source license and there shouldn't be like license clauses that block specific use cases.[00:08:39] Luca Soldaini: So. Under this definition, for example, LLAMA or some of the QUEN models are not open source because the license says you can't use this model for this or it says if you use this model you have to name the output this way or derivative needs to be named that way. 
Those clauses don't meet open source [00:09:00] definition and so they will not be covered.[00:09:02] Luca Soldaini: The LLAMA license will not be covered under the open source definition. It's not perfect. One of the thing that, um, internally, you know, in discussion with with OSI, we were sort of disappointed is around the language. For data. So you might imagine that an open source AI model means a model where the data is freely available.[00:09:26] Luca Soldaini: There were discussion around that, but at the end of the day, they decided to go with a softened stance where they say a model is open source if you provide sufficient detail information. On how to sort of replicate the data pipeline. So you have an equivalent system, sufficient, sufficiently detailed.[00:09:46] Luca Soldaini: It's very, it's very fuzzy. Don't like that. An equivalent system is also very fuzzy. And this doesn't take into account the accessibility of the process, right? It might be that you provide enough [00:10:00] information, but this process costs, I don't know, 10 million to do. Now the open source definition. Like, any open source license has never been about accessibility, so that's never a factor in open source software, how accessible software is.[00:10:14] Luca Soldaini: I can make a piece of open source, put it on my hard drive, and never access it. That software is still open source, the fact that it's not widely distributed doesn't change the license, but practically there are expectations of like, what we want good open sources to be. So, it's, It's kind of sad to see that the data component in this license is not as, as, Open as some of us would like would like it to be.[00:10:40] Challenges for Open Models[00:10:40] Luca Soldaini: and I linked a blog post that Nathan wrote on the topic that it's less rambly and easier to follow through. 
One thing that in general, I think it's fair to say about the state of open models in 2024 is that we know a lot more than what we knew in [00:11:00] 2023. Both on the pre training data you curate and on how to do all the post training, especially on the RL side.[00:11:10] Luca Soldaini: You know, 2023 was a lot of throwing random darts at the board. In 2024, we have clear recipes that, okay, don't get the same results as a closed lab, because there is a cost in actually matching what they do, but at least we have a good sense of, okay, this is the path to get a state of the art language model.[00:11:31] Luca Soldaini: I think one thing that is a downside of 2024 is that we are more resource constrained than in 2023. It feels that the barrier for compute that you need to move innovation along has just been rising and rising. So if you go back to this slide, there is now this cluster of models that are sort of released by the [00:11:57] Luca Soldaini: compute rich club. Membership is [00:12:00] hotly debated. You know, some people don't want to be called rich, because it comes with expectations; some people want to be called rich. I don't know, there's debate, but these are players that have, you know, 10,000, 50,000 GPUs at minimum. And so they can do a lot of work and a lot of exploration in improving models that is not very accessible.[00:12:21] Luca Soldaini: To give you a sense of how I personally think about the research budget for each part of the language model pipeline: on the pre training side, you can maybe do something with a thousand GPUs; really you want 10,000. And if you want real state of the art, you know, your DeepSeek minimum is like 50,000, and you can scale to infinity.[00:12:44] Luca Soldaini: The more you have, the better it gets.
Everyone on that side still complains that they don't have enough GPUs. Post training is a super wide sort of spectrum. You can do as little as eight GPUs; as long as you're able to [00:13:00] run, you know, a good version of, say, a Llama model, you can do a lot of work there.[00:13:05] Luca Soldaini: You can scale a lot of the methodology; it just scales with compute, right? If you're interested in, you know, your open replication of what OpenAI's o1 is, you're going to be on the 10K end of the GPU spectrum. Inference, you can do a lot with very few resources. Evaluation, you can do a lot with, well, I should say, at least one GPU if you want to evaluate open models.[00:13:30] Luca Soldaini: But in general, if you care a lot about interventions to do on these models, which is my preferred area of research, then, you know, the resources that you need are quite significant. Yeah. One other trend that has emerged in 2024 is this cluster of fully open models.[00:13:54] Luca Soldaini: So Olmo, the model that we built at AI2, being one of them. And, you know, it's nice [00:14:00] that it's not just us; there's a cluster of other mostly research efforts who are working on this. And so it's good to give you a primer of what fully open means. The easy way to think about it is: instead of just releasing a model checkpoint that you run, you release a full recipe, so that other people working in that space can pick and choose whatever they want from your recipe and create their own model or improve on top of your model. You're giving out the full pipeline and all the details there instead of just the end output. So I pulled up a screenshot from our recent MoE model.[00:14:43] Luca Soldaini: And for this model, for example, we released the model itself.
The data it was trained on, the code, both for training and inference, all the logs that we got through the training run, as well as every intermediate checkpoint. And the fact that you release different parts of the pipeline [00:15:00] allows others to do really cool things.[00:15:02] Luca Soldaini: So for example, this tweet from early this year from folks at Nous Research: they used our pre training data to do a replication of the BitNet paper in the open. So they took just the initial part of our pipeline and then built their thing on top of it. It goes both ways.[00:15:21] Luca Soldaini: For example, for the Olmo 2 model, a lot of our pre training data for the first stage of pre training was from this DCLM initiative that was led by folks at a variety of institutions. It was a really nice group effort. And, you know, it was nice to be able to say, okay, the state of the art in terms of what is done in the open has improved.[00:15:46] AI2 Models - Olmo, Molmo, Pixmo etc[00:15:46] Luca Soldaini: We don't have to do all this work from scratch to catch up to the state of the art. We can just take it directly and integrate it and do our own improvements on top of that. I'm going to spend a few minutes doing a [00:16:00] shameless plug for some of our fully open recipes, so indulge me in this.[00:16:05] Luca Soldaini: A few things that we released this year: as I was mentioning, there's the OLMoE model, which I think still is the state of the art MoE model in its size class. And it's also fully open, so every component of this model is available. We released a multimodal model called Molmo. Molmo is not just a model, but a full recipe of how you go from a text-only model to a multimodal model, and we applied this recipe on top of Qwen checkpoints, on top of Olmo checkpoints, as well as on top of OLMoE.[00:16:37] Luca Soldaini: And I think there's been a replication doing that on top of Mistral as well.
On the post training side, we recently released Tulu 3. Same story: this is a recipe on how you go from a base model to a state of the art post trained model. We used the Tulu recipe on top of Olmo, on top of Llama, and then there's been an open replication effort [00:17:00] to do that on top of Qwen as well.[00:17:02] Luca Soldaini: It's really nice to see, you know, when your recipe is sort of turnkey: you can apply it to different models and it kind of just works. And finally, the last thing we released this year was Olmo 2, which so far is the best state of the art fully open language model. It sort of combines aspects from all three of these previous models: what we learned on the data side from OLMoE, and what we learned about making models that are easy to adapt from the Molmo project and the Tulu project. I will close with a little bit of reflection on the ways this ecosystem of open models is not all roses, not all happy. It feels like, day to day, it's always in peril.[00:17:44] Luca Soldaini: And, you know, I talked a little bit about the compute issues that come with it, but it's really not just compute. One thing that is on top of my mind is that, due to growing feelings about how AI is treated, [00:18:00] it's actually harder to get access to a lot of the data that was used to train a lot of the models up to last year.[00:18:06] Luca Soldaini: This is a screenshot from really fabulous work from Shane Longpre, who I think is in Europe, about diminishing access to data for language model pre training. What they did is they went through every snapshot of Common Crawl. Common Crawl is this publicly available scrape of a subset of the internet.[00:18:29] Luca Soldaini: And they looked at, for any given website, whether a website that was accessible in, say, 2017 was still accessible or not in 2024.
And what they found is that, as a reaction to the existence of closed models like OpenAI's ChatGPT or Claude, a lot of content owners have blanket-blocked any type of crawling of their website.[00:18:57] Luca Soldaini: And this is something that we see also internally at [00:19:00] AI2. One project that we started this year is we wanted to understand: if you're a good citizen of the internet and you crawl following sort of the norms and policies that have been established in the last 25 years, what can you crawl?[00:19:17] Luca Soldaini: And we found that there are a lot of websites where the norms of how you express a preference of whether to crawl your data or not are broken. A lot of people would block a lot of crawling, but do not advertise that in robots.txt. You can only tell that they're blocking you from crawling when you try doing it.[00:19:37] Luca Soldaini: Sometimes you can't even crawl the robots.txt to check whether you're allowed or not. And then for a lot of websites, there are all these technologies that historically have existed to make website serving easier, such as Cloudflare or DNS. They're now being repurposed for blocking AI or any type of crawling [00:20:00] in a way that is very opaque to the content owners themselves.[00:20:04] Luca Soldaini: So, you know, you go to these websites, you try to access them and they're not available, and you get a feeling like, oh, something changed on the DNS side that is blocking this, and likely the content owner has no idea. They're just using Cloudflare for better, you know, load balancing.[00:20:25] Luca Soldaini: And this is something that was sort of sprung on them with very little notice. And I think the problem is this blocking really impacts people in different ways.
It disproportionately helps companies that have a headstart, which are usually the closed labs, and it hurts incoming newcomer players, who either have to do things in a sketchy way or are never going to get the content that the closed labs might have.[00:20:54] Luca Soldaini: So there was a lot of coverage. I'm going to plug Nathan's blog post again, [00:21:00] and I think the title of this one is very succinct: before thinking about running out of training data, we're actually running out of open training data. And so if we want better open models, that should be on top of our mind.[00:21:13] Regulation and Lobbying[00:21:13] Luca Soldaini: The other thing that has emerged is that there are strong lobbying efforts on trying to define any kind of AI as new and extremely risky. And I want to be precise here: the problem is not considering the risks of this technology. Every technology has risks that should always be considered.[00:21:37] Luca Soldaini: The thing that, to me, is disingenuous is just putting this AI on a pedestal and calling it an unknown alien technology that has new and undiscovered potential to destroy humanity. When in reality, all the dangers I think are rooted in [00:22:00] dangers that we know from the existing software industry, or existing issues that come with using software in a lot of sensitive domains, like medical areas.[00:22:13] Luca Soldaini: And I also noticed a lot of efforts that have actually been going on in trying to make these open models safe. I pasted one here from AI2, but there's actually a lot of work that has been going on on, okay, if you're distributing this model openly, how do you make it safe?[00:22:31] Luca Soldaini: What's the right balance between accessibility of open models and safety?
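The robots.txt norms Luca describes can be checked mechanically with Python's standard library alone. A small sketch with a made-up robots.txt and hypothetical crawler names, parsed locally so no network request is involved:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: one AI crawler is blocked outright,
# everyone else is only kept out of /private/.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())  # parse locally, no network fetch

print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("GenericBot", "https://example.com/article"))    # True
print(parser.can_fetch("GenericBot", "https://example.com/private/x"))  # False
```

As the talk points out, though, a permissive robots.txt is no guarantee in practice: many sites now enforce blocking at the CDN or DNS layer without advertising it here, so `can_fetch` returning `True` only reflects the stated policy, not what the server will actually serve.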
And then there's also the annoying brushing under the rug of concerns that are then proved to be unfounded. You know, if you remember the beginning of this year, it was all about the bio risk of these open models.[00:22:48] Luca Soldaini: The whole thing fizzled because finally there's been rigorous research, not just this paper from the Cohere folks, but rigorous research showing [00:23:00] that this is really not a concern that we should be worried about. Again, there is a lot of dangerous use of AI applications, but this one was just a lobbying ploy to make things sound scarier than they actually are.[00:23:15] Luca Soldaini: So I've got to preface this part by saying this is my personal opinion, not my employer's, but I look at things like SB 1047 from California, and I think we kind of dodged a bullet on this legislation. You know, the open source community, a lot of the community, came together at sort of the last minute and did a very good effort trying to explain all the negative impacts of this bill.[00:23:43] Luca Soldaini: There's a lot of excitement about building these open models or researching these open models. And lobbying is not sexy, it's kind of boring, but it's sort of necessary to make sure that this ecosystem can really [00:24:00] thrive. This ends the presentation. I have some links and emails, sort of the standard thing, in case anyone wants to reach out, and if folks have questions or anything they wanted to discuss.[00:24:13] Luca Soldaini: Is there an open floor? I think we have Sophia[00:24:16] swyx: who wants to, well, one very important open model that we haven't covered is Mistral. It's nice to have the Mistral person recap the year in Mistral.
But while Sophia gets set up, does anyone have thoughts or questions about the progress in this space?[00:24:32] Questions - Incentive Alignment[00:24:32] swyx: Do you all have questions?[00:24:34] Question: I'm very curious how we should build incentives to build open models, things like François Chollet's ARC Prize, and other initiatives like that. What is your opinion on how we should better align incentives in the community so that open models stay open?[00:24:49] Luca Soldaini: The incentive bit is, like, really hard.[00:24:51] Luca Soldaini: It's something that we actually think a lot about internally, because building open models is risky. [00:25:00] It's very expensive, and so people don't want to take risky bets. I think challenges like those are very valid approaches for it.[00:25:13] Luca Soldaini: And then I think in general, for any kind of effort to participate in those challenges, if we can promote doing that on top of open models and really lean into this multiplier effect, that is a good way to go. It would help if there were more money for efforts like research efforts around open models. There's a lot of investment in companies that at the moment are releasing their models in the open, which is really cool, but it's usually more because of commercial interest than wanting to support open models in the long term. It's a really hard problem, because I think everyone is operating [00:26:00] at their local maximum, right? In ways that really optimize their position on the market. The global maximum is harder to achieve.[00:26:11] Question2: Can I ask one question?
No.[00:26:12] Luca Soldaini: Yeah.[00:26:13] Question2: So I think one of the gaps between the closed and open source models is multilinguality. The closed source models like ChatGPT work pretty well on low-resource languages, which is not the case for the open source models, right?[00:26:27] Question2: So is it in your plan to improve on that?[00:26:32] Luca Soldaini: I think in general,[00:26:32] Luca Soldaini: yes. I think we'll see a lot of improvements there in, like, 2025. There are groups on the smaller side that are already working on, like, better crawl support, multilingual support. I think what I'm trying to say here is you really want experts who are actually in those countries, who speak those languages, to [00:27:00] participate in the international community. To give you, like, a very easy example, I'm originally from Italy. I think I'm terribly equipped to build a model that works well in Italian, because one of the things you need to be able to do is have that knowledge of, okay, how do I access, you know, libraries or content that is from this region, that covers this language.[00:27:23] Luca Soldaini: I've been in the US long enough that I no longer know. So, I think that's the effort that folks in Central Europe, for example, are doing: okay, let's tap into regional communities to get access, you know, to bring in collaborators from those areas. I think that's going to be very crucial for making progress there.[00:27:46] Mistral intro[00:27:46] Sophia Yang: Hi everyone. Yeah, I'm super excited to be here to talk to you guys about Mistral. A really short and quick recap of what we have done, what kind of models and products we have released in the [00:28:00] past year and a half.
So most of you may have already known that we are a small startup founded about a year and a half ago in Paris, in May 2023, by our three co-founders, and in September 2023 we released our first open source model, Mistral 7B. Yeah, how many of you have used or heard about Mistral 7B?[00:28:24] Sophia Yang: Hey, pretty much everyone. Thank you. Yeah, it's pretty popular, and our community really loved this model. And in December 2023, we released another popular model with the MoE architecture, Mixtral 8x7B. And going into this year, you can see we have released a lot of things.[00:28:46] Sophia Yang: First of all, in February 2024, we released Mistral Small, Mistral Large, and Le Chat, which is our chat interface; I will show you in a little bit. We released an embedding model for, you [00:29:00] know, converting your text into embedding vectors, and all of our models are available on the big cloud providers. So you can use our models on Google Cloud, AWS, Azure, Snowflake, IBM.[00:29:16] Sophia Yang: So very useful for enterprises who want to use our models through the cloud. And in April and May this year, we released another powerful open source MoE model, Mixtral 8x22B. And we also released our first code model, Codestral, which is amazing at 80-plus languages. And then we provided a fine-tuning service for customization.[00:29:41] Sophia Yang: Because we know the community loves to fine-tune our models, we provide you a very nice and easy option to fine-tune our models on our platform. And we also released our fine-tuning code base called mistral-finetune. It's open source, so feel free to take a look.[00:29:58] Sophia Yang: More models. [00:30:00] From July to November this year, we released many, many other models. First of all, the two new best small models.
We have Ministral 3B, great for deploying on edge devices. We have Ministral 8B; if you used to use Mistral 7B, Ministral 8B is a great replacement with much stronger performance.[00:30:25] Sophia Yang: We also collaborated with NVIDIA and open sourced another model, Mistral NeMo 12B, another great model. And just a few weeks ago, we updated Mistral Large to version 2, with updated state of the art features and really great function calling capabilities; it supports function calling natively.[00:30:45] Sophia Yang: And we released two multimodal models: Pixtral 12B, which is open source, and Pixtral Large. These are amazing models not just for understanding images, but also great at text understanding. [00:31:00] A lot of image models are not so good at textual understanding, but Pixtral Large and Pixtral 12B are good at both image understanding and textual understanding.[00:31:09] Sophia Yang: And of course, we have models for research: Codestral Mamba, built on the Mamba architecture, and Mathstral, great for working with math problems. So yeah, that's another model.[00:31:29] Sophia Yang: Here's another view of our model lineup. We have several premier models, which means these models are mostly available through our API. I mean, all of the models are available through our API, except for Ministral 3B. But the premier models have a special license, the Mistral research license: you can use them for free for exploration, but if you want to use them for enterprise or production use, you will need to purchase a license [00:32:00] from us.[00:32:00] Sophia Yang: So on the top row here, we have Ministral 3B and 8B as our premier models, Mistral Small for the best low-latency use cases, and Mistral Large for your most sophisticated use cases. Pixtral Large is the frontier-class multimodal model.
And we have Codestral, great for coding, and then again the Mistral embedding model.[00:32:22] Sophia Yang: At the bottom of the slide here, we have several Apache 2.0 licensed open-weight models, free for the community to use; and if you want to fine-tune them, use them for customization or production, feel free to do so. The latest, we have Pixtral 12B. We also have Mistral NeMo, Codestral Mamba and Mathstral, as I mentioned, and we have three legacy models that we don't update anymore.[00:32:49] Sophia Yang: So we recommend you move to our newer models if you are still using them. And then, just a few weeks ago, [00:33:00] we made a lot of improvements to our chat interface, Le Chat. How many of you have used Le Chat? Oh, no, only a few. Okay. I highly recommend Le Chat. It's chat.mistral.ai. It's free to use.[00:33:16] Sophia Yang: It has all the amazing capabilities I'm going to show you right now. But before that, "le chat" in French means "the cat," so this is actually a cat logo. You can tell these are the cat's eyes. Yeah. So first of all, I want to show you something. Maybe let's take a look at image understanding.[00:33:36] Sophia Yang: So here I have a receipt and I want to ask, just going to get the prompt. Cool. So basically I have a receipt and I said, I ordered, I don't know, coffee and the sausage. How much do I owe? Add an 18 percent tip. So hopefully it was able to get the cost of the coffee and the [00:34:00] sausage and ignore the other things.[00:34:03] Sophia Yang: And yeah, I don't really understand this, but I think this is the coffee: it's 9.8. And then the cost of the sausage, we have 22 here. And then it was able to add the costs, calculate the tip, and all that. Great. So it's great at image understanding, it's great at OCR tasks. If you have OCR tasks, please use it.[00:34:28] Sophia Yang: It's free on the chat. It's also available through our API. And I also want to show you a Canvas example.
A lot of you may have used Canvas with other tools before, but with Le Chat, it's completely free. Again, here I'm asking it to create a canvas that uses PyScript to execute Python in my browser.[00:34:51] Sophia Yang: Let's see if it works. Import this. Okay, so, yeah, basically it's executing [00:35:00] Python here. Exactly what we wanted. And the other day, I was trying to ask Le Chat to create a game for me. Let's see if we can make it work. Yeah, the Tetris game. Yep. Let's just get one row. Maybe. Oh no. Okay. All right. You get the idea. I failed my mission. Okay. Here we go. Yay! Cool. Yeah. So as you can see, Le Chat can write the code for a simple game pretty easily. And you can ask Le Chat to explain the code or make updates however you like. Another example: there is a bar here I want to move.[00:35:48] Sophia Yang: Okay, great, okay. And let's go back to another one. Yeah, we also have web search capabilities. Like, you can [00:36:00] ask what's the latest AI news. Image generation is pretty cool. Generate an image about researchers. Okay. In Vancouver? Yeah, it's Black Forest Labs' Flux Pro. Again, this is free, so... Oh, cool.[00:36:19] Sophia Yang: I guess researchers here are mostly from the University of British Columbia. That's smart. Yeah. So this is Le Chat. Please feel free to use it, and let me know if you have any feedback. We're always looking for improvement and we're going to release a lot more powerful features in the coming years.[00:36:37] Sophia Yang: Thank you. Get full access to Latent Space at www.latent.space/subscribe
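Sophia notes several times that the models in the lineup are available through Mistral's API. As a rough sketch of what such a call looks like, here is a request builder using only the standard library; the endpoint path and the model name follow Mistral's public chat-completions convention but should be treated as assumptions to verify against the current docs, and `YOUR_KEY` is a placeholder:

```python
import json
import urllib.request

# Assumed endpoint, following Mistral's public chat API shape.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt, model="mistral-small-latest", api_key="YOUR_KEY"):
    """Build (but do not send) a chat-completion HTTP request."""
    payload = {
        "model": model,  # assumed model alias; check the docs
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_chat_request("Name three Mistral models.")
# Actually sending it requires a real key: urllib.request.urlopen(req)
```

Separating payload construction from the network call makes the request easy to inspect and test without an API key.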

The co-lab career stories
Emily Li Mandri - Founder & Designer of MLE

The co-lab career stories

Play Episode Listen Later Dec 17, 2024 12:07


Emily Li Mandri is the founder & designer of MLE, a consciously created women's accessories brand based in upstate NY. After spending over 15 years in fashion and digital marketing with a focus on innovative design and emerging brand growth, Emily decided it was time to launch her eponymous label, MLE. She now brings the first-hand experience she gained in both industries to MLE, with a focus on sustainability and quality. In this episode, Alexis Carey interviews Emily, an accessories designer based in New York. Emily discusses her background, including her education at Johns Hopkins, and her journey from creating silkscreen t-shirts in college to founding her successful accessories line MLE. She shares insights into the challenges and rewards of being an entrepreneur, her career evolution, and future plans for expanding her business.

L’Heure du Monde
The mysterious disappearance of Fingers

L’Heure du Monde

Play Episode Listen Later Dec 16, 2024 15:25


So where have the Fingers gone? For several months now, these very sweet chocolate biscuits, sold by the Mondelez group, have vanished from French shelves without the slightest explanation. Like the Figolu in 2015, other biscuits have disappeared before, only to reappear after consumers mobilized. This time, the story seems to be different. So how can such a well-known product disappear without explanation? Could it be a publicity stunt? And what does it say about our relationship with food when our favorite products have become massively industrialized? In this episode of the podcast "L'Heure du Monde," journalist Coline Clavaud-Mégevand looks back at the investigation she conducted on the subject for "M Le magazine du Monde." An episode produced and presented by Adèle Ponticelli with help from Marion Bothorel. Production: Quentin Bresson. Music: Amandine Robillard. Episode published on December 16, 2024. Hosted by Audion. Visit https://www.audion.fm/fr/privacy-policy for more information.

L’Heure du Monde
Algerian boxer Imane Khelif, a queer icon in spite of herself

L’Heure du Monde

Play Episode Listen Later Nov 25, 2024 21:17


On August 1, 2024, Imane Khelif stepped out of the shadows and into the spotlight. The 25-year-old Algerian boxer entered the competition at the Paris Olympic Games in the under-66 kg category. But that day, it was not so much her performance that people remembered as the tearful withdrawal of her opponent, the Italian Angela Carini, who deemed her defeat "unfair." That was all it took to revive a rumor that has clung to Imane Khelif since the 2023 world boxing championships in New Delhi: that she is not really a woman. In the minutes following the bout, numerous public figures seized on it, from the president of the Italian Council, Giorgia Meloni, to British author J. K. Rowling, to Donald Trump, then in the middle of his presidential campaign. Except that this rumor rests on "femininity" tests carried out by a controversial boxing federation close to the Kremlin, the IBA, whose results were never published. Imane Khelif was born a woman, has always considered herself one, and has always fought in women's categories. Since the controversy, Imane Khelif has become, in spite of herself, a paradoxical icon: a symbol of the right to be different in the West even though she claims no difference, a gender-fluid muse for fashion designers, a source of national pride in Algeria, and a target for anti-trans activists. How does Imane Khelif live with this ambiguous and burdensome new fame? How did she become the catalyst for every passion and fantasy around gender? Find out in this episode of the podcast "L'Heure du Monde," with Gaspard Dhellemmes, who wrote her profile for "M Le magazine du Monde." An episode by Adélaïde Tenaglia. Presentation and editing: Jean-Guillaume Santi. Production: Florentin Baume. Music: Amandine Robillard and Epidemic Sounds.
This episode was published on November 25, 2024. Subscribe to Le Monde's WhatsApp channel: https://lemde.fr/4eMPTJd

Revue de presse française
Front page: France braces for a winter of strikes

Revue de presse française

Play Episode Listen Later Nov 17, 2024 4:54


Winter is coming. Winter is arriving in France, and the country is heading "toward a Christmas under strain," headlines La Tribune Dimanche. The specter of strikes is resurfacing: among farmers, against the free-trade agreement, still under negotiation, between the European Union and the Mercosur countries; among civil servants, against the government's plan to go from one to three unpaid waiting days for sick leave, as in the private sector; among railway workers, against the disappearance of Fret SNCF, scheduled for January 1, 2025. "Are we going to relive," La Tribune Dimanche wonders, "an end of year like 2022, with France paralyzed, trains canceled and thousands of travelers unable to reach their loved ones for the holidays?" In the weekly, the chairman of the SNCF group, Jean-Pierre Farrandou, appeals "to the railway workers' sense of responsibility," at a moment "when France is in a complicated economic situation." An assessment also made by Marianne, which sees "the specter of mass unemployment" reappearing. The magazine counts 183 business failures per day, with "national champions" such as Auchan and Michelin laying people off. And with a deficit at 6% of GDP, "the government finds itself unable," according to Marianne, "to intervene massively, as in the past, to plug the gap with public money." Also read: The EU-Mercosur agreement is "no longer at all in step with the ecological imperatives of the era." Donald Trump, from Queens to the White House. He cast a further chill after his victory: Donald Trump, back in power in the United States. Since many analyses have already been written to explain this victory, the Americans' vote, the Democrats' defeat, the countless consequences of Donald Trump's return in the United States and around the world... why not go back, quite simply, to the origin of it all?
Paris Match retraces the career of "this kid, born in Queens" in New York, just after the Second World War, in 1946, "who dreamed of glory while gazing at the tall towers of Manhattan in the distance." Donald Trump "at one point toyed with the idea of studying film," but was allegedly prevented from doing so by his father, Fred, whom he was nonetheless unafraid to talk back to. His father "made his fortune building low-income housing in Brooklyn," Paris Match recalls, but Donald Trump "aims much higher." "He wants to be part of the high society that lives in a closed circle and looks down on him." So he becomes a real estate magnate, before flirting with bankruptcy and then finally reaching "the heights of celebrity" by appearing in the reality TV show "The Apprentice." The rest is well known: elected president of the United States in 2016, defeated in 2020, followed by the storming of the Capitol, then his recent victory at the end of an election campaign in which he knew how to "play against the system," "attack constantly," "never admit his faults," "lie." All methods learned, Paris Match recounts, alongside a sulfurous lawyer, Roy Cohn, whom he met as a young man upon entering a private Manhattan club. Also read: Middle East: Will Donald Trump give Benyamin Netanyahu carte blanche? Donald Trump, Vladimir Putin and Ukraine. Back in the White House, "he knows what awaits him," "he is prepared," Le Nouvel Obs asserts. Everything is compiled, according to the weekly, in the more than 900 pages of "Project 2025," a "roadmap prepared by a hundred or so conservative think tanks." On the agenda, then, according to Le Nouvel Obs: "dismantling the administrative state, defending sovereignty and borders, putting the family back at the center of American life and guaranteeing individual rights to live freely."
Added to this is the stated desire to end the war in Ukraine, and on this point, "Donald Trump will be better than you think": that is what Boris Johnson wants to believe. In L'Express, the former British prime minister asks: "Will Donald Trump, with all his ego, all his pride, his determination to make America great again, let Russia humiliate his country? Will he inaugurate his term by letting Vladimir Putin restore the greatness of the Soviet empire?" "I don't think so," Boris Johnson answers. Yet Le Point looks at how the Russian president "will try to exploit Donald Trump's return to the White House to extend his global influence." Washington is working on a peace deal that could notably "validate the Russian conquests, that is, 20% of Ukraine's territory," and bar Kyiv from joining NATO for 20 years. "One obstacle remains," Le Point adds: "Vladimir Putin's demands," which go "well beyond that." Also read: After Donald Trump's election, are American women's reproductive rights in danger? Already 100 days since the Paris Olympics. Donald Trump, incidentally, attacked a very different personality during his campaign: the Algerian boxer Imane Khelif. At the heart of a controversy this summer during the Paris Games, accused of not really being a woman, the Olympic champion is on the cover of M Le magazine du Monde. The weekly looks back at the "harassment" she has endured since she was "very little," which did not stop Imane Khelif "from becoming a national idol in Algeria," and a fashion icon. The Olympic and Paralympic Games in Paris were already 100 days ago. The magazine L'Équipe has therefore chosen to mark this reverse countdown and revisit the good memories, with the Phryges, those "mascots that sent Footix back to the locker room," the magazine notes.
With the pole vault world record holder, Sweden's Armand Duplantis, gently coming down from his 6.26 m. And with this article on the good losers: those who finished just off the podium, in the "sucker's spot." They too were received by the president in Italy, and saluted in Belgium by the Olympic Committee. "Fourth-place finishers gained increased visibility during the last Games," L'Équipe notes, explaining that "commiseration is tending to give way to dedramatization, an approach characteristic of a generation of athletes attentive to their well-being." For some athletes, it remains hard to know whether to laugh or cry about it. But when it comes to the end of the Games, a small tear of nostalgia is never far away. It was summer, and even the railway workers had decided to observe the Olympic truce. Also read: Rugby: France claims a third straight win over New Zealand.

Let's Talk AI
#186 - Adobe AI Tools, Tesla's Cybercab, Nobel Prizes

Let's Talk AI

Play Episode Listen Later Oct 20, 2024 93:54 Transcription Available


Our 186th episode with a summary and discussion of last week's big AI news! With host Andrey Kurenkov and guest host Jon Krohn from the SuperDataScience Podcast. Check out Jon's upcoming agent-focused event here - AI Catalyst: Agentic Artificial Intelligence. Read our text newsletter and comment on the podcast at https://lastweekin.ai/. If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form. Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Timestamps + Links:
(00:00:00) Intro / Banter
(00:04:14) News Preview
(00:05:28) Response to listener comments / corrections
Tools & Apps
(00:07:10) Adobe's AI video model is here, and it's already inside Premiere Pro
(00:11:52) Adobe teases AI tools that build 3D scenes, animate text, and make distractions disappear
(00:15:43) Adobe's Project Super Sonic uses AI to generate sound effects for your videos
(00:17:05) YouTube expands AI audio generation tool to all U.S. creators
(00:20:29) All Gemini users can now generate images with Imagen 3
(00:22:27) Meta AI will launch in six more countries today, including the UK
(00:24:27) OpenAI Unveils Secret Meta Prompt—And It's Very Different From Anthropic's Approach
Applications & Business
(00:27:46) Tesla's big ‘We, Robot' event criticized for ‘parlor tricks' and vague timelines for robots, Cybercab, Robovan
(00:37:25) OpenAI announces content deal with Hearst, including content from Cosmopolitan, Esquire and the San Francisco Chronicle
Projects & Open Source
(00:47:59) OpenR: An Open-Source AI Framework Enhancing Reasoning in Large Language Models
(00:49:54) MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering
(00:56:29) OpenAI Releases Swarm: An Experimental AI Framework for Building, Orchestrating, and Deploying Multi-Agent Systems
Research & Advancements
(00:59:23) Nobel Physics Prize Awarded for Pioneering A.I. Research by 2 Scientists
(01:05:22) Nobel Prize in Chemistry Goes to 3 Scientists for Predicting and Creating Proteins
(01:09:09) LLMs can't perform “genuine logical reasoning,” Apple researchers suggest
(01:13:05) GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Policy & Safety
(01:14:34) Anthropic CEO goes full techno-optimist in 15,000-word paean to AI
(01:23:04) Google will help build seven nuclear reactors to power its AI systems
(01:24:11) LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations
Synthetic Media & Art
(01:26:26) Adobe Pushes Content Authenticity Forward With a Free Web App Designed for Creators
(01:29:13) Outro

Kaatscast
Catskill Couture: MLE's Sustainable Fashion

Kaatscast

Play Episode Listen Later Aug 27, 2024 20:06


In this episode of Kaatscast, we explore the journey of Emily Li Mandri, founder of the women's accessories brand MLE, based in Saugerties, New York. Emily shares insights into the challenges and rewards of running a fashion brand in Upstate New York, her commitment to eco-conscious materials and sustainable fashion, and the influence of her family's background in apparel. We also hear from her assistant, New Paltz theater grad Kiana Duggan-Haas, about the importance of sustainability in the fashion industry. Tune in for an inspiring discussion on ethical fashion practices, local craftsmanship, and a life/work balance in the Catskills. --- Thanks to this week's sponsors: Briars & Brambles Books, Hanford Mills Museum, and The Mountain Eagle. Kaatscast is made possible through a grant from the Nicholas J. Juried Family Foundation, and through the support of listeners like you! --- 00:00 Introduction to MLE 01:40 Meet the Founder: Emily Li Mandri 03:20 Sustainability in Fashion 05:58 Challenges and Innovations in Sustainable Fashion 12:51 Living and Working in the Catskills 14:44 Building a Local and National Brand 17:42 Conclusion and Final Thoughts

Speaking Of Reliability: Friends Discussing Reliability Engineering Topics | Warranty | Plant Maintenance

What is MLE? Abstract Chris and Fred discuss what the three-letter acronym ‘MLE' stands for. Well, it stands for ‘maximum likelihood estimate.' Ever heard of it? Do you know what it means? Key Points Join Chris and Fred as they discuss what the MLE or ‘maximum likelihood estimate' means … usually when using software to conduct […] The post SOR 991 What is MLE? appeared first on Accendo Reliability.
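For context on the episode topic: a maximum likelihood estimate picks the parameter value under which the observed data would have been most probable. A minimal sketch (not from the episode; the failure-time data below are made up) for an exponential lifetime model, where the MLE of the failure rate has a simple closed form:

```python
# Hypothetical times-to-failure in hours (illustrative data only).
failures = [120.0, 340.0, 95.0, 410.0, 230.0]

# Exponential lifetime model: f(t) = lam * exp(-lam * t).
# Log-likelihood: L(lam) = n*log(lam) - lam * sum(t_i).
# Setting dL/dlam = n/lam - sum(t_i) = 0 gives the closed-form MLE:
lam_hat = len(failures) / sum(failures)

# Equivalently, the MLE of the mean time to failure is the sample mean:
mttf_hat = sum(failures) / len(failures)

print(f"MLE of failure rate: {lam_hat:.5f} per hour")
print(f"MLE of mean time to failure: {mttf_hat:.1f} hours")
```

For distributions without a closed-form solution (such as the Weibull models common in reliability work), software maximizes the log-likelihood numerically, which is the "using software" part the episode alludes to.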

Monocle 24: The Stack
‘Italy Segreta', ‘M Le magazine du Monde', ‘BSKT' and the Olympic Broadcasting Services

Monocle 24: The Stack

Play Episode Listen Later Aug 10, 2024 47:23


We speak with the founder of ‘Italy Segreta', a title about all things Italy. Plus: Marie-Pierre Lannelongue from ‘M Le magazine du Monde'; ‘BSKT', which is all about basketball culture; and Yiannis Exarchos, the CEO of the Olympic Broadcasting Services.See omnystudio.com/listener for privacy information.

MLOps.community
AI in Healthcare // Eric Landry // #249

MLOps.community

Play Episode Listen Later Jul 19, 2024 51:05


Eric Landry is a seasoned AI and Machine Learning leader with extensive expertise in software engineering and practical applications in NLP, document classification, and conversational AI. With technical proficiency in Java, Python, and key ML tools, he led the Expedia Machine Learning Engineering Guild and has spoken at major conferences such as Applied Intelligence 2023 and KDD 2020. AI in Healthcare // MLOps Podcast #249 with Eric Landry, CTO/CAIO @ Zeteo Health. // Abstract Eric Landry discusses the integration of AI in healthcare, highlighting use cases like patient engagement through chatbots and managing medical data. He addresses benchmarking and limiting hallucinations in LLMs, emphasizing privacy concerns and data localization. Landry maintains a hands-on approach to developing AI solutions and navigating the complexities of healthcare innovation. Despite necessary constraints, he underscores the potential for AI to proactively engage patients and improve health outcomes. // Bio Eric Landry is a technology veteran with 25+ years of experience in the healthcare, travel, and computer industries, specializing in machine learning engineering and AI-based solutions. He holds a Master's in software engineering (NLP thesis topic) from the University of Texas at Austin (2005). He has showcased his expertise and leadership in the field with three US patents, published articles on machine learning engineering, and speaking engagements at the 2023 Applied Intelligence Live, the 2020 KDD conference, and Data Science Salon 2024, and he is a former leader of Expedia's MLE guild. Formerly, Eric was the director of AI Engineering and Conversation Platform at Babylon Health and Expedia. Currently CTO/CAIO at Zeteo Health.
// MLOps Jobs board https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch https://mlops-community.myshopify.com/
// Related Links
Website: https://www.zeteo.health/
Building Threat Detection Systems: An MLE's Perspective // Jeremy Jordan // MLOps Podcast #134: https://youtu.be/13nOmMJuiAo
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Eric on LinkedIn: https://www.linkedin.com/in/jeric-landry/
Timestamps:
[00:00] Eric's preferred coffee
[00:16] Takeaways
[01:16] Please like, share, leave a review, and subscribe to our MLOps channels!
[01:32] ML and AI in 2005
[04:43] Last job at Babylon Health
[10:57] Data access solutions
[14:35] Prioritize AI ML Team Success
[16:39] Eric's current work
[20:36] Engage in holistic help
[22:13] High-stakes chatbots
[27:30] Navigating Communication Across Diverse Communities
[31:49] When Bots Go Wrong
[34:15] Health care challenges ahead
[36:05] Behavioral health tech challenges
[39:45] Stress from Apps Notifications
[41:11] Combining different guardrails tools
[47:16] Navigating Privacy AI
[50:12] Wrap up

Chubstep
#469: Hot Diggity Chubstep feat. Patrick Bertoletti

Chubstep

Play Episode Listen Later Jul 18, 2024 33:48


The 2024 Nathan's Famous Hot Dog Eating Champion, MLE #2 ranked eater in the world, and recent blueberry eating world record holder Patrick Bertoletti joins Jrad and Steed on this week's Chubstep. The guys discuss his recent accomplishments, what percentage of blueberry Patrick's body is, Chubstep's experience with hot dogs, how he got started in competitive eating, being the King Hippo and Gerald Ford of competitive eating, the most difficult food competitions so far, getting internet hate for winning while Joey Chestnut wasn't in attendance, how to prepare for a competition, the 4th of July Nathan's after party, the worst foods to bring to a party, the wildest chicken wing competition, and predictions on Joey Chestnut vs Kobayashi.

Valley Girls Podcast
Tour de Saugerties: Immersion in Art

Valley Girls Podcast

Play Episode Listen Later Jul 5, 2024 52:08


Join the Valley Girls as we explore Saugerties, NY, through the lens of art and design. In this episode, we declare our love for Saugerties and discuss creativity and different facets of sustainability. First we talk to Barbara Bravo who fills us in on what to expect from the 22nd annual Saugerties Artists Studio Tour, scheduled for August 10-11, 2024. Learn more at www.saugertiesarttour.org. We also chat with jewelry and accessories designer Emily Li Mandri of MLE, whose statement pieces help to inspire and empower, and whose new brick & mortar store is bringing the bling to Main Street. Check out her gorgeous collection at www.madebyMLE.com and instagram.com/madebymle. ~~~~~ Help support Valley Girls by rating us and leaving a review. Follow us from our show page, visit us at valleygirlspodcast.com, and at instagram.com/valleygirlspodny. Episode music by Robert Burke Warren entitled Painting a Vast Blue Sky can be found at robertburkewarren.bandcamp.com/track/paintng-a-vast-blue-sky.

Davey Mac Sports Program
July 4th Hot Dog Special! (07/01/2024)

Davey Mac Sports Program

Play Episode Listen Later Jul 1, 2024 72:33


It's a brand new Davey Mac Sports Program as we celebrate July 4th and Sports itself! With special guest George Chiger of Major League Eating as he prepares to compete in the Nathan's Famous Hot Dog Eating Contest in Coney Island on Independence Day! What does Chiger think of the controversy surrounding Joey Chestnut and his suspension from the MLE? Is Chiger ready to win the competition now that Chestnut is gone? We'll get all the answers! Plus, Dave discusses ESPN legend and Dave's forever rival Chris Berman possibly pissing his pants at a celebrity golf tournament! Also, we chat about the weird situation with Duke's Kyle Filipowski and his potentially evil fiancée! The DMSP also talks about LeBron James getting his son Bronny drafted to the Lakers! And we look at Aaron Judge's insane season and today being Bobby Bonilla Day for the Mets! It's an action-packed and fun DMSP that you need to experience today! And happy July 4th to ya! BOOM!

We Say What They Can't Radio
The Arena! Podcast - Feat MLE & Young Slim

We Say What They Can't Radio

Play Episode Listen Later Jun 24, 2024 46:00


Harmonizing Stories: Jump into the Musical Journey of artist MLE & producer Young Sim T2R. #thearenapodcast #podcastinterviews #thearena #newmusicpromotions

Major League Eventing Podcast
Mia Farley Returns as a 2x 5* Rider!

Major League Eventing Podcast

Play Episode Listen Later Jun 19, 2024 52:11


Karen and Robby welcome back Mia Farley! Mia was on the MLE podcast almost two years ago to the day, but this time she comes on not just as a 5* rider BUT as a 5* rider who has gone cross country double clear at Maryland and Kentucky. Mia is now living and training out of a farm in Kentucky and has announced that she plans on taking Phelps to Burghley. Mia will start a Phelps membership to help offset the costs of getting them to Burghley. Besides her big Burghley news, Mia has a new horse named Pina Colada with syndication shares available. We wish Mia and Phelps all the best, and let's get them to Burghley!!
PC: Shannon Brinkman
To follow Mia's journey:
https://www.instagram.com/_miafarley/?hl=en
https://www.facebook.com/p/Mia-Farley-Eventing-100064203605315/
If interested in helping Mia get to Burghley or in syndication, email her at miafarley6@gmail.com
Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/
Save 10% off your Redingote purchase, use "MLE10" at checkout!
https://landing.redingoteequestrian.com/mle
Patricia Scott Insurance (484)319-8923
Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE
Checkout the Major League Eventing store!
https://www.majorleagueeventing.com/shop

Knicks Film School
KFS POD | PART 1 - Cap Or No Cap? (2.0) - Adding Talent & Finding Upgrades

Knicks Film School

Play Episode Listen Later Jun 17, 2024 67:03


In part 1 of 2 of this episode, Jeremy takes Jon & all of us through another edition of Cap Or No Cap? looking at all the different talent acquisition or roster upgrade opportunities that could present themselves this offseason. Here in part 1, they specifically look at the Knicks recent history with rookies, lessons to be learned from recent MLE signings, backup point guard options, LIVE reaction to the Maurice Cheeks news and much more! Watch the video version of this podcast on our YouTube channel! FOLLOW MACRI - @JCMacriNBA FOLLOW JEREMY - @TheCohencidence FOLLOW GMAC - @AndrewJClaudio_ CHECK OUT THE KFS MERCH STORE! Learn more about your ad choices. Visit podcastchoices.com/adchoices

Clotheshorse
Episode 204: The SHEIN-sodes, part 1: IPO WTF, Empty Airplanes, & Duty Free

Clotheshorse

Play Episode Listen Later Jun 17, 2024 105:37


SHEIN has changed – and is still changing – what it means to buy and sell clothing on planet Earth. And it's not a change for the better. It's a change we should all care about, no matter where WE buy our clothing. Because SHEIN and what it means for the future of making and selling just about any category of stuff WILL impact every one of us: no matter what we wear, where we live, the kind of job we have, or how much money we have. The SHEIN-ification is such a big deal, so impactful for every one of us, that this episode is part 1 in a short series about SHEIN: where it's been, where it's going, and how it is changing everything.

In this part of the series, we will be tackling:
SHEIN's impending IPO. And WTF is an IPO?
How SHEIN grew and grew and grew (blame 2020 and sweatpants).
What in the heck is the de minimis loophole and how is this benefiting SHEIN?
And, are there really empty airplanes flying back to China every day so they can be loaded back up with SHEIN and Temu parcels?

Also, an update on the Fashion Act and how/why we are still in the early stages of the fight to end fast fashion!

Thanks to this episode's sponsor, Made by MLE, @madebymle on Instagram. Use code CLOTHESHORSE to receive 10% off your first order!

Additional reading (lots of sources this week):
Maxine's statement about the Fashion Act
What is an IPO?
"NEW REPORT FINDS SHEIN EMITS MORE POLLUTION THAN THE COUNTRY OF PARAGUAY," Janelle Sessoms, Fashionista.
"What's ‘Really Scary' About Shein's Breakneck Growth," Jasmin Malik Chua, Sourcing Journal.
"NRF rejects Shein membership as retailer pursues U.S. IPO," Gabrielle Fonrouge, CNBC.
Financial Times.
"Fast fashion retailer Shein hikes prices ahead of IPO," Helen Reid, Reuters.
"Synthetics Anonymous 2.0: Fashion's persistent plastic problem," Changing Markets Foundation.
"You're Buying So Much From Temu And Shein The Air Cargo Industry Can't Keep Up," Cyrus Farivar, Forbes.
"The Time Has Come to Address the De Minimis Loophole," Timothy Lyons, Vermont Law Review.
"Labor unions, domestic manufacturing groups launch coalition to reform de minimis import loophole," Chelsea Cox, CNBC.

And HEY! BUY YOUR TICKETS TO THE CLOTHESHORSE JAMBOREE ASAP!
Want to take advantage of the payment plan? Each payment is $50, spread over 4 payments. The first one happens when you buy your ticket. You will use promo code INSTALLMENT1 at checkout (when you enter your payment info). You will be charged $50 and you will receive your actual ticket via email immediately. Amanda will send you a link to pay the remaining payments on 6/25, 7/25, and the week of the jamboree.

If you want to share your opinion/additional thoughts on the subjects we cover in each episode, feel free to email, whether it's a typed out message or an audio recording: amanda@clotheshorse.world
Did you enjoy this episode? Consider "buying me a coffee" via Ko-fi: ko-fi.com/clotheshorse
Find this episode's transcript (and so much more) at clotheshorsepodcast.com

Clotheshorse is brought to you with support from the following sustainable small businesses:
The Pewter Thimble Is there a little bit of Italy in your soul? Are you an enthusiast of pre-loved decor and accessories? Bring vintage Italian style — and history — into your space with The Pewter Thimble (@thepewterthimble). We source useful and beautiful things, and mend them where needed. We also find gorgeous illustrations, and make them print-worthy. Tarot cards, tea towels and handpicked treasures, available to you from the comfort of your own home.
Responsibly sourced from across Rome, lovingly renewed by fairly paid artists and artisans, with something for every budget. Discover more at thepewterthimble.com
St. Evens is an NYC-based vintage shop that is dedicated to bringing you those special pieces you'll reach for again and again. More than just a store, St. Evens is dedicated to sharing the stories and history behind the garments. 10% of all sales are donated to a different charitable organization each month. New vintage is released every Thursday at wearStEvens.com, with previews of new pieces and more brought to you on Instagram at @wear_st.evens.
Deco Denim is a startup based out of San Francisco, selling clothing and accessories that are sustainable, gender fluid, size inclusive and high quality--made to last for years to come. Deco Denim is trying to change the way you think about buying clothes. Founder Sarah Mattes wants to empower people to ask important questions like, “Where was this made? Was this garment made ethically? Is this fabric made of plastic? Can this garment be upcycled and if not, can it be recycled?” Signup at decodenim.com to receive $20 off your first purchase. They promise not to spam you and send out no more than 3 emails a month, with 2 of them surrounding education or a personal note from the Founder. Find them on Instagram as @deco.denim.
Vagabond Vintage DTLV is a vintage clothing, accessories & decor reselling business based in Downtown Las Vegas. Not only do we sell in Las Vegas, but we are also located throughout resale markets in San Francisco as well as at a curated boutique called Lux and Ivy located in Indianapolis, Indiana. Jessica, the founder & owner of Vagabond Vintage DTLV, recently opened the first IRL location located in the Arts District of Downtown Las Vegas on August 5th. The shop has a strong emphasis on 60s & 70s garments, single stitch tee shirts & dreamy loungewear....

Fescoe in the Morning
Joey Chestnut banned from 4th of July tradition

Fescoe in the Morning

Play Episode Listen Later Jun 12, 2024 38:20


Joey Chestnut banned from 4th of July tradition

Kreckman & Lindahl
6/11/24 Hour 2 - Dan Fouts' 73rd birthday was yesterday, Broncos WRs, Kristaps Porzingis' injury, Beef Tweets: Joey Chestnut vs. MLE

Kreckman & Lindahl

Play Episode Listen Later Jun 12, 2024 46:54


00:00 Dan Fouts' 73rd birthday was yesterday.
12:50 Broncos WRs.
22:35 Kristaps Porzingis' injury.
33:30 Beef Tweets: Joey Chestnut vs. MLE.

BSN Denver Nuggets Podcast
Kris Dunn, Dario Saric, and potential free agents for the Denver Nuggets | DNVR Nuggets Podcast

BSN Denver Nuggets Podcast

Play Episode Listen Later Jun 6, 2024 63:45


Can the Denver Nuggets make a splash in free agency this summer? Probably not. But they could add some helpful pieces that could complete their rotation. The DNVR Nuggets podcast team looks at names that might be available for the MLE or vet minimums. Plus, who will win the NBA Finals?

Start - 0:00
Could the Nuggets have beaten these teams? - 2:30
Previewing the finals - 6:45
Quick hater's ball - 19:50
MLE Free Agents - 27:20
More MLE targets - 36:00
Mr. Nugget(s) - 40:50
Minimum free agents - 44:00
Kyshawn George - 51:40
Superchats - 59:45

An ALLCITY Network Production
PARTY WITH US: https://thednvr.com/events
ALL THINGS DNVR: https://linktr.ee/dnvrsports
SUBSCRIBE: https://www.youtube.com/c/DNVR_Sports
BUY GOLDEN ERA: https://www.triumphbooks.com/golden-era-products-9781637273692.php?page_id=21
Visit Your Front Range Toyota Stores at a location near you - Toyota is the official vehicle of DNVR.
Go to https://millerlite.com/dnvr to find delivery options near you. Or you can pick up some Miller Lite pretty much anywhere they sell beer. Tastes like Miller Time. Celebrate Responsibly. Miller Brewing Company, Milwaukee, Wisconsin.
WATCH THE NUGGETS ON ALTITUDE: https://www.fubotv.com/dnvr - Start your free 14-day trial and receive 15% off your first month!
Manscaped: Get 20% Off and Free Shipping with code NUGGETS20 at https://www.Manscaped.com
Download the Circle K app and join the Inner Circle or visit https://www.circlek.com/inner-circle!
Download the Gametime app, create an account, and use code DNVR for $20 off your first purchase. Terms apply.
Check out FOCO merch and collectibles here https://foco.vegb.net/DNVRNugs and use promo code “DNVR10” for 10% off your order on all non Pre Order items.
Sign up on the Volo app using code DNVR3 to get Volo Pass for only $10/month for the first 3 months.
Download PubPass now in the App Store or Google Play store and use code DNVR when you sign up for 50% off a 1 year subscription.
Exclusively for our listeners, Shady Rays is giving out their best deal of the season. Head to https://shadyrays.com and use code: DNVR for 35% off polarized sunglasses. Try for yourself the shades rated 5 stars by over 300,000 people. When you shop through links in the description, we may earn affiliate commissions. Copyright Disclaimer under section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Learn more about your ad choices. Visit podcastchoices.com/adchoices

FOX Sports Knoxville
The Morning Show HR2 5.17.24 Russel Biven joins the show

FOX Sports Knoxville

Play Episode Listen Later May 17, 2024 48:23


EA announces release date for College Football 25
Russel Biven joins the show to discuss MLE bologna eating contest
Bob calls out The Drive

Major League Eventing Podcast
MLE Recap - What Have We Been Up To and What Is Coming Up!

Major League Eventing Podcast

Play Episode Listen Later Apr 24, 2024 44:54


This week on the Major League Eventing Podcast, Karen and Robby sit down and talk about all that MLE has been up to and what to look forward to. We get asked all the time about giving everyone an update, and we finally got around to doing it! Stay tuned for an exciting year, which will be mentioned... hint - it has to do with Corgis and the Baltimore Ravens. We also are very close to  million downloads and can't thank you, our listeners, and of course our sponsors, enough for everything.
Please support our sponsors:
https://cowboymagic.com/
https://manentailequine.com/
https://exhibitorlabs.com/
https://www.triplecrownfeed.com/
Save 10% off your Redingote purchase, use "MLE10" at checkout!
https://landing.redingoteequestrian.com/mle
Patricia Scott Insurance (484)319-8923
Sign up for our mailing list!
https://mailchi.mp/b232b86de7e5/majorleagueeventingllc?fbclid=IwAR2Wp0jijRKGwGU3TtPRN7wMo-UAWBwrUy2nYz3gQXXJRmSJVLIzswvtClE
Checkout the Major League Eventing store!
https://www.majorleagueeventing.com/shop

Clotheshorse
Episode 191: Fast Jewelry, Knockoffs, and Net 60 with Emily Li Mandri of MLE

Clotheshorse

Play Episode Listen Later Feb 12, 2024 139:51


Emily Li Mandri, founder and designer behind MLE, joins Amanda to talk about all things accessories and jewelry, including:
What is costume jewelry? And why is metal content important?
The drawbacks of "fast jewelry"
What are the challenges of running a small, ethical accessories brand?
How are knockoffs and copycats a big part of the jewelry/accessories industry?
What happens when bigger brands don't pay their invoices?
And so much more!

Read more about what is happening with Neighborhood Goods and unpaid brands here: "Neighborhood Goods Has Closed--Vendors Want their Money."

Amanda gets things started with thoughts about the "Loneliness Economy," capitalism, and community. It turns out that one of the most revolutionary things we can do is... be active and supportive members of our community!

Find Emily and MLE here:
@madebyMLE on Instagram
madebyMLE.com (use code CLOTHESHORSE to get 10% off your order)

Additional reading:
"The Loneliness Economy: How Capitalism Thrives on Isolation," Piyush Patel, Medium.
"Capitalism starves us of love — we don't have to stand by," Alexandra Kauffman, The Emory Wheel.
"Capitalism Subverts Community," Robert Neuwirth, Noema.
"Capitalism has warped our understanding of community — and it's making us vulnerable to manipulation," Valerie Vande Panne, Salon.

Register for the February Clotheshorse Webinar/Hang Out Session: Why new clothes are kinda garbage... February 29, 8pm EST. Free (but please support Clotheshorse via Ko-fi if you enjoy yourself)! Limited to 100 attendees, so register now here.

If you want to share your opinion/additional thoughts on the subjects we cover in each episode, feel free to email, whether it's a typed out message or an audio recording: amanda@clotheshorse.world
Or call the Clotheshorse hotline: 717.925.7417
Did you enjoy this episode?
Consider "buying me a coffee" via Ko-fi: ko-fi.com/clotheshorse
Find this episode's transcript (and so much more) at clotheshorsepodcast.com

Clotheshorse is brought to you with support from the following sustainable small businesses:
High Energy Vintage is a fun and funky vintage shop located in Somerville, MA, just a few minutes away from downtown Boston. They offer a highly curated selection of bright and colorful clothing and accessories from the 1940s-1990s for people of all genders. Husband-and-wife duo Wiley & Jessamy handpick each piece for quality and style, with a focus on pieces that transcend trends and will find a home in your closet for many years to come! In addition to clothing, the shop also features a large selection of vintage vinyl and old school video games. Find them on instagram @ highenergyvintage, online at highenergyvintage.com, and at markets in and around Boston.
The Pewter Thimble Is there a little bit of Italy in your soul? Are you an enthusiast of pre-loved decor and accessories? Bring vintage Italian style — and history — into your space with The Pewter Thimble (@thepewterthimble). We source useful and beautiful things, and mend them where needed. We also find gorgeous illustrations, and make them print-worthy. Tarot cards, tea towels and handpicked treasures, available to you from the comfort of your own home. Responsibly sourced from across Rome, lovingly renewed by fairly paid artists and artisans, with something for every budget. Discover more at thepewterthimble.com
St. Evens is an NYC-based vintage shop that is dedicated to bringing you those special pieces you'll reach for again and again. More than just a store, St. Evens is dedicated to sharing the stories and history behind the garments. 10% of all sales are donated to a different charitable organization each month. New vintage is released every Thursday at wearStEvens.com, with previews of new pieces and more brought to you on Instagram at @wear_st.evens.
Deco Denim is a startup based out of San Francisco, selling clothing and accessories that are sustainable, gender fluid, size inclusive and high quality--made to last for years to come. Deco Denim is trying to change the way you think about buying clothes. Founder Sarah Mattes wants to empower people to ask important questions like, “Where was this made? Was this garment made ethically? Is this fabric made of plastic? Can this garment be upcycled and if not, can it be recycled?” Signup at decodenim.com to receive $20 off your first purchase. They promise not to spam you and send out no more than 3 emails a month, with 2 of them surrounding education or a personal note from the Founder. Find them on Instagram as @deco.denim.
Gabriela Antonas is a visual artist, an upcycler, and a fashion designer, but Gabriela Antonas is also a feminist micro business with radical ideals. She's the one woman band, trying to help you understand, why slow fashion is what the earth needs. If you find your self in New Orleans, LA, you may buy her ready-to-wear upcycled garments in person at the store “Slow Down” (2855 Magazine St). Slow Down Nola only sells vintage and slow fashion from local designers. Gabriela's garments are guaranteed to be in stock in person, but they also have a website so you may support this women owned and run business from wherever you are! If you are interested in Gabriela making a one of a kind garment for you DM her on Instagram at @slowfashiongabriela to book a consultation.
Vagabond Vintage DTLV is a vintage clothing, accessories & decor reselling business based in Downtown Las Vegas. Not only do we sell in Las Vegas, but we are also located throughout resale markets in San Francisco as well as at a curated boutique called Lux and Ivy located in Indianapolis, Indiana. Jessica, the founder & owner of Vagabond Vintage DTLV, recently opened the first IRL location located in the Arts District of Downtown Las Vegas on August 5th. The shop has a strong emphasis on 60s & 70s garments, single stitch tee shirts & dreamy loungewear....