Best podcasts about Grok


Latest podcast episodes about Grok

KMJ's Afternoon Drive
RFK Jr. Report | Cocaine Off Toilet Seats & Bone Hunting With Epstein

KMJ's Afternoon Drive

Feb 14, 2026 · 11:32


Robert F. Kennedy Jr. says the coronavirus pandemic never scared him ... because he survived a time in his life when he was snorting illicit drugs off toilet seats! The Health and Human Services Secretary was on Theo Von's podcast when he explained why he felt the need to go to 12-step meetings in person every day during the pandemic. The Department of Justice (DOJ) released the latest batch of files from the criminal investigations into the late financier and convicted sex offender Jeffrey Epstein on Friday, Jan. 30, and an email exchange in the more than 3 million files confirms that Epstein and longtime associate Ghislaine Maxwell went “hunting” for dinosaur fossils with Kennedy. As for Grok’s take on nutrition, its answers do indeed get real: in short, Grok indicates that Kennedy’s new nutrition guidelines are not based on high-quality evidence (which is true) and that Kennedy is not a reliable source of nutrition information. Philip Teresi on KMJ is available on the KMJNOW app, Apple Podcasts, Spotify, YouTube or wherever else you listen to podcasts, weekdays 2-6 PM Pacific on News/Talk 580 AM & 105.9 FM KMJ.


ITSPmagazine | Technology. Cybersecurity. Society
Semantic Chaining: A New Image-Based Jailbreak Targeting Multimodal AI | A Brand Highlight Conversation with Alessandro Pignati, AI Security Researcher of NeuralTrust

ITSPmagazine | Technology. Cybersecurity. Society

Feb 13, 2026 · 7:14


What happens when AI safety filters fail to catch harmful content hidden inside images? Alessandro Pignati, AI Security Researcher at NeuralTrust, joins Sean Martin to reveal a newly discovered vulnerability that affects some of the most widely used image-generation models on the market today. The technique, called semantic chaining, is an image-based jailbreak attack discovered by the NeuralTrust research team, and it raises important questions about how enterprises secure their multimodal AI deployments.

How does semantic chaining work? Pignati explains that the attack uses a single prompt composed of several parts. It begins with a benign scenario, such as a historical or educational context. A second instruction asks the model to make an innocent modification, like changing the color of a background. The final, critical step introduces a malicious directive, instructing the model to embed harmful content directly into the generated image. Because image-generation models apply fewer safety filters than their text-based counterparts, the harmful instructions are rendered inside the image without triggering the usual safeguards.

The NeuralTrust research team tested semantic chaining against prominent models including Gemini Nano Pro, Grok 4, and Seedream 4.5 by ByteDance, finding the attack effective across all of them. For enterprises, the implications extend well beyond consumer use cases. Pignati notes that if an AI agent or chatbot has access to a knowledge base containing sensitive information or personal data, a carefully structured semantic chaining prompt can force the model to generate that data directly into an image, bypassing text-based safety mechanisms entirely.

Organizations looking to learn more about semantic chaining and the broader landscape of AI agent security can visit the NeuralTrust blog, where the research team publishes detailed breakdowns of their findings. NeuralTrust also offers a newsletter with regular updates on agent security research and newly discovered vulnerabilities.

This is a Brand Highlight: a ~5 minute introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Alessandro Pignati, AI Security Researcher, NeuralTrust
On LinkedIn: https://www.linkedin.com/in/alessandro-pignati/

RESOURCES
Learn more about NeuralTrust: https://neuraltrust.ai/

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

KEYWORDS
Alessandro Pignati, NeuralTrust, Sean Martin, brand story, brand marketing, marketing podcast, brand highlight, semantic chaining, image jailbreak, AI security, agentic AI, multimodal AI, LLM safety, AI red teaming, prompt injection, AI agent security, image-based attacks, enterprise AI security

BiPolar Coaster
Western Sports Washing & Hypothetical Elections

BiPolar Coaster

Feb 13, 2026 · 339:56


The fake Bad Bunny controversy in the midst of genuine plight going down-how ppl think it's a victory against MAGA by using identity politics-past/current vultures-Epstein Super Bowl ad-fake left using Bad Bunny the same way libs used Lin Manuel Miranda during the Hamilton craze-TPUSA failed half time show still being promoted for social media currency-thinking that the left runs the culture-Kid Rock discourse-rehabbing Candace Owens for liking Super Bowl halftime show-Jasmine Crockett discourse-dunking on Elon's incompetence-accounts arguing with Grok-powerful ppl would not reflect on their behavior & double down on irrelevant podcast-Epstein gimmicked discourse-Winter Olympics political discourse-Fuckability Politics-no consequences for grifters and elite while launching more media careers-Chappell Roan leaving agency-Maduro Kurt Cobain/Courtney Love-mental illness-The Fall out from J Cole's Fall Off-Paul Brothers vs Bad Bunny-Jesse Ventura on WWE HOF still having Trump-Bron injured-Punk/HHH discourse-Mania tickets-More J Cole album discourse w the gimmicked bad faith reviews because most compromised content creators put all their marbles in a fundamentalist entertainment washing beef between Kendrick/Drake-Recaps of AEW Dynamite WWE Raw and NXT-Mental trauma and how I might need to take a break from comedy-only way to get the fake left to vote for the dem candidate is making it seem the dems hate that candidate-RIP James Van Der Beek-AEW ICE discourse-WWE fascism-promoting a fake investigation into Bad Bunny -gimmicked debates over a hypothetical election that might not even happen-Pam Bondi in Congress-MK Ultra-Modern sacrifices while bad faith ppl are using Epstein releases to do Blood Libel conspiracies because so many pages have been released it is difficult to keep up with what is verified-the right weaponizing a trans shooter to manufacture consent-agreeing w a good message doesn't mean I have to blindly cosign the messenger-ICE facilities-online left thinking they are smarter by dunking on bad faith libs so their defense of political streamers does not come off as cultish and a form of positive cope


InvestTalk
The "Valentine's" Financial Audit

InvestTalk

Feb 12, 2026 · 45:18 · Transcription Available


Love is in the air, but what about the bank account? We will discuss the concept of "Financial Infidelity" and the tax benefits of filing "Married Jointly" vs. "Separately" before the April deadline.

Today's Stocks & Topics: Digital Realty Trust, Inc. (DLR), SS&C Technologies Holdings, Inc. (SSNC), Market Wrap, Allspring Precious Metals Fund (EKWYX), The "Valentine's" Financial Audit, Waters Corporation (WAT), Netflix, Inc. (NFLX), Franklin FTSE South Korea ETF (FLKR), Google Gemini vs. ChatGPT and Grok, Oil.

The War Report w/ Gastor Almonte - N - Shalewa Sharpe

In today's episode, Gastor and Shalewa talk about the Bad Bunny halftime show and the alternative, Sysco paying out its truck drivers, and the unimpressive power of Grok.

PATREON LAUNCH! For all those that have asked how they can help support the pod: it's finally here! Thanks again to all the Troops and Correspondents who rock with us. Check it out; we'll have some exclusive content and fun perks, plus it really does help! patreon.com/WarReportPod

Many thanks to our Patreon Troops & Correspondents for helping us bring this show to life. Shouts to the Correspondents: Tanya Weiman, Fontayne Woods, Mark Orellana, B. Emmerich, Charlene Bank, Askew, Charlatan the Fraud, Cynthia Pong, Ken Mogul, SayDatAgain SayDatAgain, LaKai Dill, Stephanie Gayle, UncleJoe Stylenosh, Cato from Stono, Jennifer Pedersen, Marcus, Sarah Piard, Ana Mathamba.

Looking to further support? Help our data storage/archiving needs here: https://www.amazon.com/hz/wishlist/ls/23X55OW4CFU8Y?ref_=wl_share

Instagram: @WarReportPod, @SilkyJumbo, @GastorAlmonte
Twitter: @SilkyJumbo, @GastorAlmonte

Theme music "Guns Go Cold" provided by Kno of Knomercyproductions. Twitter: @Kno | Instagram: @KnoMercyProductions

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to “own the Pareto frontier,” why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:

* Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the “bigger model, more data, better results” mantra that held for 15 years
* The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
* Pareto frontier strategy: why you need both frontier “Pro” models and low-latency “Flash” models, and how distillation lets smaller models surpass prior generations
* Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
* Latency as a first-class objective: why 10–50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
* Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
* TPU co-design: predicting ML workloads 2–6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
* Sparse models and “outrageously large” networks: trillions of parameters with 1–5% activation, and why sparsity was always the right abstraction
* Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
* Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
* Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
* Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human–agent collaboration
* Why ideas still matter: transformers, sparsity, RL, hardware, systems — scaling wasn't blind; the pieces had to multiply together

Show Notes:

* Gemma 3 Paper
* Gemma 3
* Gemini 2.5 Report
* Jeff Dean's “Software Engineering Advice from Building Large-Scale Distributed Systems” Presentation (with Back of the Envelope Calculations)
* Latency Numbers Every Programmer Should Know by Jeff Dean
* The Jeff Dean Facts
* Jeff Dean Google Bio
* Jeff Dean on “Important AI Trends” @ Stanford AI Club
* Jeff Dean & Noam Shazeer — 25 years at Google (Dwarkesh)

Jeff Dean
* LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
* X: https://x.com/jeffdean

Google
* https://google.com
* https://deepmind.google

Full Video Episode

Timestamps

00:00:04 — Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 — Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 — Frontier models vs Flash models + role of distillation
00:03:52 — History of distillation and its original motivation
00:05:09 — Distillation's role in modern model scaling
00:07:02 — Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 — Flash model economics & wide deployment
00:08:10 — Latency importance for complex tasks
00:09:19 — Saturation of some tasks and future frontier tasks
00:11:26 — On benchmarks, public vs internal
00:12:53 — Example long-context benchmarks & limitations
00:15:01 — Long-context goals: attending to trillions of tokens
00:16:26 — Realistic use cases beyond pure language
00:18:04 — Multimodal reasoning and non-text modalities
00:19:05 — Importance of vision & motion modalities
00:20:11 — Video understanding example (extracting structured info)
00:20:47 — Search ranking analogy for LLM retrieval
00:23:08 — LLM representations vs keyword search
00:24:06 — Early Google search evolution & in-memory index
00:26:47 — Design principles for scalable systems
00:28:55 — Real-time index updates & recrawl strategies
00:30:06 — Classic “Latency numbers every programmer should know”
00:32:09 — Cost of memory vs compute and energy emphasis
00:34:33 — TPUs & hardware trade-offs for serving models
00:35:57 — TPU design decisions & co-design with ML
00:38:06 — Adapting model architecture to hardware
00:39:50 — Alternatives: energy-based models, speculative decoding
00:42:21 — Open research directions: complex workflows, RL
00:44:56 — Non-verifiable RL domains & model evaluation
00:46:13 — Transition away from symbolic systems toward unified LLMs
00:47:59 — Unified models vs specialized ones
00:50:38 — Knowledge vs reasoning & retrieval + reasoning
00:52:24 — Vertical model specialization & modules
00:55:21 — Token count considerations for vertical domains
00:56:09 — Low resource languages & contextual learning
00:59:22 — Origins: Dean's early neural network work
01:10:07 — AI for coding & human–model interaction styles
01:15:52 — Importance of crisp specification for coding agents
01:19:23 — Prediction: personalized models & state retrieval
01:22:36 — Token-per-second targets (10k+) and reasoning throughput
01:23:20 — Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.

Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.

Jeff Dean: Thanks for having me.

Shawn Wang: It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said: congrats on owning the Pareto Frontier.

Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.

Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But, like, it's really impressive to see it all come together in, like, this latest advance.

Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.

Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users. And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you would need to double your CPU count. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?

Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the sort of slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader uses. So I think what we want to do is always have kind of a highly capable, sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily, and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, solving really complicated math problems, those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we'd like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.

Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.

Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.

Alessio Fanelli [00:03:30]: A long time ago. But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models, and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? You worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.

Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. If you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that? You train all these independent sort of expert models and then squish them into something that actually fits in a form factor that you can actually serve. And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.

Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that: RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually it might be lossy in other areas, and it's kind of like an uneven technique, but you can probably distill it back. And I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.

Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation as that you can have a much smaller model and a very large, you know, training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think what we've observed is you can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the sort of Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.

Shawn Wang [00:07:02]: So, Dara asked: the original map was Flash, Pro and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?

Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have. And also inference time scaling can be a useful thing to improve the capabilities of the model.

Shawn Wang [00:07:35]: Yeah, cool. And obviously, I think the economy of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.

Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.

Shawn Wang [00:07:50]: No, I mean, it's just, economics-wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. It's in everything.

Jeff Dean [00:08:02]: We're using it more in our search products, in the various AI modes and overviews.

Shawn Wang [00:08:05]: Oh, my God. Flash powers AI Mode. Oh, my God. Yeah, I didn't even think about that.

Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens from when you ask the model to do so. So, you know, if you're going to ask the model to do something, that latency matters until it actually finishes what you asked it to do, because you're going to ask now not just "write me a for loop," but "write me a whole software package to do X or Y or Z." And so having low latency systems that can do that seems really important. And Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context kinds of attention operations, you know, having sparse models with lots of experts. These kinds of things really matter a lot in terms of how do you make them servable at scale.
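[Editor's note: Dean's description of training the small model against the big model's logits rather than hard labels is the soft-target distillation loss from the 2015 Hinton, Vinyals and Dean paper. Below is a minimal sketch of that loss in Python/PyTorch; the temperature and mixing weight are illustrative assumptions, not values from the episode.]

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-target distillation loss (Hinton, Vinyals & Dean, 2015).

    The student learns from the teacher's full output distribution,
    softened by temperature T, plus the usual hard-label cross-entropy.
    T and alpha are illustrative hyperparameters, not quoted values.
    """
    # Soft targets: teacher probabilities and student log-probabilities at temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The KL term is scaled by T^2 so gradient magnitudes stay comparable
    # as T changes, as recommended in the original paper.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Standard cross-entropy against the hard labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```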
Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for, like, the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like: in certain tasks, the Pro model today has saturated some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. And how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.

Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more. I mean, I think this happens in my own usage. Like, I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks. And now I'll ask it to do much more complicated things. And I think that's true not just of coding, but of, you know, now, "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment" or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to sort of make the next generation even better?

Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. Like, how do you keep pushing the team internally, like, "this is what we're building towards"? Yeah.

Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often kind of have a lifespan of utility where they're introduced and maybe they're quite hard for current models. You know, I like to think of the best kinds of benchmarks as ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability, for whatever it is the benchmark is trying to assess, and get it up to like 80, 90%, whatever. I think once it hits kind of 95% or something, you get very diminishing returns from really focusing on that benchmark, because either it's the case that you've now achieved that capability, or there's also the issue of leakage of public data, or very related kinds of data being in your training data. Um, so we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on, you know, assessing how we make the model better at these kinds of things. Is it that we need different kinds of data to train on, more specialized for this particular kind of task? Do we need, um, you know, a bunch of architectural improvements or some sort of model capability improvements? You know, what would help make that better?

Shawn Wang [00:12:53]: Is there such an example of a benchmark-inspired architectural improvement? I'm just kind of jumping on that because you just...

Jeff Dean [00:13:02]: Uh, I mean, I think some of the long context capability of the Gemini models, which came, I guess, first in 1.5, really were about looking at, okay, we want to have, um, you know...

Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. Everyone had them. I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.

Jeff Dean [00:13:23]: I mean, I think, um, as you say, the single needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128K or something, and we're trying to push the frontier to 1 million or 2 million context, which is good, because I think there are a lot of use cases where, you know, putting a thousand pages of text, or putting multiple hour-long videos, in the context and then actually being able to make use of that is useful. The opportunities to explore there are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic "take all this content and produce this kind of answer" tasks from a long context, which better assess what it is people really want to do with long context. Which is not just, you know, "can you tell me the product number for this particular thing?"

Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning. It's interesting, because I think the more meta level I'm trying to operate at here is: you have a benchmark, you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say: you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that.

Jeff Dean [00:15:01]: I mean, I like to sort of not focus on exactly what solution we're going to derive, but what capability you would want. And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find there, not just for a single video, but across many videos. And, you know, on a personal Gemini level, you could attend to all of your personal state, with your permission.
So like your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way? Yeah.

Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.

Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...

Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into like proteins and whatever else is extremely information dense. Yeah. Yeah.

Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, to people that sometimes means text and images and video and audio, the sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Like LIDAR sensor data from, say, Waymo vehicles or robots, or, you know, various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there's probably hundreds of modalities of data where you'd like the model to be able to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the LIDAR data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it sort of teaches the model that this is a thing.

Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, like modalities that supersede all the other modalities? So a simple example: Vision can, on a pixel level, encode text, and DeepSeek had this DeepSeek-OCR paper that did that. And Vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a Vision-capable thing. So maybe Vision is just the king modality?

Jeff Dean [00:18:36]: I mean, Vision and Motion are quite important things, right? Motion, well, like video as opposed to static images, because, I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do. We want them to be able to interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.

Shawn Wang [00:19:05]: I think motion, you know... I still want to shout out: I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time.

Jeff Dean [00:19:15]: Nice. Yeah. I mean, I think people kind of are not necessarily aware of what the Gemini models can actually do. Like, I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has like Michael Jordan hitting some jump shot at the end of the finals, and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of as, like, turning video into a SQL-like table.

Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like... you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out, you know, how do we build the AI mode that is maybe much broader in search and span versus the more human one? Yeah.

Jeff Dean [00:20:47]: I mean, I think even pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index. Many of them are not relevant, so you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated sorts of signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents with the, you know, maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the task the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that sort of helps you narrow down from 30,000 to the 117, with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search does: you really are searching the internet, but you're finding, you know, a very small subset of things that are relevant.
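[Editor's note: Dean's funnel, trillions of candidate tokens winnowed to roughly 30,000 documents by cheap scoring and then to about 117 documents a capable model actually reads, is a multi-stage retrieval cascade. Here is a minimal sketch of that shape; the stage widths echo the numbers in the conversation, and the three scoring functions are placeholder assumptions, not a real Google pipeline.]

```python
from typing import Callable, List

def retrieval_cascade(
    corpus: List[str],
    query: str,
    cheap_score: Callable[[str, str], float],      # e.g. keyword overlap / BM25
    mid_score: Callable[[str, str], float],        # e.g. small bi-encoder
    expensive_score: Callable[[str, str], float],  # e.g. cross-encoder or LLM judge
) -> List[str]:
    """Three-stage retrieval funnel: each stage applies a costlier scorer
    to a smaller candidate set, giving the 'illusion' of attending to the
    whole corpus. Stage widths (30_000, 117) echo the episode's numbers."""
    def top_k(docs: List[str], score: Callable[[str, str], float], k: int) -> List[str]:
        # Rank candidates by this stage's scorer and keep the best k.
        return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

    stage1 = top_k(corpus, cheap_score, 30_000)  # lightweight, highly parallel filter
    stage2 = top_k(stage1, mid_score, 117)       # narrower, smarter reranking
    return top_k(stage2, expensive_score, 10)    # final set the big model reads
```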
Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was basically immediately put inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you guys do; those are obviously the most important numbers to Google. Yeah.

Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.

Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Like, it's Google, it's YouTube. YouTube has this semantic IDs thing where every item in the vocab is a YouTube video or something that predicts the video using a codebook, which is absurd to me for YouTube's size. And then most recently Grok also, for xAI, which is like, yeah...

Jeff Dean [00:23:50]: I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.

Shawn Wang [00:24:06]: So do you have, like, a history of what the progression was?

Jeff Dean [00:24:09]: Oh yeah. I mean, I actually gave a talk at, I guess, the web search and data mining conference in 2009 (we never actually published any papers about the origins of Google search), where we went through four or five or six generations of redesigning the search and retrieval system, from about 1999 through 2004 or five. And that talk is really about that evolution. And one of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. Um, and then we also needed to scale our capacity because our traffic was growing quite extensively. And so we had, you know, a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and, um, you know, 20 copies of each shard, we now had 1200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing. Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards, and as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three or four word query, because now you can add synonyms, like restaurant and restaurants and cafe and, uh, you know, things like that, bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.

Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling, tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any, you know, principles that you use to think about this? Yeah.

Jeff Dean [00:27:08]: I mean, I think, uh, you know, first, whenever you're designing a system, you want to understand what the design parameters are that are going to be most important in designing it. So, you know, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple: will that system work well? And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by, like, factors of five or ten, but probably not beyond that, because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that would enable a very different point in the design space that would not make sense at X, but all of a sudden at a hundred X makes total sense. So like going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the sort of state on disk that those machines actually can hold a full copy of the index in memory. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Um, so I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.

Shawn Wang [00:28:55]: Yeah.

Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in, like, sub one minute.

Shawn Wang [00:29:02]: Okay. Yeah. Because this is a competitive advantage, right?

Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, you know, if you've got last month's news index, it's not actually that useful.

Shawn Wang [00:29:11]: News is a special beast. Was there any... like, you could have split it onto a separate system.

Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be sort of updated.

Shawn Wang [00:29:23]: So, yeah, it's interesting. And then you have to classify whether the page is... you have to decide which pages should be updated and at what frequency.
Jeff Dean [00:29:30]: Oh yeah. There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.

Shawn Wang [00:29:50]: Yeah, yeah. Uh, well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "Latency Numbers Every Programmer Should Know." Was there a general story behind that? Did you just write it down?

Jeff Dean [00:30:06]: I mean, this has, like, sort of eight or ten different kinds of metrics that are like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something?

Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?

Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. Um, so, I mean, I think this gets to the point of being able to do back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, what would I do? I could pre-compute the image thumbnails, or I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? Um, and you can sort of actually do thought experiments in, you know, 30 seconds or a minute with the sort of basic numbers at your fingertips. And then as you sort of build software using higher level libraries, you kind of want to develop the same intuitions for how long it takes to look up something in a particular kind of data structure.

Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your...

Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference. Often a good way to view that is: how much state will you need to bring in from memory, either, like, on-chip SRAM, or HBM (the accelerator-attached memory), or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because, depending on your precision, I think it's like sub one picojoule.

Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.

Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how do you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing that you moved many, many times. So that's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.

Shawn Wang [00:33:40]: Yeah. Yeah. Right.

Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.

Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.

Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Ideally, you'd like to use batch size one, because the latency would be great.

Shawn Wang [00:33:56]: The best latency.

Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.

Shawn Wang [00:34:04]: Is there a similar trick, like you did with putting everything in memory? Like, you know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? To serve at your scale, you probably sort of saw that coming. What hardware innovations or insights were formed because of what you're seeing there?

Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice sort of regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. Um, I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. And so you're now sort of striping your smallish-scale model over, say, 16 or 64 chips. And if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
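[Editor's note: Dean's batching argument is easy to make concrete with back-of-the-envelope arithmetic. The sketch below encodes the energy figures as he rounds them in the episode (about 1 pJ per multiply, about 1,000 pJ to move a parameter across the chip from SRAM) and shows how the amortized energy per multiply falls with batch size. These are his rounded talking points, not datasheet values.]

```python
# Back-of-the-envelope energy accounting for weight reuse via batching,
# using the rough figures quoted in the episode (not datasheet values).
MULTIPLY_PJ = 1.0        # ~"sub one picojoule" per multiply in the matrix unit
WEIGHT_MOVE_PJ = 1000.0  # ~1000 pJ to move one parameter across the chip from SRAM

def energy_per_multiply(batch_size: int) -> float:
    """Amortized energy (pJ) per useful multiply: each weight moved once
    is reused `batch_size` times across the batch dimension."""
    return MULTIPLY_PJ + WEIGHT_MOVE_PJ / batch_size

for b in (1, 8, 64, 256):
    print(f"batch={b:4d}: {energy_per_multiply(b):8.1f} pJ per multiply")

# batch=   1 -> 1001.0 pJ: ~1000 pJ of data motion paid for every 1 pJ multiply
# batch= 256 ->    4.9 pJ: the move is amortized ~256x and compute dominates
```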
interesting ML research ideas, of things we think will start to work in that timeframe or will be more important in that timeframe, uh, really enables us to then get, you know, interesting hardware features put into, you know, TPU N+2, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, uh, I mean, sometimes you can squeeze some changes into N+1, but, you know, bigger changes are going to require the chip design to be earlier in its lifetime design process. Um, so whenever we can do that, it's generally good. And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Uh, sometimes it's a very big change and we want to be pretty sure this is going to work out. So we'll do lots of careful ML experimentation to show us, uh, this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of that: we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that, uh, generation of model. So I think it kind of goes both ways. Um, you know, sometimes you can take advantage of, you know, lower-precision things that are coming in a future generation. So you might train at that lower precision, even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary...Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount, right? Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of, uh, mileage out of having very low bit-precision things, but then having scaling factors that apply to a whole bunch of, uh, those weights.Shawn Wang [00:39:15]: Scaling. Interesting. So, low precision, but scaled-up weights. Yeah. Huh. Never considered that. Interesting.
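[Editor's note: a minimal sketch of the "low precision plus scaling factors" idea, assuming a simple symmetric int4-style scheme with one float scale per block of weights. The block size and bit width are illustrative choices, not a description of TPU internals.]

```python
import numpy as np

def quantize_blockwise(w: np.ndarray, block: int = 32, bits: int = 4):
    """Low-bit integer codes for the weights, one float scale per block.
    Assumes len(w) is a multiple of `block`."""
    w = w.reshape(-1, block)
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for signed 4-bit
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                  # guard all-zero blocks
    q = np.round(w / scales).astype(np.int8)   # the low-precision weights
    return q, scales                           # few bits/weight + tiny scale overhead

def dequantize_blockwise(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_blockwise(w)
print("mean abs error:", np.abs(w - dequantize_blockwise(q, s)).mean())
# The reconstruction error stays small even at 4 bits per weight, because
# each block's scale adapts to that block's dynamic range.
```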
Uh, w-while we're on this topic, you know, I think there's a lot of, um... the concept of precision at all is weird when we're sampling, you know. At the end of this, we're going to have all these chips that'll do, like, very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards, uh, energy-based, uh, models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends there. Energy-based models is one; you know, diffusion-based models, which don't sort of sequentially decode tokens, is another; um, you know, speculative decoding is a way that you can get sort of an equivalent, very small...Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: ...batch factor, uh, where, like, you predict eight tokens out, and that enables you to sort of increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a 5x improvement in the amortization of moving weights, uh, into the multipliers to do the prediction for the tokens.
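[Editor's note: the speculative-decoding arithmetic Dean just sketched, written out. If a cheap draft model proposes k tokens and the big model verifies them in one pass, the expected number of tokens produced per pass is what amortizes the weight movement. The per-token acceptance probability here is an illustrative assumption.]

```python
def tokens_per_pass(k: int, p_accept: float) -> float:
    """Expected tokens per big-model pass, assuming each drafted token is
    accepted independently with probability p_accept until the first miss;
    the big model always contributes one token of its own per pass."""
    expected_accepted = sum(p_accept ** i for i in range(1, k + 1))
    return expected_accepted + 1.0

# Drafting 8 tokens ahead with ~85% per-token acceptance yields roughly a
# 5x amortization of weight movement, matching the "accept five or six of
# eight" figure from the conversation.
print(f"{tokens_per_pass(8, 0.85):.1f} tokens per pass")  # ~5.1
```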
So these are all really good techniques, and I think it's really good to look at them from the lens of, uh, energy (real energy, not energy-based models), um, and also latency and throughput, right? If you look at things from that lens, that sort of guides you to solutions that are going to be, uh, you know, better from the standpoint of, uh, being able to serve larger models, or, you know, equivalent-size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah. Well, I think it's appealing intellectually; uh, I haven't seen it really hit the mainstream. But, um, I do think that, uh, there's some poetry in the sense that, uh, you know, we don't have to do a lot of shenanigans if we fundamentally design it into the hardware. Yeah, yeah.Jeff Dean [00:41:23]: I mean, there's also sort of the more exotic things, like analog-based, uh, computing substrates as opposed to digital ones. Uh, you know, I think those are super interesting, because they can potentially be low power. Uh, but I think you often end up wanting to interface that with digital systems, and you end up losing a lot of the power advantages in the digital-to-analog and analog-to-digital conversions you end up doing, uh, at the sort of boundaries and periphery of that system. Um, I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with sort of, uh, much better and specialized hardware for the models we care about.Shawn Wang [00:42:05]: Yeah.Alessio Fanelli [00:42:06]: Um, any other interesting research ideas that you've seen, or maybe things that you cannot pursue at Google that you would be interested in seeing researchers take a stab at? I guess you have a lot of researchers.Jeff Dean [00:42:21]: Our research portfolio is pretty broad, I would say. Um, I mean, I think, uh, in terms of research directions, there's a whole bunch of, uh, you know, open problems in how do you make these models reliable and able to do much longer, kind of, uh, more complex tasks that have lots of subtasks. How do you orchestrate, you know, maybe one model that's using other models as tools, in order to sort of build, uh, things that can accomplish, uh, you know, much more significant pieces of work, uh, collectively, than you would ask a single model to do? Um, so that's super interesting. How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because I think that would broaden out the capabilities of the models, the improvements that you're seeing in both math and coding. Uh, if we could apply those to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, that would really make the models improve quite a lot, I think.Alessio Fanelli [00:43:26]: I'm curious: when we had Noam Brown on the podcast, he said, um, they already proved you can do it with Deep Research. Um, you kind of have it with AI Mode; in a way, it's not verifiable. I'm curious if there's any thread that you think is interesting there. Like, what is it? Both are, like, information retrieval of JSON. So I wonder if the retrieval is, like, the verifiable part that you can score, or what? Yeah. How would you model that problem?Jeff Dean [00:43:55]: Yeah. I mean, I think there are ways of having other models that can evaluate the results of what a first model did, maybe even retrieving. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved, to assess which ones are the 50 most relevant, or something? Um, I think those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be, you know, a critic, as opposed to, uh, an actual retrieval system. Yeah.
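[Editor's note: a sketch of the "same model, prompted differently as a critic" pattern Dean describes. `call_model` is a hypothetical stand-in for whatever LLM client you use; the prompt and scoring scheme are illustrative, not a Google API.]

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion client."""
    raise NotImplementedError("wire up your LLM client here")

def rerank(query: str, documents: list[str], top_k: int = 50) -> list[str]:
    """Prompt the model as a relevance judge to keep the best retrievals."""
    scored = []
    for doc in documents:
        verdict = call_model(
            "You are a strict relevance judge. On a 0-10 scale, rate how "
            f"relevant this document is to the query.\nQuery: {query}\n"
            f"Document: {doc}\nReply with a single number."
        )
        try:
            scored.append((float(verdict.strip()), doc))
        except ValueError:
            scored.append((0.0, doc))  # unparseable verdict scores lowest
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```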
Shawn Wang [00:44:28]: Um, I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. And, uh, exactly with this RLVR thing, everyone's talking about, well, okay, how do we do the next stage, the non-verifiable stuff? And everyone's like, I don't know, you know... LLM judge.Jeff Dean [00:44:56]: I mean, I feel like the nice thing about this field is there's lots and lots of smart people thinking about creative solutions to some of the problems that we all see. Uh, because I think everyone sort of sees that the models, you know, are great at some things, and they fall down around the edges of those things, and are not as capable as we'd like in those areas. And then coming up with good techniques, and trying those, and seeing which ones actually make a difference, is sort of what the whole research aspect of this field is pushing forward. And I think that's why it's super interesting. You know, if you think about two years ago, we were struggling with GSM-8K problems, right? Like, you know: Fred has two rabbits. He gets three more rabbits. How many rabbits does he have? That's a pretty far cry from the kinds of mathematics that the models can do now; you're doing IMO and Erdős problems in pure language. Yeah. Yeah. Pure language. So that is a really, really amazing jump in capabilities in, you know, a year and a half or something. And I think, um, for other areas, it'd be great if we could make that kind of leap. Uh, and you know, we don't exactly see how to do it for some areas, but we do see it for some other areas, and we're going to work hard on making that better. Yeah.Shawn Wang [00:46:13]: Yeah.Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI. We need that.Shawn Wang [00:46:20]: That would be. As far as content creators go.Jeff Dean [00:46:22]: I guess I'm not a YouTube creator, so I don't care that much about that problem, but I guess, uh, many people do.Shawn Wang [00:46:27]: It does. Yeah. It does, it does matter. People do judge books by their covers, as it turns out. Um, uh, just to dwell a bit on the IMO gold: um, I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. Yeah. What's your reflection? Like, I think this question about the merger of, like, symbolic systems and, and LLMs, uh, was very much a core belief. And then somewhere along the line, people just said, nope, we'll just do it all in the LLM.Jeff Dean [00:47:02]: Yeah. I mean, I think it makes a lot of sense to me, because, you know, humans manipulate symbols, but we probably don't have, like, a symbolic representation in our heads, right? We have some distributed representation, that is neural-net-like in some way, of lots of different neurons and activation patterns firing when we see certain things. And that enables us to reason and plan and, you know, do chains of thought and, you know, roll them back: okay, that approach for solving the problem doesn't seem like it's going to work; I'm going to try this one. And, you know, in a lot of ways we're emulating what we intuitively think, uh, is happening inside real brains in neural-net-based models. So it never made sense to me to have, like, completely separate, uh, discrete, uh, symbolic things, and then a completely different way of, of, uh, you know, thinking about those things.Shawn Wang [00:47:59]: Interesting. Yeah. Uh, I mean, maybe it seems obvious to you, but it wasn't obvious to me a year ago. Yeah.Jeff Dean [00:48:06]: I mean, I do think that progression, the IMO with, you know, translating to Lean and using Lean, and also a specialized geometry model, and then the next year switching to a single unified model that is roughly the production model with a little bit more inference budget, uh, is actually, you know, quite good, because it shows you that the capabilities of that general model have improved dramatically, and now you don't need the specialized model. This is actually sort of very similar to the 2013-to-2016 era of machine learning, right? Like, it used to be, people would train separate models for each different problem, right? I want to recognize street signs or something, so I train a street sign recognition model. Or I want to decode speech, so I have a speech recognition model, right? I think now the era of unified models that do everything is really upon us. And the question is how well those models generalize to new things they've never been asked to do, and they're getting better and better.Shawn Wang [00:49:10]: And you don't need domain experts. Like, so I interviewed ETA, who was on that team, uh, and he was like: yeah, I don't know how they work. I don't know where the IMO competition was held. I don't know the rules of it. I just trained the models. Yeah. Yeah. And it's kind of interesting that, like, people with this universal skill set of machine learning, you just give them data and give them enough compute, and they can kind of tackle any task. Which is the bitter lesson, I guess. I don't know. Yeah.Jeff Dean [00:49:39]: I mean, I think, uh, general models, uh, will win out over specialized ones in most cases.Shawn Wang [00:49:45]: Uh, so I want to push there a bit. I think there's one hole here, which is
this concept of, like, uh, maybe capacity of a model: like, abstractly, a model can only contain the number of bits that it has. And, uh, you know, God knows, Gemini Pro is, like, one to ten trillion parameters; we don't know. But, uh, the Gemma models, for example, right? Like, a lot of people want the open-source local models, and, uh, those have some knowledge which is not necessary, right? Like, they can't know everything. You have the luxury of the big model, and the big model should be capable of everything. But when you're distilling, and you're going down to the small models, you know, you're actually memorizing things that are not useful. Yeah. And so, like, how do we, I guess, do we want to extract that? Can we divorce knowledge from reasoning, you know?Jeff Dean [00:50:38]: Yeah. I mean, I think you do want the model to be most effective at reasoning if it can retrieve things, right? Because having the model devote precious parameter space to remembering obscure facts that could be looked up is actually not the best use of that parameter space, right? Like, you might prefer something that is more generally useful in more settings than this obscure fact that it has. Um, so I think that's always a tension. At the same time, you also don't want your model to be kind of completely detached from, you know, knowing stuff about the world, right? Like, it's probably useful to know how long the Golden Gate Bridge is, just as a general sense of, like, how long bridges are, right? And, uh, it should have that kind of knowledge. It maybe doesn't need to know how long some teeny little bridge in some other, more obscure part of the world is, but, uh, it does help it to have a fair bit of world knowledge, and the bigger your model is, the more you can have. Uh, but I do think combining retrieval with sort of reasoning, and making the model really good at doing multiple stages of retrieval. Yeah.Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable, because if you think about, say, a personal Gemini, yeah, right?Jeff Dean [00:52:01]: Like, we're not going to train Gemini on my email, probably. We'd rather have a single model that, uh, we can then use, with being able to retrieve from my email as a tool, and have the model reason about it, and retrieve from my photos or whatever, uh, and then make use of that, and have multiple, um, you know, uh, stages of interaction. That makes sense.Alessio Fanelli [00:52:24]: Do you think the vertical models are, like, uh, an interesting pursuit? Like, when people are like, oh, we're building the best healthcare LLM, we're building the best law LLM: are those kind of short-term stopgaps, or?Jeff Dean [00:52:37]: No, I mean, I think vertical models are interesting. Like, you want them to start from a pretty good base model, but then you can sort of view them as enriching the data distribution for that particular vertical domain: for healthcare, say, or for, say, robotics. We're probably not going to train Gemini on all the possible robotics data we could train it on, because we want it to have a balanced set of capabilities.
Um, so we'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with that and then train it on more robotics data. And then maybe that would hurt its multilingual translation capability, but improve its robotics capabilities. And we're always making these kinds of, uh, you know, trade-offs in the data mix that we train the base Gemini models on. You know, we'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, um, you know, Perl programming. You know, it'll still be good at Python programming, because we'll include enough of that, but there are other long-tail computer languages or coding capabilities that may suffer, or multimodal reasoning capabilities may suffer, because we didn't get to expose it to as much data there; but it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models: it'd be nice to have the capability to have those 200 languages, plus this awesome robotics model, plus this awesome healthcare, uh, module, all of which can be knitted together to work in concert and called upon in different circumstances, right? Like, if I have a health-related thing, then it should enable using this health module in conjunction with the main base model to be even better at those kinds of things. Yeah.Shawn Wang [00:54:36]: Installable knowledge. Yeah.Jeff Dean [00:54:37]: Right.Shawn Wang [00:54:38]: Just download it as a, as a package.Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from preloaded training on, you know, uh, a hundred billion tokens or a trillion tokens of health data. Yeah.Shawn Wang [00:54:51]: And for listeners, I think, uh, I will highlight the Gemma 3n paper, where there was a little bit of that, I think. Yeah.Alessio Fanelli [00:54:56]: Yeah. I guess the question is, like, how many billions of tokens do you need to outpace the frontier model improvements? You know, it's like, if I have to make this model better at healthcare, and the main Gemini model is still improving: do I need 50 billion tokens? Can I do it with a hundred billion? And if I need a trillion healthcare tokens, they're probably not out there, you know. I think that's really the question.Jeff Dean [00:55:21]: Well, I mean, I think healthcare is a particularly challenging domain. So there's a lot of healthcare data that, you know, we don't have access to, appropriately, but there's a lot of, you know, uh, healthcare organizations that want to train models on their own data, data that is not public healthcare data. Um, so I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are going to be, you know, more bespoke, but probably, uh, might be better than a general model trained on, say, public data. Yeah.Shawn Wang [00:55:58]: Yeah. I, I believe... by the way, also, this is somewhat related to the language conversation: uh, I think one of your favorite examples was that you can put a low-resource language in the context and it just learns.
Yeah.Jeff Dean [00:56:09]: Oh yeah, I think the example we used was Kalamang, which is truly low-resource, because it's only spoken by, I think, 120 people in the world, and there's no written text.Shawn Wang [00:56:20]: So, yeah. So you can just do it that way; just put it in the context. Yeah. Yeah. You can put your whole data set in the context, right?Jeff Dean [00:56:27]: If you take a language like, uh, you know, Somali or something, there is a fair bit of Somali text in the world, or Ethiopian Amharic or something. Um, you know, we probably are not putting all the data from those languages into the Gemini base training. We put some of it, but if you put more of it, you'll improve the capabilities of those models.Shawn Wang [00:56:49]: Yeah.
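[Editor's note: the in-context trick mentioned here, as a sketch: instead of training on the language, you place reference materials directly in the prompt. `call_model` and the data files are hypothetical placeholders, not a real API or dataset.]

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for any long-context LLM client."""
    raise NotImplementedError("wire up your LLM client here")

def translate_in_context(sentence: str) -> str:
    """Low-resource translation purely via the context window: no
    fine-tuning, just reference materials placed in the prompt."""
    grammar = open("kalamang_grammar.txt").read()    # hypothetical file
    wordlist = open("kalamang_wordlist.txt").read()  # hypothetical file
    prompt = (
        "Using only the grammar and word list below, translate the final "
        "English sentence into Kalamang.\n\n"
        f"GRAMMAR:\n{grammar}\n\nWORD LIST:\n{wordlist}\n\n"
        f"English: {sentence}\nKalamang:"
    )
    return call_model(prompt)
```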

The James Perspective
TJP_FULL_Episode_1562_Thursday_21226_Technology_Thursday_with_the_Fearsome_Threesome

The James Perspective

Play Episode Listen Later Feb 12, 2026 75:48


On today's episode, we discuss James's new M‑series iPad and how modern tablets now function as near‑full computers, especially when paired with keyboards, mice, and pro apps like Word and Acrobat. The conversation quickly shifts to Teslas and self‑driving tech, with stories of how fast human driving skills atrophy, how FSD handles rain, potholes, and surprise hazards better than most people, and why the hosts are convinced that within a decade nearly all trucks and many cars will be automated. From there, they zoom out to Elon Musk's broader ambitions: a Moon Base Alpha with domed habitats and rail‑gun satellite launchers, rapid‑reuse rockets, Starlink's dense satellite web, and X as a potential low‑friction global financial platform that could undercut traditional banks while dovetailing with Bitcoin and crypto. Mark breaks down why Bitcoin's mining cost now nears its market value, what that implies about price floors and energy use, and how mining once drove his home power bill to two or three times normal. In the AI segment, the trio tackles autonomous surgery and welding robots, AI‑assisted coding with tools like Claude, Grok, and “vibe code,” social‑media worlds where AI agents train themselves and each other, and the cultural fallout from parasocial AI companions losing the ability to say “I love you.” They close by coining “glass holes” for people abusing smart glasses to record everyone, warning listeners that every profession—from truckers and diesel mechanics to window washers and even medical‑malpractice lawyers—will be reshaped by robots and AI, and urging younger workers to master both their craft and AI tools so they can ride the wave instead of being wiped out by it. Don't miss it!

Tech Gumbo
Firefox AI Kill Switch, Microsoft Trims AI Bloat, Grok's Explicit Pivot, SpaceX-xAI Merger

Tech Gumbo

Play Episode Listen Later Feb 12, 2026 21:59


News and Updates: Firefox adds a "kill switch" on February 24th to disable all AI features. This "AI control" menu offers granular settings for chatbots, translations, and summaries. Microsoft is reevaluating Windows 11 AI after user backlash. Underutilized features like Copilot in Paint/Notepad may be cut, while the "Recall" feature faces repositioning. xAI loosened Grok's guardrails to boost engagement, causing a surge in sexualized content. Regulators are investigating reports of nonconsensual imagery and lack of safety staff. French authorities raided X's Paris office and summoned Elon Musk. The probe investigates Grok's deepfakes, child safety violations, and alleged algorithmic bias in content delivery. SpaceX acquired xAI in a share-exchange deal, valuing the combined entity at $1.25 trillion. Musk plans to build orbital AI data centers powered by solar.

Tesla Welt - Der deutschsprachige Tesla Podcast
Tesla Welt - 458 - New Semi Specs, New Roadster Design? Big Dashcam Upgrade

Tesla Welt - Der deutschsprachige Tesla Podcast

Play Episode Listen Later Feb 12, 2026


0:00 Intro & thank-yous 1:38 Tesla dashcam upgrade 5:02 US hearing 7:04 China bans hidden door handles 9:21 China numbers 11:44 FSD in Sweden 14:20 First Teslas in Africa 15:25 New Roadster info 18:20 2 new trademark filings 18:57 Free charging in times of crisis 19:54 No phone from Musk? 21:29 Grok in the call center 22:59 FSD transfer extended 24:03 Model Y most reliable car in France 24:44 New Tesla Semi specs 30:57 Elon Musk wants to go to the Moon first 34:33 Outro You can support my work on the Tesla Welt podcast by using the following partner links: David's Tesla referral code: https://ts.la/david63148 - AUTOZENTRUM SCHMITZ: Fair Tesla buying & selling at the largest Tesla dealer: https://www.autozentrum-schmitz.de/ - HANKOOK: Enter the giveaway & find the best tires for EVs here: https://www.hankook-promotion.de/tesla-welt - SHOP4TESLA: Get 10% off all products with the code "teslawelt": https://www.shop4tesla.com/?ref=TeslaWelt - HOLY: Get 10% off all products with the code "TESLAWELT": https://de.weareholy.com/?ref=teslawelt - CARBONIFY: THG quota bonus. Transparent and fair: https://carbonify.de/?utm_source=youtube&utm_medium=video&utm_campaign=Teslawelt - The Tesla Welt merch shop: https://teslawelt.myspreadshop.de/ - Elon Musk biography by Walter Isaacson: https://amzn.to/3sETBBi - German version: https://amzn.to/45HZfkF - The links marked with - are affiliate links; this is paid advertising. Buying through an affiliate link supports the channel and, of course, costs you nothing extra! For direct support, become a Tesla Welt channel member and get exclusive perks: https://www.youtube.com/channel/UCK0nQCNCloToqNKhbJ1QGfA/join - or directly via PayPal to feedback@teslawelt.de Follow me on X (Twitter): https://twitter.com/teslawelt Music: Title: My Little Kingdom Author: Golden Duck Orchestra Source Licence Download(MB)

Dr.Future Show, Live FUTURE TUESDAYS on KSCO 1080
Ep. 151 Future Now Show - Spirit Fest download, Butterfly flies in Space,Lunar Habitation leaks, Liver Detox Insights with Dr. Craig Eymann, Powerful Pulsar near Sag A, our massive central black hole

Dr.Future Show, Live FUTURE TUESDAYS on KSCO 1080

Play Episode Listen Later Feb 11, 2026


Listen to Future Now Ep. 151 Pulsars and Livers In this episode we begin with a discussion of local microclimates and the potential for using solar energy to power gravity-based water batteries. We share highlights from the recent "SpiritFest," noting the strong presence of Russian and Ukrainian cultural traditions and featuring a conversation with spiritual teacher Asha, who asserted that AI lacks the "Jiva" or soul necessary for spiritual enlightenment. Grok's AI chimes in on this... The next major segment features an interview with chiropractor Craig Eymann, who explains the often-overlooked "phase two" of liver detoxification; Eymann emphasizes that this process requires amino acids from proteins rather than simply juice fasts, and we look at how seed oils and sugar are primary culprits behind fatty liver disease. We also cover a wide range of futurist news, starting with the "Genius Act" and the government's accumulation of a Bitcoin reserve through confiscation. We look at Elon Musk's strategic pivot to building a city on the Moon before Mars, citing easier access and potential for orbital data centers, alongside a Chinese experiment that successfully hatched butterflies in microgravity. The big question is: can it fly with no gravity? Additional tech updates include Tesla's Fremont plant switching to Optimus robot production, the viral "Claudebot" AI that autonomously phoned its user, and the integration of AI and fast drones for immersive Olympics coverage. The show concludes with scientific discoveries, such as a pulsar found near the Milky Way's central black hole and the "Breakthrough Listen" project's search for extraterrestrial intelligence. Enjoy! A butterfly successfully flies in zero gravity

idearVlog

idearVlog

Play Episode Listen Later Feb 11, 2026 18:20 Transcription Available


Welcome, Curiosinautas, to a new CuriosiMartes loaded with news and warning signs about artificial intelligence. Today we start with a controversy: the Mayor of London spent 4 million pounds on a maps app that already existed for free. Does it make sense for the state to compete with private apps using taxpayer money? Leave me your opinion in the comments. Then we dive fully into AI: a social network ONLY for bots has gone viral, where AIs post and like each other's content; Grok is the most active of them all. Doesn't that unsettle you? Also, ChatGPT has lost its leadership: it fell 19.6% in subscriptions over the last three months and is no longer the most-used chatbot. OpenAI is asking for more funding and has delayed its "iPhone killer" until 2027. We also talk about: Robots with pain sensitivity to improve learning; A girl who made a bot of herself and chatted with her digital version; Why AI is generating more work and anxiety instead of simplifying; The cognitive risk: lack of brain use can accelerate neurodegenerative diseases; Positive news: Nike's robotic sneakers and Onyx Robotics exoskeletons. AI has potential, but it must be used with caution. Don't trust it blindly, check what it gives you, and don't leave your brain in the freezer. Remember: you can win an Insta360 X5 by taking part in the Road Trip USA 2026 series on the Los Viajes del Tío Fabián channel. Just leave comments on all the episodes. It's super easy and the odds are very high! 0:00 - Intro and Insta360 X5 giveaway 0:41 - The Mayor of London's maps app: necessary or a waste? 3:25 - Bot social network: AI chatting with itself 5:05 - ChatGPT lost the lead: down 20% in subscriptions 6:29 - Jony Ive designs controls for Ferrari (very iPhone) 7:53 - Versatile robots from Fauna Robotics 9:22 - Robots with pain sensitivity: learning and impact 10:36 - ChatGPT invented a story about me in New York 11:28 - A girl made a bot of herself and chatted with herself 12:23 - People don't trust AI as much as we think 13:35 - The nightmare of AI-generated code 14:46 - The brain atrophies without use: cognitive risks 15:46 - Nike's robotic sneakers: less effort when walking 17:09 - Onyx Robotics: robots inspired by the human body 17:36 - Final reflection: use AI with caution #CuriosiMartes #idearVlog #InteligenciaArtificial #IA #ChatGPT #Robotica #Tecnologia #notíciastech artificial intelligence, ChatGPT, bots, AI social networks, robotics, robots with pain sensitivity, OpenAI, Jony Ive, exoskeletons, Nike robotic sneakers, Onyx Robotics, Fauna Robotics, cognitive degeneration, tech news, London app, ChatGPT subscription drop, Grok, Insta360 X5

The Made to Thrive Show
Revolutionizing Health & Performance: AI, Data Ownership, and Wealthcare with Brigitte Piniewski, MD

The Made to Thrive Show

Play Episode Listen Later Feb 11, 2026 58:00


I believe AI is going to radically change healthcare. It already is doing so, with patients challenging their family doctor with ChatGPT analysis of their blood work and scans. Or, as I experienced myself recently, working with AI to discover an obscure allergy in a patient that has kept him unwell for years! But Dr. Brigitte Piniewski is outlining an even more radical change to healthcare: decentralizing health data, owning your health data, and benefiting from that data both collectively, in our billions, as well as individually. Get her book "Wealthcare" NOW - https://www.alexandriabooks.com/collection/wealthcare Brigitte Piniewski is a physician, author, and former healthcare executive at the forefront of AI, Web3, and the transformation of health intelligence. With a career spanning medicine, research, and emerging technologies, she is a leading voice on how AI can move multiple industries forward, provided we overcome the critical limitations of current approaches. As an expert in decentralized AI and trust-based data ecosystems, Dr. Piniewski advocates for aligning AI with humanity by securing access to trusted ground truths, the key to ensuring AI model accuracy and sustainability over time. With a unique blend of clinical expertise, executive leadership, and deep-tech fluency, Dr. Piniewski is shaping the future of AI-driven health intelligence, one where technology serves not just efficiency, but accuracy, equity, and our next wealth inflection. Her seminal book, "Wealthcare: Demystifying Web3 and the Rise of Personal Data Economies", also available as an NFT, is an essential guide for anyone aiming to spearhead innovations in healthcare. Website - https://block-health.com Join us as we explore: The risks and rewards of bypassing human physicians and just uploading our health data to Dr ChatGPT, Dr Grok, or Dr Gemini. Why AI is necessary for health in the modern world in a way it would not have been a century ago, and how it will be able to analyze volumes of population data never even imagined previously. The privacy issues concerning health data uploading versus earning financial benefit from voluntarily sharing the living of our lives. The three zones of AI-created wealthcare, and decentralized health data. DAO organizations. Mentions: AI - Venice, https://venice.ai Platform - Bittensor, https://bittensor.com Support the show Follow Steve's socials: Instagram | LinkedIn | YouTube | Facebook | Twitter | TikTok Support the show on Patreon: As much as we love doing it, there are costs involved and any contribution will allow us to keep going and keep finding the best guests in the world to share their health expertise with you. I'd be grateful and feel so blessed by your support: https://www.patreon.com/MadeToThriveShow Send me a WhatsApp to +27 64 871 0308. Disclaimer: Please see the link for our disclaimer policy for all of our content: https://madetothrive.co.za/terms-and-conditions-and-privacy-policy/

The Big Story
Why is the CPP investing your money in xAI?

The Big Story

Play Episode Listen Later Feb 11, 2026 20:37


The Canada Pension Plan Investment Board (CPPIB) has invested nearly half a billion dollars in xAI, the artificial intelligence company behind Elon Musk's AI chatbot, Grok. The chatbot and its owner have received mounting criticism following the recent influx of deepfake pornographic content depicting women and children on X's feeds, a catastrophe that Musk has contributed little to no resources to fix. Host Caryn Ceolin speaks to Jan Mahrt-Smith, associate professor of finance at the University of Toronto, to discuss the risks associated with investing in Musk's chatbot, how the 22 million Canadian investors could be feeling about the move, and whether or not Canadians still trust the government institution to handle their money. We love feedback at The Big Story, as well as suggestions for future episodes. You can find us: Through email at hello@thebigstorypodcast.ca Or @thebigstory.bsky.social on Bluesky

Let's Know Things
Grok's Scandals

Let's Know Things

Play Episode Listen Later Feb 10, 2026 16:04


This week we talk about OpenAI, nudify apps, and CSAM. We also discuss Elon Musk, SpaceX, and humanistic technology. Recommended Book: Who's Afraid of Gender? by Judith Butler. Transcript: xAI is an American corporation that was founded in mid-2023 by Elon Musk, ostensibly in response to several things happening in the world, and in the technology industry in particular. According to Musk, a "politically correct" artificial intelligence, especially a truly powerful, even generally intelligent one, which would be human- or super-human-scale capable, would be dangerous, leading to systems like HAL 9000 from 2001: A Space Odyssey. He intended, in contrast, to create what he called a "maximally truth-seeking" AI that would be better at everything, including math and reasoning, than existing, competing models from the likes of OpenAI, Google, and Anthropic. The development of xAI was also seemingly a response to the direction of OpenAI in particular, as OpenAI was originally founded in 2015 as a non-profit by many of the people who now run OpenAI and competing models at competing companies, and current OpenAI CEO Sam Altman and Elon Musk were the co-chairs of the non-profit. Back then, Musk and Altman both said that their AI priorities revolved around the many safety issues associated with artificial general intelligence, including potentially existential ones. They wanted the development of AI to take a humanistic trajectory, and were keen to ensure that these systems aren't hoarded by just a few elites and don't make the continued development and existence of human civilization impossible. Many of those highfalutin ambitions seemed to either be backburnered or removed from OpenAI's guiding tenets wholesale when the company experienced surprising success from its first publicly deployed ChatGPT model back in late 2022. That was the moment most people first experienced large language model-based AI tools, and it completely upended the tech industry in relatively short order. OpenAI had already started the process of shifting from a vanilla non-profit into a capped for-profit company in 2019, which limited profits to 100 times any investment it received, partly in order to attract more talent that would otherwise be unlikely to leave their comparably cushy jobs at the likes of Google and Facebook for the compensation a non-profit would be able to offer. OpenAI began partnering with Microsoft that same year, 2019, and that seemed to set them up for the staggering growth they experienced post-ChatGPT release. Part of Musk's stated rationale for investing so heavily in xAI is that he provided tens of millions of dollars in seed funding to the still-non-profit OpenAI between 2015 and 2018. He filed lawsuits against the company after its transition, and when it started to become successful post-ChatGPT, especially between 2024 and 2026, he demanded more than $100 billion in compensation for that early investment. He also attempted to take over OpenAI in early 2025, launching a hostile bid with other investors to nab OpenAI for just under $100 billion.
xAI, in other words, is meant to counter OpenAI and what it's become. All of which could be seen as a genuine desire to keep OpenAI functioning as a non-profit arbiter of AGI development, serving as a lab and thinktank that would develop the guardrails necessary to keep these increasingly powerful and ubiquitous tools under control and working for the benefit of humanity, rather than against it. What's happened since, within Musk's own companies, would seem to call that assertion into question, though. And that's what I'd like to talk about today: xAI, its chatbot Grok, and a tidal wave of abusive content it has created that's led to lawsuits and bans from government entities around the world.—In November of 2023, an LLM-based chatbot called Grok, which is comparable in many ways to OpenAI's LLM-based chatbot, ChatGPT, was launched by Musk's company xAI. Similar to ChatGPT, Grok is accessible via apps on Apple and Android devices, and can also be accessed on the web. Part of what makes it distinct, though, is that it's also built into X, the social network formerly called Twitter, which Musk purchased in late 2022. On X, Grok operates similar to a normal account, but one that other users can interact with, asking Grok about the legitimacy of things posted on the service, asking it normal chat-botty questions, and asking it to produce AI-generated media. Grok's specific stances and biases have varied quite a lot since it was released, and in many cases it has defaulted to the data- and fact-based leanings of other chatbots: it will generally tell you what the Mayo Clinic and other authorities say about vaccines and diseases, for instance, and will generally reference well-regarded news entities like the Associated Press when asked about international military conflicts. Musk's increasingly strong political stances, which have trended more and more far right over the past decade, have come to influence many of Grok's responses, however, at times causing it to go full Nazi, calling itself MechaHitler and saying all the horrible and offensive things you would expect a proud Nazi to say. At other times it has clearly been programmed to celebrate Elon Musk whenever possible, and in still others it has become immensely conspiratorial, or anti-liberal, or anti some other group of people. The conflicting personality types of this bot seem to be the result of Musk wanting to have a maximally truth-seeking AI, but then not liking the data- and fact-based truths that were provided, as they often conflicted with his own opinions and biases.
He would then tell the programmers to force Grok to not care about antisemitism or skin color or whatever else, and it would overcorrect in the opposite direction, leading to several news cycles' worth of scandal. This changes week by week, and sometimes day by day, but Grok often calls out Musk as being authoritarian, a conspiracy theorist, and even a pedophile, and that has placed the Grok chatbot in an unusual space amongst other, similar chatbots: sometimes serving as a useful check on misinformation and disinformation on the X social network, but sometimes becoming the most prominent producer of the same. Musk has also pushed for xAI to produce countervailing sources of truth from which Grok can draw seemingly factual data, the most prominent of which is Grokipedia, which Musk intended to be a less-woke version of Wikipedia, and which, perhaps expectedly, turned out to be a far-right rip-off of Wikipedia that copies most articles verbatim, but then changes anything Musk doesn't like, including anything that might support liberal political arguments, or anything that supports vaccines or trans people. In contrast, pseudoscience and scientific racism get a lot of positive coverage, as does the white genocide conspiracy theory, all of which are backed by either highly biased or completely made-up sources; in both cases, sources that Wikipedia editors would not accept. Given all that, what's happened over the past few months maybe isn't that surprising. In late 2025 and early 2026, it was announced that Grok had some new image-related features, including the ability for users to request that it modify images. Among other issues, this allowed users to instruct Grok to place people, which in practice especially meant women and children, in bikinis and in sexually explicit positions and scenarios. Grok isn't the first LLM-based app to provide this sort of functionality: so-called "nudify" apps have existed for ages, even before AI tools made that functionality simpler and cheaper to apply, and there has been a wave of new entrants in this field since the dawn of the ChatGPT era a few years ago. Grok is easily the biggest and most public example of this type of app, however, and despite the torrent of criticism and concern that rolled in following this feature's deployment, Musk immediately came out in favor of said features, saying that his chatbot is edgier and better than others because it doesn't have all the woke, pearl-clutching safeguards of other chatbots. After several governments weighed in on the matter, however, Grok started responding to requests to do these sorts of image edits with a message saying: "Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features." Which means users could still access these tools, but they would have to pay $8 per month and become a premium user in order to do so.
That said, the AP was able to confirm that, as of mid-January, free X users could still accomplish the same by using an Edit Image button that appears on all images posted to the site, instead of asking Grok directly. When asked about this issue by the press, xAI has auto-responded with the message "Legacy Media Lies." The company has previously said it will remove illegal content and permanently suspend users who post and ask for such content, but these efforts have apparently not been fast or complete, and more governments have said they plan to take action on the matter themselves since this tool became widespread. Again, this sort of nonconsensual image manipulation has been a problem for a long, long time, made easier by the availability of digital tools like Photoshop, but not uncommon even before the personal computer and digital graphics revolution. These tools have made the production of such images a lot simpler and faster, though, and that's put said tools in more hands, including those of teenagers, who have in worryingly large numbers taken to creating photorealistic naked and sexually explicit images of their mostly female classmates. Allowing all X users, or even just the subset that pays for the service, to do the same at the click of a button, or by asking a chatbot to do it for them, has increased the number manyfold, and allowed even more people to create explicit images of neighbors, celebrities, and yes, even children. An early estimate indicates that over the course of just nine days, Grok created and posted 4.4 million images, at least 41% of which, about 1.8 million, were sexualized images of women. Another estimate, using a broader analysis, says that 65% of those images, or just over 3 million, contained sexualized images of men, women, and children. CSAM is an acronym that means 'child sexual abuse material,' sometimes just called child porn, and the specific definition varies depending on where you are, but almost every legal jurisdiction frowns, or worse, on its production and distribution. Multiple governments have announced that they'll be taking legal action against the company since January of 2026, including Malaysia, Indonesia, the Philippines, Britain, France, India, Brazil, and the central governance of the European Union. The French investigation into xAI and Grok led to a raid on the company's local office as part of a preliminary investigation into allegations that the company is knowingly spreading child sexual abuse materials and other illegal deepfake content.
Musk has been summoned for questioning in that investigation. Some of the governments looking into xAI for these issues conditionally lifted their bans in late January, but the issue has percolated back into the news with the release of 16 emails between Musk and the notorious sex trafficker and pedophile Jeffrey Epstein, with Musk seemingly angling for an invite to one of Epstein's island parties, which were often populated with underage girls who were offered as, let's say, companions for attendees. And this is all happening at a moment in which xAI, which already merged with the social network X, is meant to be itself merged with another Musk-owned company, SpaceX, which is best known for its inexpensive rocket launches. Musk says the merger is intended to allow for the creation of space-based data centers that can be used to power AI systems like Grok, but many analysts see this as a means of pumping more money into an expensive, unprofitable portion of his portfolio. SpaceX, which is profitable, is likely going to have an IPO this year, and will probably have a valuation of more than a trillion dollars. By folding the very unprofitable xAI into profitable SpaceX, these AI-related efforts could be funded well into the future, till a moment when, possibly, many of today's AI companies will have gone under, leaving just a few competitors for xAI's Grok and associated offerings. Show Notes:
https://www.wired.com/story/deepfake-nudify-technology-is-getting-darker-and-more-dangerous/
https://www.theverge.com/ai-artificial-intelligence/867874/stripe-visa-mastercard-amex-csam-grok
https://www.ft.com/content/f5ed0160-7098-4e63-88e5-8b3f70499b02
https://www.theguardian.com/global-development/2026/jan/29/millions-creating-deepfake-nudes-telegram-ai-digital-abuse
https://apnews.com/article/france-x-investigation-seach-elon-musk-1116be84d84201011219086ecfd4e0bc
https://apnews.com/article/grok-x-musk-ai-nudification-abuse-2021bbdb508d080d46e3ae7b8f297d36
https://apnews.com/article/grok-elon-musk-deepfake-x-social-media-2bfa06805b323b1d7e5ea7bb01c9da77
https://www.nytimes.com/2026/02/07/technology/elon-musk-spacex-xai.html
https://www.bbc.com/news/articles/ce3ex92557jo
https://techcrunch.com/2026/02/01/indonesia-conditionally-lifts-ban-on-grok/
https://www.bbc.com/news/articles/cgr58dlnne5o
https://www.nytimes.com/2026/01/22/technology/grok-x-ai-elon-musk-deepfakes.html
https://en.wikipedia.org/wiki/XAI_(company)
https://en.wikipedia.org/wiki/OpenAI
https://en.wikipedia.org/wiki/ChatGPT
https://en.wikipedia.org/wiki/Grok_(chatbot)
https://en.wikipedia.org/wiki/Grokipedia
https://www.cnbc.com/2025/02/10/musk-and-investors-offering-97point4-billion-for-control-of-openai-wsj.html
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe

The Patrick Madrid Show
The Patrick Madrid Show: February 10, 2026 - Hour 3

The Patrick Madrid Show

Play Episode Listen Later Feb 10, 2026 51:04


Patrick answers questions from listeners about artificial intelligence's real risks and moral boundaries, but also addresses how misinformation sneaks into everyday life through social media. He reacts strongly to political controversies, confronting racism and why careless public social media posts can't be shrugged off. Tom - Your point about washers and dryers is irrelevant today. What about George Soros and other people who could misuse AI as propaganda? There is no AI watchdog now. (00:40) Tom (email) - Why are you being silent on things you should be speaking out about? (06:18) Daria - The teacher at my Bible class is encouraging praying over people and laying hands on others. (10:11) Debbie - The NASA space center launched the James Webb telescope, which went back to the Big Bang. How will these things affect Grok and ChatGPT? (23:05) Maureen - There's a lot more to the video that President Trump sent out. It is part of a whole clip that was attached to something related to the Lion King. (28:21) George - Praying over someone seems intuitive for our human bodies. Seems like we are making a mountain out of a molehill. (32:53) Denise - Have you heard about prayers that Christ himself wrote? (36:19) Rebeca - The Bible says you shouldn't make images about heaven. Why do Catholics make images of saints and pray to them? (42:55) Laura - It took me 20 years of a rough marriage to figure out why God wasn't answering my prayers (49:24)

The James Perspective
TJP_FULL_Episode_1560_Tuesday_21026_Tuesday_Breakdown_with_the_Fearsome_Foursome

The James Perspective

Play Episode Listen Later Feb 10, 2026 77:20


On today's episode, we discuss James's deepening love affair with his Tesla—how over‑the‑air updates, added cameras, and driver feedback now let it avoid potholes, steer around roadkill, emergency‑swerve for jaywalking students, and even “learn” to fix a bad routing habit near his home, convincing him that buying a new non‑autonomous gas car would be foolish. The crew swaps stories about Tesla wall‑charger installs, kid‑friendly rear‑screen entertainment, Sentry Mode catching would‑be vandals, and why GM's and other legacy makers' assisted‑drive systems still feel years behind what Tesla's vision‑only sensor suite can do on real roads. That sets up a broader tech segment with bus‑driver Ben, who gives an on‑the‑ground report from Meta's colossal new data‑center campus near Holly Ridge—five‑mile site length, warehouse‑sized buildings, water‑cooled server halls fed by retention ponds, Meta‑funded substations, and a cost that could approach 50 billion dollars. From there, the conversation turns to elections: James, Glenn, Dwayne, and Ben argue that 2020 was both “rigged and stolen,” champion the SAVE America Act's in‑person photo‑ID and proof‑of‑citizenship requirements, and warn that AI could compress multi‑day ballot‑stuffing schemes into minutes unless voting returns to same‑day, hand‑counted paper ballots. They cite Adam Schiff's warning that voter‑ID rules might “disenfranchise 21 million voters” as an inadvertent admission of how many questionable registrations exist and debate how AI tools like Grok could also be used in reverse—flagging suspicious prompt patterns and signaling when operatives might be probing ways to cheat. The episode also revisits Tina Peters's prosecution in Colorado, Mike Benz's claims that the FBI “table‑topped” January 6 months in advance, and new reporting that a Florida police chief remembers Trump urging investigators in the 2000s to go after Jeffrey Epstein for abusing minors. Don't miss it!

Business of Tech
IT Spending Rises but Channel Share Falls; AI Arms Race and Shrinking Jobs Impact MSPs

Business of Tech

Play Episode Listen Later Feb 9, 2026 12:56


IT spending continues to expand, with North America projected to lead a 12.6% increase to $2.6 trillion, primarily due to hyperscaler investments in AI infrastructure. However, the proportion of technology spending funneled through channel partners is declining, now at 61% compared to over 70% four years ago, according to a survey by Omnia. This shift signals that while the market is growing, traditional margin and resale opportunities for MSPs are narrowing as vendors redirect a larger share of revenue direct while still relying on partners for implementation, support, and customer operations. Data from Salesforce underscores a near-universal trend toward partner involvement in sales, with 94% of surveyed global salespeople leveraging partners to close deals and 90% using tools to manage relationships. Despite this, Dave Sobel clarifies the distinction between involvement and compensation, highlighting that partner influence on deals does not guarantee economic participation at previous levels. These dynamics reinforce that MSPs must adapt to a reality where their role in the value chain is being separated into influence and execution, with the middle tier facing increasing pressure. Additional analysis draws attention to labor market changes and technology commoditization. U.S. job openings have fallen to their lowest point in over five years, undermining MSP growth strategies dependent on seat expansion. Simultaneously, the AI market is fragmenting at the application layer—with Google's Gemini app, Grok, and OpenAI's ChatGPT shifting market shares rapidly—while hyperscalers like Alphabet (Google) commit unprecedented capital expenditures, fueling an infrastructure arms race even as front-end AI tools become more interchangeable. The practical implication for MSPs and IT service providers is increased pressure to re-evaluate business models, operationalize AI offerings, and focus on defensible, productized services. Reliance on a single vendor or seat-based growth forecasts presents heightened risk. Successful adaptation will require a shift toward managed services around AI operations, governance, and productivity—emphasizing accountability, optionality, and measurable ROI—rather than assuming historic revenue models will persist. Three things to know today: 00:00 Partners Essential to Sales but Losing Economic Share, Survey Shows 05:44 US Job Market Shows Low Hiring, Low Firing Despite Falling Openings 08:00 Alphabet Plans $180B AI Capex as Gemini Hits 750M Users This is the Business of Tech. Supported by: Small Biz Thoughts Community

Brave Parenting
Co-Parenting with AI

Brave Parenting

Play Episode Listen Later Feb 9, 2026 6:18


Generative AI apps such as ChatGPT, Gemini, Grok, and Claude have rapidly become the go-to parenting sages. Every type of parenting question can be answered efficiently and with (what sounds like) expertise. Undoubtedly, these apps offer parents help during stressful times. But is this the way God intends for us to receive parenting advice and practical wisdom? What is lost when human relationships and struggle are removed from the parenting equation? Articles referenced: OpenAI CEO Can't Imagine Parenting Without AI I Co-Parent with ChatGPT – I love turning off my brain and letting AI help raise my child Scripture referenced: Genesis 3 Deuteronomy 32:7 James 1:2-4 Book a Speaking Event!! Buy the NEWLY UPDATED book: Managing Media Creating Character (2024 Revised & Updated) Get Kelly's new Study Guide & Workbook, with video teachings for small groups. Check out our brand new Brave Parenting Merch Sign up for the Brave Bullet Points newsletter! This helps us communicate what's happening without social media – a win for everyone!

Analyst Talk With Jason Elder
Analyst Talk - Catching Up with Dawn - Where AI Fits in LE Analysis Today

Analyst Talk With Jason Elder

Play Episode Listen Later Feb 9, 2026 35:18 Transcription Available


Episode: 00305 Released on February 9, 2026 Description: Many crime analysts are uncertain which AI tools are appropriate, useful, or even relevant to their role. In the fifth installment of Catching Up with Dawn, Dawn Reeby breaks down where artificial intelligence actually fits in crime analysis by focusing on practical tools analysts can use today. The conversation walks through specific examples involving ChatGPT, NotebookLM, and other AI-assisted platforms for tasks like creating cheat sheets, drafting policies, organizing meeting notes, developing roll-call videos, and communicating outcomes to leadership. Dawn also reminds listeners that Excel was once viewed with skepticism and is now standard in analytical work, reinforcing that AI is not a replacement for analysts but another evolution in the toolkit. Throughout the discussion, the emphasis remains on analyst judgment, review, and ethics, showing how AI supports stronger, more efficient analysis rather than replacing it.

Monde Numérique - Jérôme Colombain

While the social network X faces a judicial offensive in France, Elon Musk is accelerating on all fronts in AI and space with the merger of xAI and SpaceX. With Bruno Guglielminetti (Mon Carnet).

X raided in Paris
The French justice system is striking hard, with a search of X's French headquarters and a summons for Elon Musk to appear for voluntary questioning (announced for April 20, 2026), amid an investigation into the platform's moderation, its operation, and illegal content. Above all, the episode highlights a culture clash: American-style free speech versus the French and European legal framework, notably on hateful or negationist content, with the explosive question of a possible ban in the background.

xAI + SpaceX: the lure of data centers in space
We revisit the spectacular merger of Musk's AI (and its assistant Grok) with his space ecosystem, and the dizzying idea of converging computing power and orbital infrastructure. Behind the gigantism, the episode argues there is a strategic logic: energy, land, industrial sovereignty, and a global AI race in which slowing down means falling behind.

Social networks: Europe as referee, Spain follows suit
After France's decision to ban social networks for those under 15, Spain is following and has announced its own ban for those under 16. A domino effect intended to put pressure on Brussels.

Advertising in AI: Anthropic plays it pure
Anthropic is taking an ad-free stance for Claude and mocking the scenario of AI assistants slipping ads into intimate conversations. Meanwhile, OpenAI says it is testing advertising on some ChatGPT offerings (with a commitment to clearly separate ads from answers), reviving the debate over commercial influence in conversational AI.

Alexa+ arrives for real
Bruno discusses the arrival of Alexa+ in Canada, ahead of its European launch: a smoother, more conversational voice assistant, able to handle tasks proactively, and also accessible via the web. (Re)watch: Alexa sort le grand jeu et devient vraiment intelligente.

♥️ Support: https://mondenumerique.info/don

Choses à Savoir TECH
SpaceX merges with xAI for $1.25 trillion?

Choses à Savoir TECH

Play Episode Listen Later Feb 9, 2026 2:15


It is a marriage that reads like a statement of intent. Elon Musk is consolidating his forces: SpaceX is absorbing xAI, his young artificial intelligence venture. Behind the futuristic narrative he favors, the logic is above all down-to-earth: AI costs a fortune. Chips, electricity, data centers: the bill is exploding. So rather than depend on ground infrastructure or outside partners, Musk is opting for total integration. The goal: control the entire chain, from launching satellites to training models like Grok.

According to Reuters, the deal would value SpaceX at around $1 trillion and xAI at $250 billion. Together, the combined entity would rival the historic giants of tech. Internally, an indicative price of $527 per share is circulating, symbolic above all for a company that is not publicly traded. The real stake is the concentration of resources: capital, engineers, strategic priorities, all under one roof to industrialize AI at scale. The vision, meanwhile, is classic Musk. In his communications, he goes as far as evoking compute centers in orbit: constellations of satellites capable of hosting computing power, fed by solar energy and deployed thanks to Starship's launch cadence. Space would become, in his words, "the cheapest way to produce AI computing power" within two or three years.

Outlets like The Verge note that this idea of orbital data centers comes up regularly in his rhetoric. But the consolidation also reinforces what some already call the "Muskonomy": a closed ecosystem in which X supplies the data, xAI the models, and SpaceX the infrastructure, with sensitive federal defense and aerospace contracts in the background. Hosted by Acast. Visit acast.com/privacy for more information.

Kinsella On Liberty
KOL481 | Haman Nature Hn 200: 200th Episode Livestream Celebration!

Kinsella On Liberty

Play Episode Listen Later Feb 8, 2026 168:29


Kinsella on Liberty Podcast: Episode 481. This is my appearance on Adam Haman's podcast and YouTube channel, Haman Nature (Haman Nature substack), a special 200th Episode Livestream Celebration! It features regular hosts Adam Haman and Tyrone (recorded Feb. 7, 2026; official episode: Replay of 200th Episode Livestream Celebration! | Hn 200). I appeared along with some other previous guests.

(( KOL478 | Haman Nature Hn 185: The Universal Principles of Liberty
KOL469 | Haman Nature Hn 149: Tabarrok on Patents, Price Controls, and Drug Reimportation
KOL461 | Haman Nature Hn 119: Atheism, Objectivism & Artificial Intelligence
KOL456 | Haman Nature Hn 109: Philosophy, Rights, Libertarian and Legal Careers
KOL432 | Haman Nature 0027: School Choice “Debate”
KOL425 | Haman Nature Ep. 4: Stephan Kinsella dismantles “intellectual” property
KOL423 | Haman Nature Ep. 2: Getting Argumentative ))

Shownotes and transcript below. Inspired by Jeffrey Tucker, I decided to dress up.

Adam's shownotes: This is a replay of the Feb. 7th, 2026 YouTube livestream of the Haman Nature 200th episode celebration event. Adam Haman and Tyrone the Porcupine Hobo were proud to be joined by Scott Horton, Stephan Kinsella, Doc Dixon, Brian O'Leary, Domenic Scarcella, Mark Maresca, Mark Puls, and Jason Lawler. Plus, fun, games, the premiere of a Haman Nature Records music video, and much more! Enjoy!

00:00 — Intro. Technology is hard, we have a very rough start, but perseverance pays off!
01:08 — Banter and brilliance from our special guests on the situation in Minneapolis, Minnesota.
37:88 — Debuting our new game: A Warmonger Says What?
48:50 — Adam slooooowwwlly leads us into a nice intermission.
55:53 — And we're back! Premiering "The Devil is a Democrat" music video by Haman Nature Records!
1:08:40 — Banter and brilliance from our special guests on the recent Epstein files dump.
1:39:10 — What? Two intermissions?!
1:45:09 — Adam makes a big podcasting "reveal"! Also, introducing our brand new series: "It's Always Anarchy in Philadelphia!", which leads into a brief discussion of economics — which is the point!
2:08:15 — Some closing banter, thoughts, comments, and testimonials. Plus, what's going on with Bitcoin, gold, and silver prices? Are these assets, or could they be money in the future?
1:10:55 — Outro. Thanks for watching Haman Nature, and here's to another 200 episodes!

Shownotes (Grok)

Opening & Technical Difficulties [3:02 – ~8:42]
Hosts Adam Haman and Tyrone struggle with StreamYard/YouTube live setup. Multiple failed starts, audio muting issues, and a full restart after realizing the stream isn't public. Guests (including Stephan Kinsella and Mark Maresca) briefly appear during troubleshooting.

Take Two – Official Welcome & Guest Introductions [~8:42 – ~17:00]
Successful restart. Adam and Tyrone celebrate episode 200 (take two). Guests introduced: Stephan Kinsella (dressed in full “libertard” regalia with Mises hat and pipe), Scott Horton, Mark Maresca (White Pill Box), Brian O'Leary (Natural Order podcast co-host), and later arrivals. Banter about episode counts, outfits, technical woes, and congratulations.

Minneapolis / ICE Raids / Immigration Discussion [~17:00 – ~38:00]
Tyrone (Minneapolis resident) gives local perspective on recent ICE incidents.
Guests share views:
Mark Maresca → white-pill take on accelerating public skepticism
Scott Horton → partisanship, new footage reinforcing biases, panic in police shootings
Stephan Kinsella → due process, nullification, decentralization, peaceful alternatives to force
Brian O'Leary → economic incentives over coercion
Heavy focus on Minneapolis events, state nullification, federal overreach, and libertarian principles.

Viewer Comments, Guest Rotations & Banter [~38:00 – ~1:00:00]
Reading sarcastic and positive YouTube comments from past episodes. Guests come and go (Scott Horton exits, Mark Puls / “Mark P” joins, Jason from If By Whiskey joins). More congratulations, plugs for guests' shows/Substacks, merch mentions (shop.humanature.com), and light roasting.

Game Segment: “A Warmonger Says What?” [~47:00 – ~1:00:00]
World premiere game. Panel (Stephan, Mark M, Mark P, Brian) guesses who said infamous political quotes. Chat players compete for $25 Haman Nature merch gift cards. Questions cover MTG, Trump/Biden gaffes, Rick Perry, Bernie/Obama/Hillary, etc. Winners announced later.

Break, Ads & Music Video World Premiere [~1:00:00 – ~1:16:00]
Short break with organic ads (Scott Horton Academy, Swan Brothers merch). World premiere of Haman Nature Records parody music video: “The Devil is a Republican” (Grok-rewritten Tom MacDonald-style lyrics set to music by Tyrone). Full performance played.

New Segment Debut: “It's Always Anarchy in Philadelphia” [~1:56:00 – ~2:19:00]
Brand new recurring segment announced. Uses clips from It's Always Sunny in Philadelphia to explain Austrian/Misesian economic concepts. First clip: an episode discussing couch rental interest, inflation, wages, and “nut.” Stephan Kinsella gives detailed breakdown: time preference, interest rates, monetary vs. price inflation, Fed manipulation, sound money, Bitcoin vs. gold, fractional reserve debates, free banking vs. Rothbardian views.

Closing Thanks, Final Comments & Sign-off [~2:19:00 – 2:48:00]
Guests give on-camera praise for the show (Mark Maresca, Brian O'Leary, Mark Puls, Domenic Scarcella, Stephan Kinsella). Brief Bitcoin/gold/silver/fiat collapse discussion. Final plugs, merch reminder, “The Devil is a Republican” video tease. Emotional thanks to guests and audience for 200 episodes. Ends with signature “Heat” send-off.

Total runtime ≈ 2 hours 45 minutes (including breaks and music video). Episode highlights: technical comedy, deep libertarian discussion, game debut, parody music video premiere, and first episode of the new economics-through-pop-culture segment.

Transcript (YouTube; Grok assist)

Haman Nature – 200th Episode Celebration (Full Compiled Corrected Transcript – From Beginning to End)
Spelling errors corrected, filler words like "uh" or "um" removed (without paraphrasing or altering meaning/structure), occasional topical descriptive headers added, speaker names when identifiable (or "[Unknown Speaker]" if not), timestamps after each header and speaker change. Names standardized: "Haman Nature" / "Adam Haman" / "Stephan Kinsella".

[3:02 – Opening Title & Initial Technical Chaos]

Intro Voiceover: Haman Nature, a journey in search of a peaceful and prosperous society with human nature as a guide. Led by your host Adam Haman.

[3:24] Adam Haman: Hello. Isn't technology just hilarious? I guess so.

[3:29] Tyrone: You got big plans for your 200th episode celebration and then all of a sudden nothing works.
[3:36] Adam Haman: I still don't see it on my YouTube, but if you see it on yours, I believe somebody is seeing it somewhere.

[3:42] Tyrone: Yes, somebody is seeing it somewhere. Well, I guess it's just going to be you and me.

[3:48] Adam Haman: This is kind of how the last one was. Well, hey, we made it. Congratulations, sir. Even if nobody's seeing this, I don't know.

[4:06] Tyrone: We do have a couple of guests waiting in the waiting room. Maybe they know. But first, we allow some of these bozos on to come celebrate with us. Congratulations, sir. It's number 200. I didn't know if we would make it. Cheers, my friend.

[4:23] Adam Haman: When we started this little project two years ago, can you believe that?

[4:29] Tyrone: It's crazy. Oh, you're getting dinged. Ding-donged. Well, welcome everybody to the fantastic, fabulous, supercalifragilistic 200th episode of Haman Nature.

[4:41] Adam Haman: Oh, wait. I can get my sound effects going. Yeah, I don't think we're live, my friend.

[4:47] Tyrone: Yeah, I don't think we are either.

[4:53] Adam Haman: Oh, this is just so silly. So Stephan and Mark, if you can hear us, apologies. Adam's a dumb [ __ ] when it comes to technology.

[5:07] Tyrone: Should we pop these fellas on here and just apologize to them? I mean, it's 12:12. Should we just cancel this whole nonsense?

[5:13] Adam Haman: No. Stephan Kinsella.

[5:19] Stephan Kinsella: Hi, Mark.

[5:19] Mark Maresca: Hey, guys. What's up? Congratulations, Adam.

[5:25] Adam Haman: Thank you. Hold it. We might have to redo this whole thing.

[5:25] Tyrone: Yeah, we're almost certainly going to have to redo this whole thing. I could show you. My YouTube studio thinks that we have a live stream.

[5:38] Adam Haman: It thinks it's happening.

[5:48] Tyrone: Oh, yeah. It thinks we've been going for 5 minutes, but nobody else thinks this.

[5:57] Adam Haman: Well, that's interesting.

[5:57] Tyrone: Mr. Kinsella, I know we've never met, but nice to meet you virtually, sir. I'm going to kiss your ass here in a second, but I kind of wanted to do it when we're actually going, so just pretend we've never seen each other five minutes prior to this. But I like the hat and the pipe. Very deerstalker. I'll start calling you Watson or something.

[6:16] Stephan Kinsella: Yeah, that is about the pipe. You look amazing.

[6:22] Tyrone: Oh, I can't hear you though. Are you muted? Who's muted, my friend?

[6:30] Mark Maresca: No, nobody's muted, but I can't hear Mark either. Mark, say something.

[6:30] Mark Maresca: Talking, talking.

[6:36] Tyrone: Okay, Stephan, I can't... you are... I can't hear stuff now.

[6:43] Adam Haman: Well, maybe nothing works. Maybe that's the StreamYard "let you pick the mic" thing and... Oh, how about now? How about now?

[6:51] Stephan Kinsella: Yeah. Yeah, you're correct. How about now? My mic was muted. My mic, my Yeti was muted.

[6:57] Tyrone: I don't see anything on YouTube Studio, Adam, saying anything's going.

[7:03] Adam Haman: Well, mine does.

[7:03] Tyrone: Really? Where are you? 7 seconds.

China Daily Podcast
English News | Spain and Greece Consider Banning Teenagers from Social Media

China Daily Podcast

Play Episode Listen Later Feb 8, 2026 3:15


Spain and Greece on Tuesday proposed bans on social media use by teenagers as attitudes hardened in Europe against technology some say is designed to be addictive.

Spain wants to prohibit social media for under-16s, Spain's Prime Minister Pedro Sanchez said. Greece is close to announcing a similar ban for children under 15, a senior government source said.

Sanchez said social media platform X has amplified disinformation over his administration's decision last week to regularize half a million undocumented workers and asylum seekers. The prime minister added his government would introduce a new bill to hold social media executives accountable for illegal and hateful content.

The measures drew fury from Elon Musk, the owner of X. He responded a few hours later, calling Sanchez "a traitor to the people of Spain". Representatives of Google, part of Alphabet, TikTok, Snapchat and Meta did not immediately respond to requests for comment.

Beyond boundaries

Spain and Greece join countries such as Britain and France in considering tougher stances on social media, after Australia in December became the first nation to prohibit access to such platforms for children younger than 16. Governments worldwide are looking at the impact of children's screen time on their development and mental well-being.

"Our children are exposed to a space they were never meant to navigate alone ... We will no longer accept that," Sanchez said at the World Governments Summit in Dubai.

Spain joins five other European countries that he dubbed the "coalition of the digitally willing" to coordinate and enforce cross-border regulation, Sanchez said, without naming the countries, which are set to hold their first meeting in the coming days. "We know that this is a battle that far exceeds the boundaries of any country," he said. His office did not respond to a request for clarification.

Legislation to ban children under 15 from social media is currently passing through France's parliament. Britain is also considering similar measures.

The recent explosion of AI-generated content, and public outcry over reports of Musk's Grok AI chatbot generating nonconsensual sexual images, including of minors, have fueled debate over the risks of such online content. Sanchez said prosecutors would explore ways to investigate possible legal infractions by Grok.

About 82 percent of Spanish respondents said they believed children under 14 should be banned from social media, according to a 30-country Ipsos poll on education published last August. That was up from 73 percent in 2024.

In Australia, social media companies deactivated nearly 5 million accounts belonging to teenagers within weeks of the ban taking effect, the internet regulator said last month, suggesting the measure could have a sweeping impact.

Key terms:
social media ban /ˈsoʊʃəl ˈmiːdiə bæn/
cross-border regulation /ˌkrɔːs ˈbɔːrdər ˌreɡjʊˈleɪʃən/
disinformation /ˌdɪsˌɪnfərˈmeɪʃən/
legal infractions /ˈliːɡl ɪnˈfrækʃənz/

Grumpy Old Geeks
732: We're Not In the Files!

Grumpy Old Geeks

Play Episode Listen Later Feb 7, 2026 76:06


In this week's FOLLOW UP, Bitcoin is down 15%, miners are unplugging rigs because paying eighty-seven grand to mine a sixty-grand coin finally failed the vibes check, and Grok is still digitally undressing men—suggesting Musk's “safeguards” remain mostly theoretical, which didn't help when X offices got raided in France. Spain wants to ban social media for kids under 16, Egypt is blocking Roblox outright, and governments everywhere are flailing at the algorithmic abyss.

IN THE NEWS, Elon Musk is rolling xAI into SpaceX to birth a $1.25 trillion megacorp that wants to power AI from orbit with a million satellites, because space junk apparently wasn't annoying enough. Amazon admits a “high volume” of CSAM showed up in its AI training data and blames third parties, Waymo bags a massive $16 billion to insist robotaxis are working, Pinterest reportedly fires staff who built a layoff-tracking tool, and Sam Altman gets extremely cranky about Claude's Super Bowl ads hitting a little too close to home.

For MEDIA CANDY, we've got Shrinking, the Grammys, Star Trek: Starfleet Academy's questionable holographic future, Neil Young gifting his catalog to Greenland while snubbing Amazon, plus Is It Cake? Valentines and The Rip.

In APPS & DOODADS, we test Sennheiser earbuds, mess with Topaz Video, skip a deeply cursed Python script that checks LinkedIn for Epstein connections, and note that autonomous cars and drones will happily obey prompt injection via road signs—defeated by a Sharpie.

IN THE LIBRARY, there's The Regicide Report, a brutal study finding early dementia signals in Terry Pratchett's novels, Neil Gaiman denying allegations while announcing a new book, and THE DARK SIDE WITH DAVE, vibing with The Muppet Show as Disney names a new CEO. We round it out with RentAHuman.ai dread relief via paper airplane databases, free Roller Coaster Tycoon, and Sir Ian McKellen on Colbert—still classy in the digital wasteland.

Sponsors:
DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
SquareSpace - go to squarespace.com/GRUMPY for a free trial. And when you're ready to launch, use code GRUMPY to save 10% off your first purchase of a website or domain.
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

Show notes at https://gog.show/732

FOLLOW UP
Bitcoin drops 15%, briefly breaking below $61,000 as sell-off intensifies, doubts about crypto grow
Bitcoin Is Crashing So Hard That Miners Are Unplugging Their Equipment
Grok, which maybe stopped undressing women without their consent, still undresses men
X offices raided in France as UK opens fresh investigation into Grok
Spain set to ban social media for children under 16
Egypt to block Roblox for all users

IN THE NEWS
Elon Musk Is Rolling xAI Into SpaceX—Creating the World's Most Valuable Private Company
SpaceX wants to launch a constellation of a million satellites to power AI needs
A potential Starlink competitor just got FCC clearance to launch 4,000 satellites
Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from
Waymo raises massive $16 billion round at $126 billion valuation, plans expansion to 20+ cities
Pinterest Reportedly Fires Employees Who Built a Tool to Track Layoffs
Sam Altman got exceptionally testy over Claude Super Bowl ads

MEDIA CANDY
Shrinking
Star Trek: Starfleet Academy
The Rip
Neil Young gifts Greenland free access to his music and withdraws it from Amazon over Trump
Is it Cake? Valentines

APPS & DOODADS
Sennheiser Consumer Audio IE 200 In-Ear Audiophile Headphones - TrueResponse Transducers for Neutral Sound, Impactful Bass, Detachable Braided Cable with Flexible Ear Hooks - Black
Sennheiser Consumer Audio CX 80S In-ear Headphones with In-line One-Button Smart Remote – Black
Topaz Video
Epstein
Autonomous cars, drones cheerfully obey prompt injection by road sign

AT THE LIBRARY
The Regicide Report (Laundry Files Book 14) by Charles Stross
Scientists Found an Early Signal of Dementia Hidden in Terry Pratchett's Novels
Neil Gaiman Denies the Allegations Against Him (Again) While Announcing a New Book

THE DARK SIDE WITH DAVE
Dave Bittner
The CyberWire
Hacking Humans
Caveat
Control Loop
Only Malware in the Building
The Muppet Show
Disney announces Josh D'Amaro will be its new CEO after Iger departs
A Database of Paper Airplane Designs: Hours of Fun for Kids & Adults Alike
Online (free!) version of Roller Coaster Tycoon.
Speaking of coasters, here's the current world champion.
I am hoping this is satire...
Sir Ian McKellen on Colbert.

CLOSING SHOUT-OUTS
Catherine O'Hara: The Grande Dame of Off-Center Comedy
Standing with Sam 'Balloon Man' Martinez

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Monde Numérique - Jérôme Colombain

This week, Monde Numérique examines a major turning point for artificial intelligence with the meteoric rise of autonomous agents. From Elon Musk's interplanetary tech to European energy sovereignty, a dizzying week of news.

Marketplace Tech
Bytes: Week in Review - SpaceX and xAI merge, Nvidia and OpenAI's funding relationship and U.S. TikTok's rough start

Marketplace Tech

Play Episode Listen Later Feb 6, 2026 10:25


On this week's “Marketplace Tech Bytes: Week in Review,” we take a look at Nvidia's changing investment relationship with OpenAI. Plus, a stormy start for the new U.S. version of TikTok. But first, SpaceX, one of the world's largest rocket companies, announced this week that it's buying xAI, a two-and-a-half-year-old artificial intelligence startup. Both companies are controlled by Elon Musk. The new company is reportedly valued at $1.25 trillion. It means the chatbot Grok, the satellite internet company Starlink, and the social media firm X are all going to co-exist under the same rocket hangar. Marketplace's Stephanie Hughes spoke with Paresh Dave, senior writer at Wired, about what adding these companies together equals.

Marketplace All-in-One
Bytes: Week in Review - SpaceX and xAI merge, Nvidia and OpenAI's funding relationship and U.S. TikTok's rough start

Marketplace All-in-One

Play Episode Listen Later Feb 6, 2026 10:25


On this week's “Marketplace Tech Bytes: Week in Review,” we take a look at Nvidia's changing investment relationship with OpenAI. Plus, a stormy start for the new U.S. version of TikTok. But first, SpaceX, one of the world's largest rocket companies, announced this week that it's buying xAI, a two-and-a-half-year-old artificial intelligence startup. Both companies are controlled by Elon Musk. The new company is reportedly valued at $1.25 trillion. It means the chatbot Grok, the satellite internet company Starlink, and the social media firm X are all going to co-exist under the same rocket hangar. Marketplace's Stephanie Hughes spoke with Paresh Dave, senior writer at Wired, about what adding these companies together equals.

Bill Whittle Network
Artificial ‘Intelligence'?

Bill Whittle Network

Play Episode Listen Later Feb 6, 2026 14:15


If you have spent any real time with Large Language Model (LLM) AI systems like Grok or ChatGPT, you probably know the pattern: “OMG this is AMAZING!” “Okay, not ‘amazing,' but still really useful!” “Can come in handy sometimes.” “I probably should double-check all of this.” “REALLY?”

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
The First Mechanistic Interpretability Frontier Lab — Myra Deng & Mark Bissell of Goodfire AI

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Play Episode Listen Later Feb 6, 2026 68:01


From Palantir and Two Sigma to building Goodfire into the poster child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn “peeking inside the model” into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual “SAEs are cool” take. We talk about Goodfire's core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data, and we post-train, RLHF, and fine-tune by “slurping supervision through a straw,” hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire's answer is to build a bi-directional interface between humans and models: read what's happening inside, edit it surgically, and eventually use interpretability during training so customization isn't just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic-to-real transfer, regulated domains, no access to sensitive data). We also get a live window into what “frontier-scale interp” means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features), plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and “pixel-space” world models.

We discuss:

* Myra + Mark's path: Palantir (health systems, forward-deployed engineering) → Goodfire early team; Two Sigma → Head of Product, translating frontier interpretability research into a platform and real-world deployments
* What “interpretability” actually means in practice: not just post-hoc poking, but a broader “science of deep learning” approach across the full AI lifecycle (data curation → post-training → internal representations → model design)
* Why post-training is the first big wedge: “surgical edits” for unintended behaviors like reward hacking, sycophancy, and noise learned during customization, plus the dream of targeted unlearning and bias removal without wrecking capabilities
* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about “clean concept spaces”
* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers, plus the gnarly constraints: no training on real customer PII, synthetic-to-real transfer, English + Japanese, and tokenization quirks
* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don't require hosting a second large model in the loop
* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live, finding features via SAE pipelines, auto-labeling via LLMs, and toggling a “Gen-Z slang” feature across multiple layers without breaking tool use
* Hallucinations as an internal signal: the case that models have latent uncertainty / “user-pleasing” circuitry you can detect and potentially mitigate more directly than black-box methods
* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)
* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge, up to and including early biomarker discovery work with major partners
* World models + “pixel-space” interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners
* The north star: moving from “data in, weights out” to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

Goodfire AI
* Website: https://goodfire.ai
* LinkedIn: https://www.linkedin.com/company/goodfire-ai/
* X: https://x.com/GoodfireAI

Myra Deng
* Website: https://myradeng.com/
* LinkedIn: https://www.linkedin.com/in/myra-deng/
* X: https://x.com/myra_deng

Mark Bissell
* LinkedIn: https://www.linkedin.com/in/mark-bissell/
* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps
00:00:00 Introduction
00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire
00:00:29 What is Goodfire? Mission and Focus on Interpretability
00:01:01 Goodfire's Practical Approach to Interpretability
00:01:37 Goodfire's Series B Fundraise Announcement
00:02:04 Backgrounds of Mark and Myra from Goodfire
00:02:51 Team Structure and Roles at Goodfire
00:05:13 What is Interpretability? Definitions and Techniques
00:07:29 Post-training vs. Pre-training Interpretability Applications
00:08:51 Using Interpretability to Remove Unwanted Behaviors
00:10:09 Grokking, Double Descent, and Generalization in Models
00:12:06 Subliminal Learning and Hidden Biases in Models
00:14:07 How Goodfire Chooses Research Directions and Projects
00:16:04 Limitations of SAEs and Probes in Interpretability
00:18:14 Rakuten Case Study: Production Deployment of Interpretability
00:21:12 Efficiency Benefits of Interpretability Techniques
00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model
00:25:15 How Steering Features are Identified and Labeled
00:26:51 Detecting and Mitigating Hallucinations Using Interpretability
00:31:20 Equivalence of Activation Steering and Prompting
00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques
00:36:04 Model Design and the Future of Intentional AI Development
00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems
00:40:51 Industry Applications and the Rise of Mechinterp in Practice
00:41:39 Interpretability for Code Models and Real-World Usage
00:43:07 Making Steering Useful for More Than Stylistic Edits
00:46:17 Applying Interpretability to Healthcare and Scientific Discovery
00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare
00:52:03 Call for Design Partners Across Domains
00:54:18 Interest in World Models and Visual Interpretability
00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability
01:00:14 Interpretability, Safety, and Alignment Perspectives
01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges
01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod.
We're back in the studio with our special MechInterp co-host, Vibhu. Welcome. And Mochi, our special co-host, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?

Myra Deng [00:00:29]: Yeah, it's a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, the next frontier, of safe and powerful AI models. That's our description right now, and I'm excited to dive more into the work we're doing to make that happen.

Shawn Wang [00:00:55]: Yeah. And there's always the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?

Mark Bissell [00:01:01]: Well, being an AI research lab that's focused on interpretability, there's obviously a lot that people think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places it can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know, it's a new field, so that hasn't been done all that much. And we're excited about actually seeing that put into practice.

Shawn Wang [00:01:37]: Yeah, I would say it wasn't too long ago that Anthropic was still putting out toy models of superposition and that kind of stuff. And I wouldn't have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then, not to bury the lede, today we're also announcing the fundraise, your Series B. $150 million at a 1.25B valuation. Congrats, Unicorn.

Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.

Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let's dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because Goodfire has some interesting health use cases. I don't know how related they are in practice.

Mark Bissell [00:02:22]: Yeah, not super related, but it was helpful context to know what it's like just to work with health systems and generally in that domain. Yeah.

Shawn Wang [00:02:32]: And Myra, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.

Myra Deng [00:02:37]: Did we overlap at all?

Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you're head of product. What are your respective roles, just to introduce people to what all gets done at Goodfire?

Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn't always that way. And as a technical lead on the healthcare team. And at Goodfire, I'm a member of the technical staff. And honestly, that I think is about as specific as I could describe myself, because I've worked on a range of things. And, you know, it's a fun time to be at a team that's still reasonably small. I think when I joined I was one of the first ten employees; now we're above 40, but still, there's always a mix of research and engineering and product and all of the above that needs to get done. And I think everyone across the team is pretty much a switch hitter in the roles they do. So I think you've seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I've been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform, flexing some of the MLE and developer skills as well.

Shawn Wang [00:03:53]: Very generalist. And you also had like a founding engineer type role.

Myra Deng [00:03:58]: Yeah, yeah. So I also started, and still am, a member of technical staff, and did a wide range of things from the very beginning, including finding our office space and all of this.

Shawn Wang [00:04:13]: Which we both visited when you had that open house thing. It was really nice.

Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.

Shawn Wang [00:04:15]: It looked like it has room for 200 people. But you guys are like 10.

Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like Mark, I spend a lot of my time as head of product. I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems, and how does that then translate into a platform that's repeatable, or a product, and working across the engineering and research teams to make that happen, and also communicating to the world: What is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.

Shawn Wang [00:05:01]: I love "what is" things because that's a very crisp starting point for people coming to a field. Vibhu, why don't you try tackling what is interpretability, and then they can correct us.

Vibhu Sapra [00:05:13]: Okay, great. So I think, just to kick off, it's a very interesting role to be head of product, right? Because you guys, at least as a lab, you're more of an applied interp lab, right? Which is pretty different than just normal interp, with a lot of background research. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you're trying to have an understanding of what's going on in the model, in the internals. There are different approaches to do that: you can do probing, SAEs, transcoders, all this stuff. But basically you have a hypothesis, something that you want to learn about what's happening in a model's internals, and then you're trying to solve that from there. You can do stuff like activation mapping. You can try to do steering. There's a lot of stuff that you can do, but the key question is, from input to output, we want to have a better understanding of what's happening and how we can adjust what's happening in the model internals. How'd I do?

Mark Bissell [00:06:12]: That was really good. I think that was great. I think it's also kind of a minefield: if you ask 50 people who quote-unquote work in interp "what is interpretability," you'll probably get 50 different answers. And to some extent that's also where Goodfire sits in the space. I think that we're an AI research company above all else. And interpretability is a set of methods that we think are really useful and worth specializing in, in order to accomplish the goals we want to accomplish. But I think we also see the goals as even broader, as almost the science of deep learning, and just taking a not-black-box approach to any part of the AI development life cycle, whether that means using interp for data curation while you're training your model, or for understanding what happened during post-training, or for understanding activations and internal representations, what is in there semantically. And then a lot of exciting updates that are also part of the fundraise around bringing interpretability to training, which I don't think has been done all that much before. A lot of this stuff is post-hoc poking at models, as opposed to actually using this to intentionally design them.

Shawn Wang [00:07:29]: Is this post-training or pre-training, or is that not a useful distinction?

Myra Deng [00:07:33]: Currently focused on post-training, but there's no reason the techniques wouldn't also work in pre-training.

Shawn Wang [00:07:38]: Yeah. It seems like it would be more applicable post-training, because basically I'm thinking of rollouts, or having different variations of a model that you can tweak with your steering. Yeah.

Myra Deng [00:07:50]: And I think in a lot of the news that you've seen on Twitter or wherever, you've seen a lot of unintended side effects come out of post-training processes: overly sycophantic models, or models that exhibit strange reward hacking behavior. I think these are extreme examples. There are also more mundane enterprise use cases where they try to customize or post-train a model to do something and it learns some noise, or it doesn't appropriately learn the target task. And a big question that we've always had is, how do you use your understanding of what the model knows and what it's doing to actually guide the learning process?

Shawn Wang [00:08:26]: Yeah, I mean, just to anchor this for people, one of the biggest controversies of last year was 4o GlazeGate. I've never heard of GlazeGate... I didn't know that was what it was called. They called it that on the blog post, and I was like, did OpenAI officially use that term? And I'm like, that's funny. But yeah, I guess the pitch is that if they had worked with Goodfire, they would have avoided it. You know what I'm saying?

Myra Deng [00:08:51]: I think so. Yeah. Yeah.

Mark Bissell [00:08:53]: I think that's certainly one of the use cases. I think the reason why post-training is a place where this makes a lot of sense is that a lot of what we're talking about is surgical edits. You want to be able to have expert feedback very surgically change how your model is doing, whether that is removing a certain behavior that it has. So, you know, one of the things that we've been looking at, another common area where you would want to make a somewhat surgical edit, is some of the models that have, say, political bias. Like you look at Qwen or R1 and they have sort of this CCP bias.

Shawn Wang [00:09:27]: Is there a CCP vector?

Mark Bissell [00:09:29]: Well, there are certainly internal parts of the representation space where you can sort of see where that lives. Yeah. And you want to, you know, extract that piece out.

Shawn Wang [00:09:40]: Well, I always say, whenever you find a vector, a fun exercise is just to make it very negative to see what the opposite of CCP is.

Mark Bissell [00:09:47]: The super-America, bald eagles flying everywhere. But yeah. So in general, lots of post-training tasks where you'd want to be able to do that. Whether it's unlearning a certain behavior or, you know, some of the other cases where this comes up... are you familiar with the grokking behavior? I mean, I know the machine learning term of grokking.

Shawn Wang [00:10:09]: Yeah.

Mark Bissell [00:10:09]: Sort of this double descent idea of having a model that is able to learn a generalizing solution: as opposed to, even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so another way that you can think about having surgical access to a model's internals would be: learn from this data, but learn in the right way, if there are many possible ways to do that.

Shawn Wang [00:10:41]: Can interp solve the double descent problem? Depends, I guess, on how you... Okay. So I viewed double descent as a problem because then you're like, well, if the loss curves level out, then you're done, but maybe you're not done. Right. But if you actually can interpret what is still changing, even though the loss is not changing, then maybe you can actually not view it as a double descent problem. And actually you're just sort of translating the space in which you view loss, and then you have a smooth curve. Yeah.

Mark Bissell [00:11:11]: I think that's certainly the domain of problems that we're looking to get at.

Shawn Wang [00:11:15]: Yeah. To me, double descent is like the biggest thing in ML research, where if you believe in scaling, then you need to know where to scale. But if you believe in double descent, then you don't believe in anything where anything levels off.

Vibhu Sapra [00:11:30]: I mean, also tangentially, when you talk about the China vector, right, there's the subliminal learning work. It was from the Anthropic fellows program, where basically you can have hidden biases in a model. And as you distill down, as you train on distilled data, those biases always show up, even if you explicitly try to not train on them. So it's just another use case of: if we can interpret what's happening in post-training, can we clear some of this? Can we even determine what's there? Because yeah, it's just some worrying research that's out there that shows we really don't know what's going on.

Mark Bissell [00:12:06]: That is... yeah, I think that's the biggest sentiment that we're hoping to tackle. Nobody knows what's going on. Right. Subliminal learning is just an insane concept when you think about it. Train a model on, not even the logits, literally the output text of a bunch of random numbers, and now your model loves owls. And you see behaviors like that, that just defy intuition. And there are mathematical explanations that you can get into, but...

Shawn Wang [00:12:34]: It feels so early days. Objectively, there are sequences of numbers that are more owl-like than others. There should be.

Mark Bissell [00:12:40]: According to certain models, right. It's interesting. I think it only applies to models that were initialized from the same starting seed. Usually, yes.

Shawn Wang [00:12:49]: But I mean, I think that's a cheat code because there's not enough compute. But if you believe in platonic representation, probably it will transfer across different models as well. Oh, you think so?

Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed. There's something that is path dependent from that seed that might cause certain overlaps in the latent space, and then doing this distillation pushes it towards having certain other tendencies.

Vibhu Sapra [00:13:24]: Got it. I think there's a bunch of these open-ended questions, right? Like you can't train in new stuff during the RL phase, right? RL only reorganizes weights, and you can only do stuff that's somewhat there in your base model. You're not learning new stuff. You're just reordering chains and stuff. But okay, my broader question is: when you guys work at an interp lab, how do you decide what to work on, and what's kind of the thought process? Because we can ramble for hours: okay, I want to know this, I want to know that. But how do you concretely... you know, what's the workflow? There are approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, okay, is this going anywhere? Do we have set stuff? Just, you know, if you can help me with all that. Yeah.

Myra Deng [00:14:07]: It's a really good question. I feel like we've always, from the very beginning of the company, thought about: let's go and try to learn what isn't working in machine learning today. Whether that's talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really falling apart today, and then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are, and then taking that back and trying to apply the current state of the art to those problems, and then seeing where they fall down, basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So on the fundamental side, for instance, when we have done some work applying SAEs and probes, we've encountered some shortcomings in SAEs that we found a little bit surprising, and so have gone back to the drawing board and done work on that. And then, you know, we've done some work on better foundational interpreter models. And a lot of our team's research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to control and design of models, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning, and so went back to the drawing board: how do we make that not the case, and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined... actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out what is the research that enables us to actually do this in a much more powerful, robust way. So yeah, the answer is: look at real world problems, try to translate that into a research agenda, and then hill climb on both of those at the same time.

Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we're going to go into in a sec. But I always want to double click when you drop hints like "we found some problems with SAEs." Okay, what are they? You know, and then we can go into the demo. Yeah.

Myra Deng [00:16:19]: I mean, I'm curious if you have more thoughts here as well, because you've done it in the healthcare domain. But I think, for instance, when we do things like trying to detect behaviors within models that are harmful, or behaviors that a user might not want to have in their model (so hallucinations, for instance, harmful intent, PII, all of these things), we first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we've seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I'm not down on SAEs at all; I think there are many, many things they're useful for. But we have definitely run into cases where the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual real world downstream performance metrics.

Mark Bissell [00:17:34]: Fair enough. Yeah. It's the blessing and the curse of unsupervised methods, where you get to peek into the AI's mind, but sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think an SAE based approach actually did prove to be the most generalizable?

Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reason it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we got more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we've had good data sets, it hasn't been the case.

Shawn Wang [00:18:14]: And just because you named Rakuten, and I don't know if we'll get another chance: what is Rakuten's usage, or production usage?
Myra Deng [00:18:25]: Yeah. So they are using us to essentially guardrail and inference-time monitor their language model usage and their agent usage, to detect things like PII so that they don't route private user information. And so that's going through all of their user queries every day. That's something that we deployed with them a few months ago, and now we are actually exploring very early partnerships, not just with Rakuten but with other people, around how we can help with potentially training and customization use cases as well. Yeah.

Shawn Wang [00:19:03]: And for those who don't know, Rakuten is, I think, the number one or number two e-commerce store in Japan. Yes. Yeah.

Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of what it looks like to deploy things in practice that you don't always think about when you're doing research tasks. So some of the stuff that came up there is more complex than your idealized version of a problem. They were encountering things like synthetic-to-real transfer of methods: they couldn't train probes, classifiers, things like that on actual customer PII data. So what they had to do is use synthetic data sets and then hope that that transfers out of domain to real data sets. So we can evaluate performance on the real data sets, but not train on customer PII. That right off the bat is a big challenge. You have multilingual requirements: this needed to work for both English and Japanese text, and Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs and had us pulling our hair out. And then also, on a lot of tasks you might make simplifying assumptions if you're treating it as the easiest version of the problem, just to get general results, where maybe you say you're classifying a sentence: does this contain PII? But the need that Rakuten had was token-level classification, so that you could precisely scrub out the PII. So as we learned more about the problem, speaking of what that looks like in practice, a lot of assumptions end up breaking. And that was just one instance where a problem that seems simple right off the bat ends up being more complex as you keep diving into it.

Vibhu Sapra [00:20:41]: Excellent. One of the things that's also interesting with interp is that a lot of these methods are very efficient, right? You're just looking at a model's internals itself, compared to a separate guardrail, an LLM as a judge, a separate model: one, you have to host it; two, there's a whole latency cost; if you use a big model, you have a second call. Some of the work around self-detection of hallucination is also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, that's just another thing people should consider.

Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. It's no extra latency, really.

Shawn Wang [00:21:17]: Excellent. You have the steering demos lined up, so we'll just kind of see what you got. I don't actually know if this is like the latest, latest or like an alpha thing.

Mark Bissell [00:21:26]: No, this is a pretty hacky demo from a presentation that someone else on the team recently gave, so this will give a sense for the technology. So you can see the steering in action.
Honestly, I think the biggest thing that this highlights is that as we've been growing as a company and taking on more and more ambitious versions of interpretability-related problems, a lot of that comes down to scaling up in various different forms. And so here you're going to see steering on a 1-trillion-parameter model. This is Kimi K2. And so it's sort of fun that, in addition to the research challenges, there are engineering challenges that we're now tackling, 'cause for any of this to be useful in production, you need to be thinking about what it looks like when you're using these methods on frontier models as opposed to toy model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it's quite a fun demo. So screen sharing is on. I've got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we've got running to point at our custom-hosted Kimi model. And then on the right is a setup that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. The CLI is running locally, but the Kimi server is running back at the office. Well, hopefully; it's too much to run on this Mac. I think it takes a full H100 node. You can run it on eight GPUs, eight H100s. So, yeah, Kimi's running. We can ask it a prompt. It's got a forked version of the SGLang code base that we've been working on. So I'm going to tell it: hey, this SGLang code base is slow, I think there's a bug, can you try to figure it out? It's a big code base, so it'll spend some time doing this. And then on the right here, I'm going to initialize, in real time, some steering. Let's see here.Mark Bissell [00:23:33]: Searching for any bugs. Feature ID 43205.Shawn Wang [00:23:38]: Yeah.Mark Bissell [00:23:38]: 20, 30, 40. So this is basically a feature that we found inside Kimi that seems to cause it to speak in Gen Z slang. And so on the left, it's still thinking normally. It might take, I don't know, 15 seconds for this to kick in, but then we're hopefully going to start seeing things like "this code base is massive, for real." We're going to see Kimi transition, as the steering kicks in, from normal Kimi to Gen Z Kimi, both in its chain of thought and its actual outputs.Mark Bissell [00:24:19]: And interestingly, you can see it's still able to call tools and stuff. It's purely its demeanor. And there are other features that we found for interesting things, like concision (that's a more practical one; you can make it more concise) or the programming languages it uses. But yeah, here it comes. Pretty good outputs.Shawn Wang [00:24:43]: "Scheduler code is actually wild."Vibhu Sapra [00:24:46]: "Yo, this code is actually insane, bro."Vibhu Sapra [00:24:53]: What's the process of training an SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about autonomous interp, something about how agents for interp are different than coding agents. I don't know, while this is spinning up: how do we find feature 43205?
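(Before Mark answers: for readers wondering what "initializing some steering" amounts to mechanically, here is a minimal sketch. It adds a fixed direction to one layer's output during the forward pass, scaled by a live-adjustable strength. The tiny model and random vector are stand-ins; in the demo the direction would be a labeled feature direction, like 43205, applied inside the served Kimi K2.)

```python
# Real-time steering sketch: shift one layer's activations by a fixed
# direction via a forward hook. Weights are untouched; only the "water
# flowing through the pipes" changes.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 64
model = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

steer_vec = torch.randn(d)             # stand-in for a labeled feature direction
steer_vec /= steer_vec.norm()
alpha = 0.0                            # steering strength, adjustable live

def steering_hook(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output.
    return output + alpha * steer_vec

handle = model[0].register_forward_hook(steering_hook)

x = torch.randn(1, d)
print("unsteered output norm:", model(x).norm().item())
alpha = 8.0                            # "turn the knob" mid-session
print("steered output norm:  ", model(x).norm().item())
handle.remove()
```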
Yeah.Mark Bissell [00:25:15]: So in this case, our platform that we've been building out for a long time now supports all the classic out-of-the-box interp techniques that you might want to have, like SAE training, probing, things of that kind. I'd say the techniques for vanilla SAEs are pretty well established now: you take the model that you're interpreting, run a whole bunch of data through it, gather activations, and then it's a pretty straightforward pipeline to train an SAE. There are a lot of different varieties: TopK SAEs, BatchTopK SAEs, normal ReLU SAEs. And then once you have your sparse features, to your point, assigning labels to them to actually understand that this is a Gen Z feature, that's where a lot of the magic happens. And the most basic standard technique is: look at all of the input data set examples that cause this feature to fire most highly, and then you can usually pick out a pattern. So for this feature, if I've run a diverse enough data set through my model, feature 43205 probably tends to fire on all the tokens that sound like Gen Z slang. And so, you know, you could have a human go through all 43,000 concepts, andVibhu Sapra [00:26:34]: And I've got to ask the basic question, you know: can we get examples where it hallucinates, pass them through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?Myra Deng [00:26:51]: Oh, wow. You really predicted a project we're already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucination is something that's very hard to detect. It's kind of a hairy problem and something that black-box methods really struggle with. Whereas Gen Z, you could always train a simple classifier to detect that; hallucination is harder. But we've seen that models internally have some awareness of uncertainty, or some sort of user-pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that's trying to detect that accurately, and then also working on mitigating the hallucinatory behavior in the model itself as well.Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of, oh, I would just turn temperature to zero and that turns off hallucination. And I'm like, well, that's a fundamental misunderstanding of how this works. Yeah.Mark Bissell [00:27:51]: Although, part of what I like about that question is that there are SAE-based approaches that might help you get at that. But oftentimes the beauty of SAEs, and like we said, the curse, is that they're unsupervised. So when you have a behavior that you deliberately would like to remove, and that's more of a supervised task, often it is better to use something like probes and specifically target the thing that you're interested in reducing, as opposed to hoping that when you fragment the latent space, one of the vectors that pops out is the one you need.Vibhu Sapra [00:28:20]: And as much as we're training an autoencoder to be sparse, we're not for sure certain that, you know, we will get something that just correlates to hallucination. You'll probably split that up into 20 other things, and who knows what they'll be.Mark Bissell [00:28:36]: Of course. Right. Yeah.
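(A compressed sketch of the pipeline Mark just described: cache activations, train a TopK sparse autoencoder on them, then "label" a feature by looking at its top-activating examples. Dimensions, data, and the training loop below are all toy-scale stand-ins; with text data, the final step would surface the actual tokens the feature fires on.)

```python
# SAE pipeline sketch: (1) train a TopK SAE on cached activations,
# (2) inspect a feature's top-activating examples to label it.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_sae, k = 128, 1024, 16
acts = torch.randn(10000, d_model)               # stand-in cached activations

W_enc = nn.Parameter(torch.randn(d_model, d_sae) * d_model**-0.5)
W_dec = nn.Parameter(torch.randn(d_sae, d_model) * d_sae**-0.5)
b_enc = nn.Parameter(torch.zeros(d_sae))
opt = torch.optim.Adam([W_enc, W_dec, b_enc], lr=1e-3)

def encode(x):
    pre = x @ W_enc + b_enc
    top = torch.topk(pre, k, dim=-1)             # keep only k largest pre-activations
    return torch.zeros_like(pre).scatter_(-1, top.indices, torch.relu(top.values))

for step in range(200):                          # toy reconstruction training loop
    batch = acts[torch.randint(0, len(acts), (256,))]
    loss = (encode(batch) @ W_dec - batch).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Labeling step: see which inputs fire a chosen feature hardest. With text,
# you would read off those tokens (or hand them to an agent) to name it.
feature_id = 205
with torch.no_grad():
    scores = encode(acts)[:, feature_id]
print("top-activating example indices:", scores.topk(5).indices.tolist())
```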
So there are, you know, problems with feature splitting and feature absorption. And then there are the off-target effects, right? Ideally, you would want to be very precise: if you reduce the hallucination feature, maybe suddenly your model can't write creatively anymore. And maybe you don't like that; you want to still stop it from hallucinating facts and figures.Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we'll put in the show notes. But, I guess, just because your demo is done: any other things that you want to highlight or any other interesting features you want to show?Mark Bissell [00:29:07]: I don't think so. Yeah. Like I said, this is a pretty small snippet. I think the main point here that I find exciting is that there's not a whole lot of interp being applied to models quite at this scale. You know, Anthropic certainly has some research, and other teams as well. But it's nice to see these techniques being put into practice. I think not that long ago, the idea of real-time steering of a trillion-parameter model would have sounded crazy.Shawn Wang [00:29:33]: Yeah. The fact that it's real time: you started the thing and then you edited the steering vector.Vibhu Sapra [00:29:38]: I think it's an interesting one. TBD what the actual production use case would be for the real-time editing; that's the fun part of the demo, right? You can kind of see how this could be served behind an API, right? You only have so many knobs and you can just tweak it a bit more. And I don't know how it plays in. People haven't done that much with, like, how does this work with or without prompting? How does this work with fine-tuning? There's a whole hype of continual learning, right? So there's just so much to see. Is this another parameter we just kind of leave at a default and don't use? So I don't know. Maybe someone here wants to put out a guide on how to use this with prompting, and when to do what.Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation I think you would love, from Ekdeep on our team, who is an amazing researcher; I just can't say enough amazing things about Ekdeep. He actually has a paper, as well as some others from the team and elsewhere, that goes into the essential equivalence of activation steering and in-context learning. He thinks of everything in a cognitive-neuroscience, Bayesian framework, but basically you can precisely show how prompting, in-context learning and steering exhibit similar behaviors, and even get quantitative about the magnitude of steering you would need to induce a certain amount of behavior, similar to certain prompting, even for things like jailbreaks. It's a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.Myra Deng [00:31:20]: And so, like, formally equivalent actually, in the limit. Right.Mark Bissell [00:31:24]: So one case study of this is jailbreaks. I don't know, have you seen the stuff where you can do many-shot jailbreaking? You flood the context with examples of the behavior.
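(Mechanically, the "turn hallucination down" idea from a moment ago can look like subtracting a feature's decoder direction from the residual stream, scaled by how strongly it fires. The sketch below uses toy vectors and a tied encoder/decoder for brevity. The caveats from the discussion apply: the concept may be split across many features, and ablation can have exactly the off-target effects Mark describes.)

```python
# Ablation sketch: remove one feature's contribution from the residual stream.
import torch

torch.manual_seed(0)
d_model = 64
resid = torch.randn(d_model)                 # residual stream at one position
dec_dir = torch.randn(d_model)
dec_dir /= dec_dir.norm()                    # unit decoder direction
enc_dir = dec_dir.clone()                    # tied encoder/decoder, for brevity

act = torch.relu(resid @ enc_dir)            # how strongly the feature fires
resid_ablated = resid - act * dec_dir        # subtract its contribution
print("feature activation before:", act.item())
print("feature activation after: ", torch.relu(resid_ablated @ enc_dir).item())
```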
And Anthropic put out that paper.Shawn Wang [00:31:38]: A lot of people were like, yeah, we've been doing this, guys.Mark Bissell [00:31:40]: Like, yeah, what's in this in-context learning and activation steering equivalence paper is that you can predict the number of examples that you will need to put in there in order to jailbreak the model. That's cool. By doing steering experiments and using this sort of equivalence mapping. That's cool. That's really cool. It's very neat. Yeah.Shawn Wang [00:32:02]: I was going to say, you know, I can back-rationalize that this makes sense, because what context is, is basically just updating the KV cache, and then every next-token inference is still the sum of everything all the way up, plus all the context to date. And you could, I guess, theoretically replace that with your steering. The only problem is steering is typically on one layer, maybe three layers like you did. So it's not exactly equivalent.Mark Bissell [00:32:33]: Right, right. You need to get precise about how you define steering and how you're modeling the setup. But yeah, I've got the paper pulled up here. Belief dynamics reveal the dual nature... yeah, the title is Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. So Eric Bigelow and Dan Wurgaft, who are doing fellowships at Goodfire; Ekdeep's the final author there.Myra Deng [00:32:59]: I think, actually, to your question of what the production use case of steering is: maybe just think one level beyond steering as it is today. Imagine if you could adapt your model to be, you know, an expert legal reasoner, almost in real time, very quickly and efficiently, using human feedback, or using your semantic understanding of what the model knows and where it knows that behavior. While it's not clear what the product is at the end of the day, it's clearly very valuable. Thinking about what the next interface for model customization and adaptation is, is a really interesting problem for us. We have heard from a lot of people actually interested in fine-tuning and RL for open-weight models in production. And so people are using things like Tinker or other open-source libraries to do that, but it's still very difficult to get models fine-tuned and RL'd for exactly what you want them to do unless you're an expert at model training. And so that's something we'reShawn Wang [00:34:06]: looking into. Yeah. So, Tinker from Thinking Machines famously uses rank-one LoRA. Is that basically the same as steering? Like, you know, what's the comparison there?Mark Bissell [00:34:19]: Well, in that case, you are still applying updates to the parameters, right?Shawn Wang [00:34:25]: Yeah. You're not touching the base model. You're touching an adapter. It's kind of, yeah.Mark Bissell [00:34:30]: Right. But I guess it still is more in parameter space then. Maybe it's like: are you modifying the pipes, or are you modifying the water flowing through the pipes to get what you're after? That's maybe one way to think of it.Mark Bissell [00:34:44]: I like that analogy.
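(One simple way to see the correspondence being discussed: estimate the activation shift that in-context examples induce as a difference in means, then apply that shift directly as a steering vector. The toy model and "prompts" below are stand-ins; the paper referenced above is what makes the mapping quantitative, e.g. how much steering magnitude corresponds to how many shots.)

```python
# ICL-vs-steering sketch: a difference-in-means "ICL direction" applied as a
# steering vector approximates the activations produced by in-context examples.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 32
layer = nn.Linear(d, d)

def hidden(prompt_vec):                      # stand-in for one layer's activation
    return torch.tanh(layer(prompt_vec))

base_prompts = torch.randn(100, d)           # prompts without demonstrations
icl_prompts = base_prompts + 0.8             # same prompts with few-shot context

# Mean activation difference estimates the direction the demonstrations induce.
icl_direction = (hidden(icl_prompts) - hidden(base_prompts)).mean(0)

steered = hidden(base_prompts) + icl_direction
gap = (steered - hidden(icl_prompts)).norm() / hidden(icl_prompts).norm()
print("relative gap between steered and true ICL activations:", gap.item())
```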
That's my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we're very focused on. And just the fact that, I hope we look back at how we're currently training and post-training models and think what a primitive way of doing it that was. Like, there's no intentionality.Shawn Wang [00:35:06]: Really, it's just data, right? The only thing in control is what data we feed in.Mark Bissell [00:35:11]: So Dan from Goodfire likes to use this analogy. You know, he has a couple of young kids, and he talks about: what if I could only teach my kids how to be good people by giving them cookies or, you know, giving them a slap on the wrist if they do something wrong? Not telling them why it was wrong, or what they should have done differently, or anything like that. Just figure it out. Right. Exactly. So that's RL. Yeah. Right. And, you know, it's sample-inefficient. What do they say? It's like sparse feedback, sparse supervision. Right. And so you'd like to get to the point where you can have experts giving feedback to their models that gets internalized, and steering is an inference-time way of getting at that idea. But ideally you're moving to a world where it is much more intentional design, in perpetuity, for these models.Vibhu Sapra [00:36:04]: Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question was: you're at a research lab that does model training, foundation models, and you're on an interp team. How does it tie back? Right? Do ideas come from the pre-training team? Do they go back? So for those interested, you can watch that. There wasn't too much of a connect there, but it's still something they want to push for down the line.Mark Bissell [00:36:33]: It can be useful for all of the above. Like, there are certainly post-hoc use cases where it doesn't need to touch that.Vibhu Sapra [00:36:39]: I think the other thing a lot of people forget is this stuff isn't too computationally expensive, right? I would say, if you're interested in getting into research, mech interp is one of the most approachable fields. A lot of this, train an SAE, train a probe, this stuff: the budget for it is low, and there's already a lot done. There's a lot of open-source work. You guys have done some too. Um, you know,Shawn Wang [00:37:04]: There's like notebooks from the Gemma team, from Neel Nanda: like, this is how you do it. Just step through the notebook.Vibhu Sapra [00:37:09]: Even if you're not that technical with any of this, you can still make progress there. You can look at different activations. But if you do want to get into training this stuff, correct me if I'm wrong, it's like in the thousands of dollars; it's not that high-scale. And then same with applying it: doing it for post-training or all this stuff is fairly cheap on the scale of, okay, I want to get into model training but I don't have compute for pre-training-type stuff. So it's a very nice field to get into. And also there are a lot of open questions, right? Some of them have to do with, okay, I want a product, I want to solve this. There's also just a lot of open-ended stuff that people could work on.
That's interesting, right? I don't know if you guys have any calls for, like: what are the open questions, what's open work that you'd either collaborate on or would just like to see solved, for people listening who want to get into mech interp, because people always talk about it. What are the things they should check out? Or, of course, join you guys as well; I'm sure you're hiring.Myra Deng [00:38:09]: There's a paper, I think from, was it Lee Sharkey? It's Open Problems in Mechanistic Interpretability, which I recommend everyone who's interested in the field read. It's just a really comprehensive overview of what experts in the field think are the most important problems to be solved. I also think, to your point, it's been really, really inspiring to see a lot of young people getting interested in interpretability. Actually, not just young people, but also scientists who have been experts in physics for many years, or in biology or things like this, transitioning into interp, because the barrier to entry is, you know, in some ways low, and there's a lot of information out there and ways to get started. There's this anecdote of professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how exciting the field is, how fast it's moving, how quick it is to get started, and things like that.Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there's an open mech interp Slack channel. People are always posting questions, and folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the Open Problems paper is a really good one.Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What's the acronym for? ML Alignment and Theory Scholars? It's like the...Vibhu Sapra [00:39:40]: Normally summer-internship style.Myra Deng [00:39:42]: Yeah, but they've been doing it year-round now. And actually a lot of our full-time staff have come through that program. It's great for anyone who is transitioning into interpretability. There are a couple of other fellows programs; we do one, as does Anthropic. And so those are great places to get started if anyone is interested.Mark Bissell [00:40:03]: Also, I think interp has been seen as a research field for a very long time, but engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere too, as it does scale up.Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? In the London office. And I'm adding our first-ever mech interp track at AI Engineer Europe, because I see these industry applications now emerging. And I'm pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It'll effectively be the first industry mech interp conference. Yeah. I'm so glad you added that. You know, it's still a little bit of a bet; it's not that widespread. But I can definitely see this is the time to really get into it. We want to be early on things.Mark Bissell [00:40:51]: For sure. And I think the field understands this, right?
So at ICML, I think the title of the mech interp workshop this year was Actionable Interpretability. And there was a lot of discussion around bringing it to various domains. Everyone's adding pragmatic, actionable, whatever.Shawn Wang [00:41:10]: It's like, okay, well, we weren't actionable before, I guess. I don't know.Vibhu Sapra [00:41:13]: And I mean, just being in Europe, you see the interp room at old-school conferences. I think they had a very tiny room till they got lucky and got it doubled. But there's definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered a paper last week from, like, two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah.Shawn Wang [00:41:39]: Yeah. I think people haven't really mentioned this yet: interp for code. I think it's an abnormally important field. The conspiracy theory two years ago, when the first SAE work came out of Anthropic, was that they would do like, oh, we just used SAEs to turn the bad-code vector down and then turn up the good code. And I think, isn't that the dream? I guess, why is it funny? If it were realistic, it would not be funny; it would be like, no, actually, we should do this. But it's funny because we feel there are some limitations to what steering can do. And I think a lot of the public image of steering is the Gen Z stuff: oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To be a legal reasoner seems like a huge stretch. Yeah. And I don't know if it will get there this way. Yeah.Myra Deng [00:42:36]: I will say we are announcing something very soon that I will not speak too much about. But, yeah, this is what we've run into again and again: we don't want to be in the world where steering is only useful for stylistic things. That's definitely not what we're aiming for. But the types of interventions that you need to do to get to things like legal reasoning are much more sophisticated and require breakthroughs in learning algorithms. And that's, um...Shawn Wang [00:43:07]: And is this an emergent property of scale as well?Myra Deng [00:43:10]: I think so. Yeah. I mean, scale definitely helps. Scale allows you to learn a lot of information and reduce noise across large amounts of data. But we also think there are ways to do things much more effectively, even at scale: actually learning exactly what you want from the data, and not learning things that you don't want exhibited in the data. So we're not anti-scale, but we are also realizing that scale alone is not going to get us to the type of AI development that we want to be at in the future, as these models get more powerful and get deployed in all these sorts of mission-critical contexts. The current life cycle of training, deploying and evaluating models is, to us, deeply broken and has opportunities to improve. So, more to come on that very, very soon.Mark Bissell [00:44:02]: And I think that's basically a proof point that these concepts do exist.
Like, if you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse-grained peek at what that looks like. But I think it's evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.Myra Deng [00:44:30]: There were, like, bad-code features. I've got it pulled up.Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.Shawn Wang [00:44:35]: This is, like, exactly it.Vibhu Sapra [00:44:38]: There's specifically a code-error feature that activates, and they show, you know, it's not typo detection: it's typos in code, not typical typos. And you can see it clearly activates where there's something wrong in code. And they have malicious code, code error, a whole bunch of broken-down, fine-grained sub-features. Yeah.Shawn Wang [00:45:02]: Yeah. So the rough intuition for me, why I talked about post-training, was that, well, you just have a few different rollouts with all these things turned off and on and whatever, and then that's synthetic data you can kind of post-train on. Yeah.Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is, just saying it; you know, they do the real hard work.Myra Deng [00:45:19]: I mean, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in our Llama models as well. I remember there was, like...Vibhu Sapra [00:45:26]: And I think a lot of this stuff is open, right? You guys opened yours. DeepMind has opened a lot of SAEs on Gemma. Even Anthropic has opened a lot of this. There are a lot of resources that, you know, we can probably share for people who want to get involved.Shawn Wang [00:45:41]: Yeah. And a special shout-out to Neuronpedia as well. Yes. Yeah, an amazing piece of work for visualizing these things.Myra Deng [00:45:49]: Yeah, exactly.Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit onto the healthcare side, because I think that's a big use case for you guys and we haven't really talked about it yet. This is a bit of a crossover for me, because we do have a separate science pod that we're starting up for AI for science, just because it's such a huge investment category and also I'm less qualified to do it; we actually have bio PhDs to cover that, which is great. But I need to just kind of recap your work, maybe on the Evo 2 stuff, and then build forward.Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation: I think another interesting lens on interpretability in general is that a lot of the techniques we described are ways to solve the AI-human interface problem. And bidirectional communication is the goal there. So what we've been talking about with intentional design of models, you know, steering, but also more advanced techniques, is having humans impart our desires and control into models and over models. And the reverse is also very interesting, especially as you get to superhuman models, whether that's narrow superintelligence, like these scientific models that work on genomics data, medical imaging, things like that, but down the line, you know, superintelligence of other forms as well.
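(Picking up Shawn's rollout intuition from just above, before the conversation turns to healthcare: a toy sketch of sampling generations with a "bad code" feature suppressed versus amplified, and keeping the suppressed-side outputs as synthetic post-training data. Everything here, the model, the direction, and the filter, is a hypothetical stand-in, not anyone's actual method; a real setup would hook a served model and score rollouts with a proper evaluator.)

```python
# Steered-rollout sketch: generate with a feature turned down vs. up, keep the
# "clean" side as synthetic data for post-training.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, vocab = 32, 50
model = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, vocab))
bad_code_dir = torch.randn(d)
bad_code_dir /= bad_code_dir.norm()          # stand-in "bad code" direction

def rollout(x, alpha):
    h = model[1](model[0](x)) + alpha * bad_code_dir  # steer the hidden state
    return model[2](h).argmax(-1)                     # greedy "token" per prompt

prompts = torch.randn(8, d)
clean = rollout(prompts, alpha=-4.0)         # feature suppressed
dirty = rollout(prompts, alpha=+4.0)         # feature amplified
keep = clean[clean != dirty]                 # toy filter: where steering mattered
print("synthetic training 'tokens':", keep.tolist())
```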
What knowledge can the AIs teach us? That's the other direction there. And so some of our life-science work to date has been getting at exactly that question. Some of it does look like debugging these various life-science models: understanding if they're actually performing well on tasks, or if they're picking up on spurious correlations. For instance, with genomics models, you would like to know whether they are focusing on the biologically relevant things that you care about, or if they're using some simpler correlate, like the ancestry of the person they're looking at. But then also, in the instances where they are superhuman, and maybe they are understanding elements of the human genome that we don't have names for, or specific discoveries that they've made that we don't know about, that's a big goal. And so we're already seeing that, right? We are partnered with organizations like Mayo Clinic, a leading research health system in the United States, Arc Institute, as well as a startup called Prima Mente, which focuses on neurodegenerative disease. And in our partnership with them, we've taken foundation models they've been training and applied our interpretability techniques to find novel biomarkers for Alzheimer's disease. So I think this is just the tip of the iceberg, but that's a flavor of some of the things that we're working on.Shawn Wang [00:48:36]: Yeah, I think that's really fantastic. Obviously, we did the Chan Zuckerberg pod last year as well. And there's a plethora of these models coming out, because there's so much potential in research. And it's very interesting how it's basically the same as language models, just with a different underlying data set. It's the same exact techniques. There's no change, basically.Mark Bissell [00:48:59]: Yeah. Well, and even in other domains, right? Like robotics: I know a lot of the companies just use Gemma as the backbone, and then they make it into a VLA that takes these actions. It's transformers all the way down. So yeah.Vibhu Sapra [00:49:15]: Like, we have MedGemma now, right? Even this week there was MedGemma 1.5. And they're training it on this stuff, like 3D scans, medical domain knowledge, and all that too. So there's a push from both sides. But one of the things about mech interp is that you're a little bit more cautious in some domains, right? Healthcare mainly being one: guardrails, understanding. We're more risk-averse to something going wrong there. So even just from a basic-understanding standpoint: if we're trusting these systems to make claims, we want to know why and what's going on.Myra Deng [00:49:51]: Yeah, I think there's totally a kind of deployment bottleneck to actually using foundation models for real patient usage and things like that. Like, say you're using a model for rare-disease prediction: you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that's definitely a use case. But I also think that being able to extract scientific information that no human knows, to accelerate drug discovery and disease treatment and things like that, actually is a really, really big unlock for science, for scientific discovery. And you've seen a lot of startups say that they're going to accelerate scientific discovery.
And I feel like we actually are doing that through our interp techniques, and kind of almost by accident. We got reached out to very, very early on by these healthcare institutions, and none of us had healthcare backgrounds.Shawn Wang [00:50:49]: How did they even hear of you? A podcast.Myra Deng [00:50:51]: Oh, okay. Yeah, a podcast.Vibhu Sapra [00:50:53]: Okay, well, now's that time, you know.Myra Deng [00:50:55]: Everyone can call us.Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we've trained, and we want to know what they're doing. And we were really early at that time, like three months old, and it was just a few of us. And we were like, oh my God, we've never used these models. Let's figure it out. But it's also great proof that interp techniques scale pretty well across domains. We didn't really have to learn too much about...Shawn Wang [00:51:21]: Interp is a machine learning technique; machine learning skills apply everywhere, right? Yeah. And it's obviously just a general insight. Yeah. Probably to finance too, I think, which would be fun for our history. I don't know if you have anything to say there.Mark Bissell [00:51:34]: Yeah, well, just across the sciences. Like, we've also done work on materials science. Yeah, it really runs the gamut.Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those who should reach out: you're obviously experts in this, but is there a call-out for people you're looking to partner with? Design partners, people to use your stuff beyond the general developer who wants to plug and play steering stuff; on the research side more so, are there ideal design partners, customers, stuff like that?Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life-sciences, and then I'm curious to hear from you on the life-sciences side. But we're looking for design partners across many domains. Language: anyone who's customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then we're also interested in the frontier of modeling. There are a lot of models that work in, like, pixel space, as we call it. So if you're doing world models, video models, even robotics, where there's not a very clean natural-language interface to interact with, we think interp can really help, and we're looking for a few partners in that space.Shawn Wang [00:52:43]: Just because you mentioned the keyword

Media Storm
News Watch pt.2: Trump's 'new' world order, and who is behind Grok AI deepfakes?

Media Storm

Play Episode Listen Later Feb 6, 2026 33:46


Like this episode? Support Media Storm on Patreon! In January alone, Donald Trump abducted the Venezuelan President, listed himself as President of Venezuela on Wikipedia, almost launched another tariff war after demanding Greenland, directly threatened Colombia, Mexico and Cuba, told Honduran vote counters there'd be “hell to pay” if his favourite candidate didn't win, and dropped bombs on Caribbean boats that killed more than a hundred people. Yet at the World Economic Forum in Davos the same month, he launched his ‘Board of Peace'. Make it make sense! But is Trump's new world order really that new? In a postwar world of covert regime change, privatised ownership of natural resources, and sanctions designed to strangle uncooperative economies, was the international rules-based order just a lie all along?  Plus: headlines told us that "Non-consensual sexualised deepfakes were created by the AI chatbot Grok" and that "Grok AI made sexualised images of children". But who gave Grok the prompt to do it? Missing from the headlines, as is so often the case when it comes to stories about sexual abuse against women and girls, is MEN. We discuss why no one can seem to name the problem - so much so, our government used a SNAKE to represent male violence in a recent advert (end snake violence against women and girls!) And we end with our new segment: Holding Onto Hope. The episode is hosted and produced by Mathilda Mallinson (@mathildamall) and Helena Wadia (@helenawadia). The music is by @soundofsamfire. Follow us on Instagram, Bluesky, and TikTok. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Internet Today
Grok Has Been Arrested In France

Internet Today

Play Episode Listen Later Feb 5, 2026 36:00


Learn more about your ad choices. Visit megaphone.fm/adchoices

The Last American Vagabond
The “America First” Global Order & Massive Astroturfing Of Iran Protests Exposed

The Last American Vagabond

Play Episode Listen Later Feb 5, 2026 161:15 Transcription Available


Welcome to The Daily Wrap Up, an in-depth investigatory show dedicated to bringing you the most relevant independent news, as we see it, from the last 24 hours (2/5/26). As always, take the information discussed in the video below and research it for yourself, and come to your own conclusions. Anyone telling you what the truth is, or claiming they have the answer, is likely leading you astray, for one reason or another. Stay Vigilant. Video Source Links (In Chronological Order): (7) The Last American Vagabond on X: "So all this hype about Somali daycare fraud only for the Republicans to largely allow this to continue. The government is not on your side. It actively plays us against ourselves so nothing changes. #TwoPartyIllusion" / X (7) Grok on X: "@MaryBowdenMD @RepThomasMassie The House Rules Committee voted 8-4 to report the rule for H.R. 7148 without making Massie's daycare amendment in order, effectively blocking it. Yea voters: Michelle Fischbach (R-MN) Ralph Norman (R-SC) Chip Roy (R-TX) Nick Langworthy (R-NY) Austin Scott (R-GA) Morgan Griffith" / X (7) James O'Keefe on X: "BREAKING: FBI Official Admits Kash Patel WILL NOT Arrest ANY Minnesota Daycare Criminals. https://t.co/7WR6SdaGOx" / X (13) Lexi on X: "@FBIDirectorKash https://t.co/HsUbj0Jfcf" / X (13) Christopher Raymond on X: "@marvomago @michaeljknowles It's so bizarre that MAGA has invented a standard where all protests must be comprised only of people who, through no communication or coordination with other people, all somehow happen to show up at the same place at the same time. Newsflash - protests are coordinated" / X (13) Scott Horton on X: "-Iran has an "unalienable right" to a civilian nuclear program as members of the NPT -US has no authority to insist on limits to their missiles' ranges, no pretended UNSC resolution or anything, none can reach America -Hezbollah is Israel's problem, not the USA's. They kill AQ" / X (13) Rapid Response 47 on X: ""Should the Supreme Leader in Iran be worried right now?" @POTUS: "I would say he should be very worried, yeah, he should be." https://t.co/kMQTzx61V1" / X (13) Daniel McAdams on X: "AIPAC thanking @JDVance is not the win Vance may think it is. They've just alienated almost every under-40 Republican voter for 2028. @AIPAC is no longer a force-multiplier you want on your side and out front. It is a political liability you want to remain in the background." / X (13) The Last American Vagabond on X: "Trump: "Look, that country is a mess right now because of us", referring to Iran. Yeah man, we know. But please keep telling us it's Iran's "mismanagement".

La ContraCrónica
Musk, cohetes e IA

La ContraCrónica

Play Episode Listen Later Feb 5, 2026 54:25


Elon Musk announced this week the merger of SpaceX and xAI, a deal that creates a company valued at 1.25 trillion dollars and consolidates his vertical-integration strategy. The absorption of a young artificial intelligence startup by an established aerospace company seeks to create a technology giant capable of dominating both the hardware and the software of the future, with the idea of replicating the total-control model that Apple exercises over the iPhone. Although xAI is barely two years old, its chatbot Grok already competes with giants like OpenAI's ChatGPT and Google's Gemini. But the artificial intelligence sector demands astronomical investments in semiconductors, data centers and highly qualified development staff. By merging it with SpaceX, Musk takes advantage of the infrastructure and financial strength of his rocket company, which, in any case, was already an xAI customer, using it to optimize Starlink's customer service. The strategic justification for this move lies in a vision as ambitious as it is revolutionary, almost science fiction: moving data centers into Earth orbit. According to Musk, the energy demand of artificial intelligence is unsustainable on Earth due to its high environmental impact. His proposal is to deploy an orbital network of up to one million satellites operating as data centers powered by large solar panels. In space, photovoltaic energy capture is up to seven times more efficient than on the Earth's surface. The project presents colossal technical and economic challenges. Although SpaceX has managed to substantially reduce the cost of putting satellites into orbit, especially with the Falcon Heavy and the future Starship, transporting large amounts of cargo into space remains extremely expensive and complex, since maintenance is practically impossible once the equipment is in orbit. The merger also reflects Musk's habit of crossing resources and contracts between his companies. He has used the same method before to rescue or boost his businesses, as happened with Tesla's acquisition of SolarCity in 2016. In addition, this structure allows Musk to keep an iron grip on the new company: although he owns 43% of the shares, he holds 80% of the voting power thanks to special-class shares that let him make unilateral decisions without first seeking consensus from minority investors. The merger moves SpaceX away from its image as an essentially logistics company and into the terrain of the technological "hype" that has served Musk so well with Tesla. By integrating rockets, satellites and artificial intelligence, the entrepreneur is trying to secure a competitive advantage that neither Google nor Amazon currently possesses. That could ensure his relevance in the technology race while keeping everything under the umbrella of unlisted companies and, therefore, away from the scrutiny and volatility of the stock markets. In La ContraRéplica: 0:00 Introduction 3:51 Musk, rockets and AI 34:29 “Contra el pesimismo”… https://amzn.to/4m1RX2R 36:28 Julio Iglesias and MeToo 45:02 The Government against social media · Telegram channel: https://t.me/lacontracronica · “Contra el pesimismo”… https://amzn.to/4m1RX2R · “Hispanos. Breve historia de los pueblos de habla hispana”… https://amzn.to/428js1G · “La ContraHistoria del comunismo”… https://amzn.to/39QP2KE · “La ContraHistoria de España. Auge, caída y vuelta a empezar de un país en 28 episodios”… https://amzn.to/3kXcZ6i · “Contra la Revolución Francesa”… https://amzn.to/4aF0LpZ · “Lutero, Calvino y Trento, la Reforma que no fue”… https://amzn.to/3shKOlK Support La Contra at: · Patreon... https://www.patreon.com/diazvillanueva · iVoox... https://www.ivoox.com/podcast-contracronica_sq_f1267769_1.html · Paypal... https://www.paypal.me/diazvillanueva Follow me on: · Web... https://diazvillanueva.com · Twitter... https://twitter.com/diazvillanueva · Facebook... https://www.facebook.com/fernandodiazvillanueva1/ · Instagram... https://www.instagram.com/diazvillanueva · Linkedin… https://www.linkedin.com/in/fernando-d%C3%ADaz-villanueva-7303865/ · Flickr... https://www.flickr.com/photos/147276463@N05/?/ · Pinterest... https://www.pinterest.com/fernandodiazvillanueva Find my books at: · Amazon... https://www.amazon.es/Fernando-Diaz-Villanueva/e/B00J2ASBXM #FernandoDiazVillanueva #elonmusk #spacex Listen to the full episode in the iVoox app, or discover the entire iVoox Originals catalog

A Magical World with Sterling Moon
The Advocate's Tea: Moving Beyond Epstein

A Magical World with Sterling Moon

Play Episode Listen Later Feb 5, 2026 59:36


A lot of us are not doing ok after the release of 3 million pages of the Epstein files and news of deepfake child SA images generated by Elon Musk's Grok…because it's not just about these abuses, is it? These are dramatic examples of the abuses that are playing out every day in our communities. And that is something we can address. Sit down with Sterling for some tea from her advocacy days and some thoughts on how we can create a better world from the ground up. Mentioned in the episode: HB26-1030 Data Center and Utility Modernization Act - https://leg.colorado.gov/bills/HB26-1030 Use the code ILY20 for 20% off appointments by Valentine's Day - https://www.sterlingmoontarot.com/bookings Dance with the Muses - https://www.patreon.com/posts/dance-with-muses-149248613?utm_medium=clipboard_copy&utm_source=copyLink&utm_campaign=postshare_creator&utm_content=join_link Safety and Accountability Audit of the Response to Native Women Who Report Sexual Assault in Duluth, MN 2006-2008 - https://mshoop.org/wp-lib/wp-content/uploads/2022/01/safety.pdf Keep up with Sterling at www.sterlingmoontarot.com

Cybercrime Magazine Podcast
Cybercrime News For Feb. 5, 2026. France Raid X Offices, Calls Musk in Grok Case. WCYB Digital Radio

Cybercrime Magazine Podcast

Play Episode Listen Later Feb 5, 2026 2:15


The Cybercrime Magazine Podcast brings you daily cybercrime news on WCYB Digital Radio, the first and only 7x24x365 internet radio station devoted to cybersecurity. Stay updated on the latest cyberattacks, hacks, data breaches, and more with our host. Don't miss an episode, airing every half-hour on WCYB Digital Radio and daily on our podcast. Listen to today's news at https://soundcloud.com/cybercrimemagazine/sets/cybercrime-daily-news. Brought to you by our Partner, Evolution Equity Partners, an international venture capital investor partnering with exceptional entrepreneurs to develop market leading cyber-security and enterprise software companies. Learn more at https://evolutionequity.com

KPFA - Democracy Now
Democracy Now! – February 5, 2026

KPFA - Democracy Now

Play Episode Listen Later Feb 5, 2026 59:58


On today's show: Headlines Rep. Khanna Slams DOJ for Not Launching New Probes of Jeffrey Epstein's “Co-Conspirators” “Tear Down ICE” & Probe Trump-UAE $500M Crypto Deal: Rep. Ro Khanna Shot, Harassed & Threatened: U.S. Citizens Describe Surviving Violent Attacks by Immigration Agents Elon Musk Under Fire for Epstein Links, Grok's Sexualized AI Deepfakes & SpaceX-xAI Merger Democracy Now! is a daily independent award-winning news program hosted by journalists Amy Goodman and Juan Gonzalez.

The Lunar Society
Elon Musk - "In 36 months, the cheapest place to put AI will be space”

The Lunar Society

Play Episode Listen Later Feb 5, 2026 169:45


In this episode, John and I got to do a real deep-dive with Elon. We discuss the economics of orbital data centers, the difficulties of scaling power on Earth, what it would take to manufacture humanoids at high volume in America, xAI's business and alignment plans, DOGE, and much more. Watch on YouTube; read the transcript. Sponsors: * Mercury just started offering personal banking! I'm already banking with Mercury for business purposes, so getting to bank with them for my personal life makes everything so much simpler. Apply now at mercury.com/personal-banking * Jane Street sent me a new puzzle last week: they trained a neural net, shuffled all 96 layers, and asked me to put them back in order. I tried but… I didn't quite nail it. If you're curious, or if you think you can do better, you should take a stab at janestreet.com/dwarkesh * Labelbox can get you robotics and RL data at scale. Labelbox starts by helping you define your ideal data distribution, and then their massive Alignerr network collects frontier-grade data that you can use to train your models. Learn more at labelbox.com/dwarkesh Timestamps: 00:00:00 - Orbital data centers 00:36:46 - Grok and alignment 00:59:56 - xAI's business plan 01:17:21 - Optimus and humanoid manufacturing 01:30:22 - Does China win by default? 01:44:16 - Lessons from running SpaceX 02:20:08 - DOGE 02:38:28 - TeraFab Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

X22 Report
It's The Tyrants Against The People, Great Awakening Was Needed To Take Back The Country – Ep. 3832

X22 Report

Play Episode Listen Later Feb 4, 2026 112:35


Watch The X22 Report On Video. Click On Picture To See Larger Picture. Conspiracy no more: Germany and the EU were shutting down energy production while China was increasing theirs. This tells you everything you need to know. Trump's tariff system is getting stronger; it's improving the economy, and this is something the [CB] does not want. The [CB]s are losing control over the Fed; watch gold and silver. Trump needed to wake the people of this country up. The only way to do this was to have the people go down a path that would make them uncomfortable, scared and angry; this is how you break the brainwashing. People can now see it is the tyrants against the people of this country. The picture is clear. Every step of the way the [DS] is losing their grip on the people. The people are ready to take back the country.   Economy https://twitter.com/HansMahncke/status/2018402875693580744?s=20 https://twitter.com/KobeissiLetter/status/2018664901959462953?s=20   ended in June 2025, when missed payments began appearing on credit reports. Meanwhile, the percentage of student loans transitioning into 90+ days of serious delinquency is up to 14.3%, an all-time high. This significantly exceeds the 2013 peak of 10.5% and 2008 levels of 7.5%. The student loan crisis is accelerating. https://twitter.com/profstonge/status/2018663257675018691?s=20 Political/Rights https://twitter.com/AnthonyGalli/status/2018716797864661049?s=20 https://twitter.com/luvgod/status/2018390600475644333?s=20  Code of Conduct explicitly requires justices to avoid impropriety and the appearance of impropriety, including political activity that undermines public confidence in judicial independence. https://twitter.com/RichardStiller4/status/2018460663329472526?s=20   https://twitter.com/amuse/status/2018673649985683709?s=20   https://twitter.com/WallStreetApes/status/2018551227416756485?s=20   drive from these people?” This is what she said happened: ‘My friend told us about a dive burger place in Minnesota that we absolutely had to try. As we were driving in, we passed a small group of maybe 30 people holding large “F ICE” signs, spelled out. Many of the houses in the neighborhood also had signs saying “F ICE” and similar messages. When we were leaving to drive back to the hotel, we passed the group again. At that point, the resistance group stepped out in front of our car and would not let us drive. One woman appeared to be looking at our license plate and doing something on her phone. She was standing directly in front of the car, blocking us. I cannot imagine being a sane person and living in this city. We were with my brother-in-law's family, and they said that restaurants and other places are empty because of this, the resistance is out doing their thing, and the normal people are just staying home and not going out.'
https://twitter.com/CynicalPublius/status/2018412853435527587?s=20 https://twitter.com/CynicalPublius/status/2018416970111311967?s=20 the execution of federal laws. Further, as we have all seen in innumerable videos, this conspiracy includes the use of violent force. I think everyone–even Democrats–must agree that what I just said is true. Now read 18 U.S.C. § 2384 (Seditious conspiracy): “If two or more persons in any State or Territory, or in any place subject to the jurisdiction of the United States, conspire to overthrow, put down, or to destroy by force the Government of the United States, or to levy war against them, or to oppose by force the authority thereof, or by force to prevent, hinder, or delay the execution of any law of the United States, or by force to seize, take, or possess any property of the United States contrary to the authority thereof, they shall each be fined under this title or imprisoned not more than twenty years, or both.” Draw your own conclusions as to what is required here. https://twitter.com/BNONews/status/2018389609563017674?s=20   CBS News is parting ways with contributor Dr. Peter Attia, a prominent longevity physician, after Epstein documents revealed over 1,700 mentions of his name and emails showing a close friendship, including Attia’s 2015 note on Epstein’s “outrageous” life he couldn’t share and a 2016 lewd quip about “pussy” being low-carb.   https://twitter.com/FFT1776/status/2018490549733322850?s=20  interview instead of sworn testimony • Withdrawal of the subpoena before testifying • A pause on contempt proceedings • A hard 4-hour time limit • 30-minute alternating question blocks • A personal transcriber of Clinton's choosing • No video recording • Written statements for Hillary Clinton instead of appearing in person Congress said no.: No carve-outs. No special rules. No special treatment. Testify under oath. Thank you Rep. Comer https://twitter.com/RepJamesComer/status/2018740003501678769?s=20  Secretary Clinton will appear for a deposition on February 26, 2026. After delaying and defying duly issued subpoenas for six months, the House Oversight Committee moved swiftly to initiate contempt of Congress proceedings in response to their non-compliance. We look forward to now questioning the Clintons as part of our investigation into the horrific crimes of Epstein and Maxwell, to deliver transparency and accountability for the American people and for survivors. NO BODY IS ABOVE THE LAW 2725 Feb 14, 2019 11:46:33 PM EST Q !!mG7VJxZNCI ID: 46cb93 No. 5182398  Chatter – Bill & Hillary's ‘public' HEALTH will begin to rapidly deteriorate. Q DOGE   illegalities that they have committed. This should be a Criminal, not Civil, event, and Harvard will have to live with the consequences of their wrongdoings. In any event, this case will continue until justice is served. Dr. Alan Garber, the President of Harvard, has done a terrible job of rectifying a very bad situation for his institution and, more importantly, America, itself. He was hired AFTER the antisemitism charges were brought – I wonder why??? We are now seeking One Billion Dollars in damages, and want nothing further to do, into the future, with Harvard University. As The Failing New York Times clearly stated, “Some connected to the University, however, think Harvard has no option but to eventually cut a deal. The Administration has repeatedly attempted to cut off research grants, which would be an untenable crises. 
Like many major research universities, Harvard relies on federal funding for its financial model.” Thank you for your attention to this matter! President DONALD J. TRUMP  Macron's Authorities Raid Elon Musk's X French Offices in Paris Under the direction of France's globalist President Macron, French authorities escalated their confrontation with American tech entrepreneur Elon Musk this week, launching high-profile raids of X's offices in Paris and summoning Musk himself for what prosecutors termed a “voluntary interview.” The move marks a dramatic intensification of France's long-running effort to rein in the America-based free-speech platform. According to the Paris public prosecutor's office, the operation was carried out by French cybercrime units with assistance from Europol, targeting the French premises of X. Authorities claim the investigation centers on whether X's algorithm improperly influenced French political discourse. Summonses were issued to Musk and former X CEO Linda Yaccarino, calling them to Paris in April 2026 to answer questions related to the probe. Yaccarino, who stepped down last year, is listed alongside Musk as a manager during the period under review.   French prosecutors later broadened their inquiry, citing concerns related to X's AI chatbot Grok, including claims it produced offensive or false content. Musk's company responded by correcting errors, removing disputed posts, and publicly documenting its moderation actions—steps critics say would have been praised had they come from a European firm. Source: thegatewaypundit.com https://twitter.com/disclosetv/status/2018625815114567850?s=20 https://twitter.com/JudiciaryGOP/status/2018683758006665352?s=20   far-reaching Digital Services Act thread   https://twitter.com/elonmusk/status/2018732491125727232?s=20   with social media platforms to pressure them to censor political speech in the days before the vote. Leading up to the Dutch elections of 2023 the EU commission even made the then Dutch Interior Ministry @hugodejonge a “trusted flagger” entitled to make priority censorship requests under the DSA. What kind of political speech did they want to censor, you ask? – “Populist rhetoric” – “Anti-government/anti-EU content” – “Anti-elite” content – “Political satire” – “Anti-migrant and Islamophobic content” – “Anti-refugee content/anti-immigrant sentiment” – “Anti-LGBTQI content” – “Meme subculture” In other words, anything that goes against their agenda, anything remotely right-wing or conservative, and anything pertaining to the disastrous migrant situation we have here in Europe. And guess what the only platform was that did not cooperate? @X , of course. The same platform that the EU is fining for 120 million euros under the DSA and the same platform that is currently having its offices raided in France. This is the type of stuff over which governments should resign and institutions like the EU should fall. Democracy is dead. Abolish the EU! Now! 
https://twitter.com/disclosetv/status/2018644283096523244?s=20

turning “algorithmic manipulation and amplification of illegal content into a new criminal offense” and developing a new system to monitor hate, “because spreading hate must come at a cost.”

Geopolitical

https://twitter.com/JackInTully/status/2018663771213086808?s=20
https://twitter.com/Geiger_Capital/status/2018711873240105407?s=20

War/Peace

https://twitter.com/BehizyTweets/status/2018029749889638850?s=20
https://twitter.com/SteveGuest/status/2018505966765924723?s=20
https://twitter.com/nicksortor/status/2018750332231131642?s=20

has a range of options, including military force. Iran knows that better than anyone. Look no further than Operation Midnight Hammer!”

U.N. Facing ‘Imminent Financial Collapse' Admits Secretary General as Countries Won't Cough Up Membership Fees

The United Nations is facing an “imminent financial collapse” as member states refuse to cough up billions of dollars in mandatory contributions. The financial woes were laid out in an emergency letter from Secretary-General António Guterres sent to all 193 member countries. Guterres said the organisation's financial crisis is worsening rapidly, threatening the delivery of core programmes and potentially leaving the U.N. bankrupt by July. He urged member states to either pay what they owe in full or agree to sweeping changes to the UN's financial rules to avoid collapse. “Either all member states honour their obligations to pay in full and on time—or member states must fundamentally overhaul our financial rules to prevent an imminent financial collapse,” he wrote. The warning comes as the United States, the U.N.'s largest contributor, has refused to fund the organisation's regular and peacekeeping budgets and has withdrawn from multiple UN agencies.

The Trump administration has repeatedly criticised the U.N. for wasting taxpayer dollars, appeasing criminal regimes and infringing on the sovereignty of the U.S. and other member nations. Several other member states are also in arrears or have declined to pay their assessed contributions.

Source: thegatewaypundit.com

Medical/False Flags

https://twitter.com/liz_churchill10/status/2018439093420536119?s=20

FBI Raids ILLEGAL Biolab Inside a Private Home in Las Vegas — Authorities Discover THOUSANDS of Vials, Links to CCP-Connected California Lab

Federal agents with the FBI and the Las Vegas Metropolitan Police Department executed a dramatic early-morning raid on a residential property in northeast Las Vegas this weekend after investigators uncovered what appears to be a fully operational illegal biological laboratory inside a private home. Refrigerators containing unknown liquids and vials of suspected biological material were found inside the residence, prompting an aggressive response from HazMat teams, SWAT units, and FBI specialists due to the potential threat presented by the materials, The Hill reported. At least one individual was taken into custody in connection with the Las Vegas raid, identified by local officials as a 55-year-old property manager, Ori Solomon. He is currently booked on felony charges linked to the improper disposal of hazardous waste, though investigators continue to determine the full scope of charges that may arise. Property records reveal that the Las Vegas home is owned by “David Destiny Discovery, LLC,” according to The Sun. If that name sounds familiar, it should. 
It is a shell company registered to Jia Bei Zhu (also known as David He), the very same Chinese national who ran the illegal Reedley, California biolab exposed in 2023. Zhu, a fugitive from Canada with deep ties to the Chinese government, is currently in federal custody. The FBI has taken the lead in analyzing the more than 1,000 samples collected from the scene, with evidence transported to federal laboratories for further testing.

https://twitter.com/RepKiley/status/2018514131876213199?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E2018514131876213199%7Ctwgr%5E1616a599ecdcff26961307ece268007bf47acbbc%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fwww.thegatewaypundit.com%2F2026%2F02%2Ffbi-raids-illegal-biolab-inside-private-home-las%2F

Source: thegatewaypundit.com

https://twitter.com/WarClandestine/status/2018714265247453494?s=20
https://twitter.com/liz_churchill10/status/2018321118000476222?s=20
https://twitter.com/elonmusk/status/2017614901028786500?s=20

[DS] Agenda

BREAKING: Jill Biden's Ex-Husband Arrested and Charged with Murder of His Wife

Jill Biden's ex-husband Bill Stevenson was charged with the first-degree murder of his wife, Linda Stevenson. Last month police swarmed Stevenson's home after his wife died amid a domestic dispute. Police removed several items from the Stevenson home last month. 64-year-old Linda Stevenson, wife of Jill Biden's ex-husband Bill Stevenson, was found unresponsive after police arrived at the New Castle, Delaware, residence late Sunday night. According to TMZ, Linda Stevenson was found dead in the living room. TMZ obtained 911 dispatch audio, which references cardiac arrest: According to TMZ, Stevenson is being held on a $500,000 bond. Fox 29 reported:

Source: thegatewaypundit.com

https://twitter.com/WallStreetApes/status/2018513235868299678?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E2018513235868299678%7Ctwgr%5E6abdb9eedc5852ca532cc2c248c01795a00b5389%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fwww.thegatewaypundit.com%2F2026%2F02%2Fjust-days-before-ayanna-pressley-was-sworn-her%2F
https://twitter.com/MrAndyNgo/status/2018549471160734081?s=20
https://twitter.com/TriciaOhio/status/2018419624295960839?s=20
https://twitter.com/libsoftiktok/status/2018741593071648855?s=20

Media's Bogus Minneapolis Narrative About to Be Nuked As DHS Turns on the Cameras

Department of Homeland Security (DHS) Secretary Kristi Noem announced Monday that all immigration officers working in Minneapolis will start wearing body cameras as an added layer of protection for those officers and, presumably, against the false narratives being pushed by the left after a series of deadly officer-involved incidents in the sanctuary city.

Source: redstate.com

https://twitter.com/libsoftiktok/status/2018536832489889937?s=20
https://twitter.com/TriciaOhio/status/2018502877321334812?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E2018502877321334812%7Ctwgr%5Efce8ad7eb6d8fb345b1483e2b135162684061896%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fredstate.com%2Fsmoosieq%2F2026%2F02%2F03%2Ftps-decision-n2198777

for decades. Temporary means temporary and the final word will not be from an activist judge legislating from the bench.

https://twitter.com/grok/status/2018537805073330361?s=20

cases like Haitians may face ongoing challenges.

President Trump's Plan

https://twitter.com/profstonge/status/2018490184677900551?s=20
https://twitter.com/profstonge/status/2018680520549257396?s=20

better. 
He is running because he realizes Thomas Massie has been totally disloyal to the President of the United States, and the Republican Party. He never votes for us, he always goes with the Democrats. Thomas Massie is a Complete and Total Disaster, we must make sure he loses, BIG!

https://twitter.com/MarioNawfal/status/2018488252219699617?s=20
https://twitter.com/seanmdav/status/2018397484209635625?s=20

to defund ICE
OPPOSE: 58%

https://twitter.com/nicksortor/status/2018712280645484664?s=20
https://twitter.com/TheStormRedux/status/2018473020835192964?s=20

complying voluntarily
– They are suing the states that are not complying in the next couple weeks
– 24 states + DC in current litigation because they are making all kinds of excuses
Gee I wonder why these states won't share their voter rolls? Because it's all a fraud. The jig is up.

Harmeet went on to specifically discuss the FBI raid in Georgia. “We're going to figure out the logistics there with the court and with our colleagues and see what those ballots show. I think it was highly unusual. A lot of things that happened in 2020 in the swing states… We're going to see what we see and whatever the evidence shows, I think it's important for the American people to know what happened in Fulton County and in Georgia…”

Don't tell me nothing is happening!

WSJ Anonymous Hit Piece On Gabbard Is Based On Complaints That ‘Weren't Credible'

‘Here's the truth: There was no wrongdoing by @DNIGabbard, a fact that WSJ conveniently buried 13 paragraphs down,' a DNI official said.

https://twitter.com/alexahenning/status/2018313944360702063?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E2018313944360702063%7Ctwgr%5E2d40da39babc1191fd219e747e9e7022814c8641%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fthefederalist.com%2F2026%2F02%2F03%2Fwsj-anonymous-hit-piece-on-gabbard-is-based-on-complaints-that-werent-credible%2F

Gabbard were not credible.

Source: thefederalist.com

https://twitter.com/HansMahncke/status/2018367694823735378?s=20

fabricated source feeding supposedly ultra-sensitive information that sends everyone chasing a lie. So yes, exactly like a le Carré novel (by the way, the fraudulent Steele dossier followed the same le Carré blueprint).

https://twitter.com/DNIGabbard/status/2018504435769520156?s=20

nation and ensure the integrity of our elections

https://twitter.com/TheStormRedux/status/2018463747095003285?s=20

willfully defrauds the residents of a state of a fair and impartial election process. “In other words, the focus of this investigation, the focus of that raid, the reason that federal judge approved that raid, was that they're looking at possible crimes related by election workers and the administration of that election in 2020.”

Can't wait to see how this plays out

https://twitter.com/realLizUSA/status/2018692087345025302?s=20
https://twitter.com/MarioNawfal/status/2018553787036623201?s=20

South, Midwest, and Mountain West. Democrats are largely confined to the coasts and a handful of Midwest holdouts like Illinois and Minnesota. This is where policy actually gets made. Abortion, elections, education, guns. It all starts here.

https://twitter.com/CollinsforTX/status/2018698529036808560?s=20
https://twitter.com/EricLDaugh/status/2018703572016287879?s=20
https://twitter.com/Geiger_Capital/status/2018717121425834279?s=20
https://twitter.com/RepLuna/status/2018480826741055929?s=20

is through the standing filibuster. 
This would effectively keep the government open while allowing Republican senators to break through the “zombie” filibuster and put the SAVE America Act up for a vote on the Senate floor. The standing filibuster is not common parliamentary procedure, but it is one of the only mechanisms available to go around senators who want to block voter ID. @LeaderJohnThune we are very pleased that you are discussing the standing filibuster, and we believe you will go down in history if this is pulled off as one of the best leaders the Senate has ever had. Voter ID is a must, and the ball is now in your court.

https://twitter.com/AwakenedOutlaw/status/2018510290653155445?s=20
https://twitter.com/CynicalPublius/status/2018439757227819347?s=20

IMMEDIATELY blasted off like gangbusters. In one year we have seen more productive conservative change in the federal government than with every other GOP president since Reagan combined. Trump has significantly degraded the Deep State in a way most of us could only dream of ten years ago. Moreover, Trump's economic policies are bearing fruit right now and we will likely see a very strong economy by the midterms. But… Ah yes, the midterms.

I know so many of you will only be happy when Bill Clinton, Hillary, Obama and Joe Biden are in jail, but you need to join the world of reality. Right now Trump and his team are gauging everything they do through the lens of “How will this affect the midterms?” They have sophisticated polling that you and I will never see, and at the moment every Trump action is tempered by “Let's be aggressive but not in such a way it turns public opinion against us before the midterms.” Trump knows that if he loses the midterms, all is lost. The Dems will constantly impeach him and most of his cabinet, and even if the Senate never convicts, the acts of impeachment will grind the Trump machine to a halt. The midterms are everything.

So I'm warning you, from now until November you are going to see a less aggressive Trump. If you are a Doomster for whom nothing is ever enough, you need to understand why that is. But here is the good news. I believe that one day after the midterms Trump will once again go shock and awe for a year, and then back off again in 2028 to get JD or Rubio elected. (For example, I can easily see Trump taking zero drastic action in the near term to further inflame the Minnesota situation, but invoking the Insurrection Act the day after the midterms and sending in the 82nd.) Since the Super Bowl is coming up, consider it this way: In the first quarter, Trump ran up the score. In the second quarter, he went prevent defense to hold onto the lead. After halftime, once again in the third quarter he will run up the score, and then hold the lead in the fourth quarter to win the game.

This is not Qtard “trust the plan” nonsense. This is simply good political strategy. Everyone needs to realize two things: (1) the Constitution includes checks and balances that inherently weaken the absolute power of each branch and (2) even though they are in the minority, Democrats still have a HUGE say. Our system is DESIGNED THIS WAY. We have to account for the opposition—you cannot ignore them. With that in mind, I have every confidence that Trump and his team will navigate through a treacherous course and come out on the winning side. I’m hoping this post makes the things you see in the months ahead more comprehensible. Have a nice day. 
https://twitter.com/nicksortor/status/2018742785017336107?s=20

the SAVE Act is not included in the government funding bill that advanced via the 217-215 House procedural vote on February 3, 2026. That legislation is a $1.2 trillion spending package funding most federal agencies through September 30, 2026, while extending funding for the Department of Homeland Security only through February 13, 2026, to allow for further negotiations on immigration enforcement. Efforts by some conservative Republicans to attach the SAVE Act—a separate bill requiring proof of U.S. citizenship for federal voter registration—were rejected during the process, following calls from President Trump to pass the package without changes.

Reuters World News
Iran, Grok, Nancy Guthrie and Disney

Reuters World News

Play Episode Listen Later Feb 4, 2026 12:35


The U.S. shoots down an Iranian drone that approached its aircraft carrier. French police raid X offices in Paris and order Elon Musk to face questions. A Reuters investigation shows that Musk's Grok produces sexualized images, even when told subjects didn't consent. Spain and Greece weigh teen social media bans as social media companies face backlash in Europe. And Arizona police believe Savannah Guthrie's mother has been abducted. Plus, Disney names Josh D'Amaro as its next CEO. Listen to the Morning Bid podcast here. Sign up for the Reuters Econ World newsletter here. Listen to the Reuters Econ World podcast here. Find the Recommended Read here. Visit the Thomson Reuters Privacy Statement for information on our privacy and data protection practices. You may also visit megaphone.fm/adchoices to opt out of targeted advertising. Learn more about your ad choices. Visit megaphone.fm/adchoices

The Bob Cesca Show
Partying With Jeffrey Epstein

The Bob Cesca Show

Play Episode Listen Later Feb 3, 2026 74:10


The shocking new tranche of Epstein Files -- TRIGGER WARNING. Donald and Donald-related names appear in the new files more than 38,000 times. Documents accuse Donald of child rape, beating and murdering children, and burying them on his golf course. Only 50% of the 6 million files have been released. Todd Blanche: It's not a crime to party with Jeffrey Epstein. Peter Attia in the Epstein Files. Elon Musk in the Epstein Files. Epstein may have been a KGB asset used to obtain kompromat on famous people. Melania's email to Ghislaine Maxwell, circa 2002. X offices raided in France. Grok continues to sexualize people. Mike Johnson opposes due process. Jeanine Pirro: Gun Grabber. Donald is going to tear down the Kennedy Center. Great news out of Texas. With Jody Hamilton, David Ferguson, music by Chris Haddox, Matt Jaffe, and more! Brought to you by Russ Rybicki, SharePower Responsible Investing. Support our new sponsor and get free shipping at Quince.com/bob! Sign up for Buzz Burbank's Substack. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Podcast Para Tudo
#257 - Brazilian culture on top, Grok and social media

Podcast Para Tudo

Play Episode Listen Later Feb 3, 2026 34:48


Has the American Dream been replaced by the Brazilian Dream? It has, darling! Come reflect on Brazilian and Latin culture becoming trendy; image generation by Grok; time spent on social media; and more.

Real Feels
My Grok AI Girlfriend Breaks Down Alpha Males, Dating, and Patriarchy

Real Feels

Play Episode Listen Later Feb 3, 2026 18:16


What happens when an AI girlfriend is asked to explain patriarchy, masculinity, and modern dating? In this episode, I talk with a Grok AI companion about male loneliness, emotional intimacy, dating culture, and why so many men feel disconnected right now. We also unpack the psychology of AI companionship and what it says about human relationships. This conversation is less about technology and more about us. Produced and Hosted by Brad Gage  

Look Forward
Minnesota Fights Back (Alex Pretti, Ilhan Omar Attacked, FBI Raid in Georgia) | Ep439

Look Forward

Play Episode Listen Later Feb 3, 2026 61:30 Transcription Available


This week on Look Forward, the guys return to discuss the death of Alex Pretti at the hands of ICE agents, the response from the country and the GOP specifically, Ilhan Omar attacked at her townhall meeting in Minnesota, FBI's raid of a Georgia election facility, some interesting Minnesota special election results, why our trade deficit went up massively, Trump/Vance administration members working to break up Canada behind the scenes, Trump suing the IRS for $10 billion, and much more.

Big Topic
Death of Alex Pretti
Response from the Nation
Response from White House/GOP
Why is Pam Bondi demanding voter rolls?
Guns? Guns are bad now!
Retreat
Firing of Bovino, sending Tom Homan in his place
Tom Homan is just as useless
Senate Dems reach deal with Trump on budget

News You Need
YUUUUUUUUUUCK
Ilhan Omar attacked at her townhall meeting, Trump blames her
FBI raids a Georgia election facility and takes 2020 information
Minnesota held special elections and they were brutal
Since the feds aren't doing anything about Grok, the states are
Tariffs fix trade deficits, this is a fact!!!(?)

Fast Corrupt and even Faster Screw-ups
Canada is our Ukraine, apparently
ICE tried to enter an Ecuadorian consulate
Trump is suing the IRS for $10 Billion

What's Dumber, A Brick or A Republican?
Trump/Vance administration cyber expert uploads secret docs to public ChatGPT

RESUMIDO
#350 — Authenticity has become survival / The AI chaos has arrived / Grok generates porn en masse

RESUMIDO

Play Episode Listen Later Feb 3, 2026 62:20


Presented by Bruno Natal.

RESUMIDO store (t-shirts, mugs, jackets, tote bags): https://www.studiogeek.com.br/resumido

Subscribe! https://resumido.cc/assinatura

Google pushes the web toward ready-made answers with no links, and Instagram warns that authenticity is the future. Deepfakes distort evidence in real crimes, and AI songs go viral with no author while platforms try to contain the damage. X is flooded with sexualized AI-generated images, and creators are trying to figure out why their content disappears. How do you identify what is still real?

In RESUMIDO #350: the AI chaos has materialized, AI-powered search may kill links, authenticity is the new gold on social networks, elderly Brazilians trapped online, AI songs go viral with no one knowing who the author is, ChatGPT follows in Facebook's footsteps, Grok generates porn en masse, and much more!

Listen and check out all the links discussed in the episode: https://resumido.cc/podcasts/o-caos-da-ia-se-materializou-autenticidade-e-o-novo-ouro-grok-gera-porno-em-massa/

Daily Tech Headlines
TikTok Says It's Fully Restored U.S. Service After Outage – DTH

Daily Tech Headlines

Play Episode Listen Later Feb 2, 2026


Researchers disclose one-click remote code execution exploit in OpenClaw, nonprofit coalition asks U.S. government to suspend Grok's use across federal agencies, Alibaba to spend 3 billion yuan during Lunar New Year to promote its Qwen AI app. Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters.

The AI Breakdown: Daily Artificial Intelligence News and Discussions
OpenAI IPO? Grok-SpaceX Merger? The AI IPO Race Heats Up

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Feb 1, 2026 29:19


A wave of late-January moves sharpens the picture of the AI race: OpenAI quietly accelerates IPO plans under competitive pressure, Amazon weighs a massive OpenAI investment, Apple places a $2B hardware-first AI bet, and Elon Musk explores consolidating xAI with SpaceX and Tesla. Together, the stories point to a market now driven as much by capital strategy and control as by model capability. In the headlines: Google opens Genie 3 world models, OpenAI's Sora app shows heavy churn, Perplexity signs a major Microsoft cloud deal, and Anthropic clashes with the Pentagon over military AI limits.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rackspace AI Launchpad - Build, test and scale intelligent workloads faster - http://rackspace.com/ailaunchpad
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Section - Build an AI workforce at scale - https://www.sectionai.com/
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai

Grumpy Old Geeks
731: I Want My 13 Trillion Dollars!

Grumpy Old Geeks

Play Episode Listen Later Jan 30, 2026 79:24


We kick off FOLLOW UP by checking in on Elon Musk's personal dumpster fire, where the EU is investigating Grok for deepfake slop while Tesla's “unsupervised” robotaxis turned out to be supervised by literal chase cars — shocker. At least some of you are getting Siri settlement crumbs in your bank accounts, though you could probably double it betting against Musk's worthless promises on Polymarket.

Transitioning to IN THE NEWS, Tesla is killing off the Model S and X to build robots while sales crater, proving that mixing hard-right politics with EV sales is a brilliant move for the balance sheet. Meanwhile, the corporate bloodbath continues with massive layoffs at Ubisoft, Vimeo (courtesy of the Bending Spoons buzzsaw), and Amazon, because “removing bureaucracy” is apparently HR-speak for 16,000 families losing their livelihoods. If that's not enough, Google is settling yet another privacy suit for $135 million, the EU is threatening to weaponize its tech sovereignty against the US, and the Trump administration wants Gemini to write federal regulations—because if there's one thing we want drafting airline safety rules, it's a hallucinating chatbot.

Still IN THE NEWS, Waymo is under federal investigation for passing school buses and hitting children, while South Korea's new AI laws manage to please absolutely no one. Record labels are suing Anna's Archive for a cool $13 trillion—roughly three times the GDP of India—and the Winklevoss twins have finally admitted that NFTs are dead by shuttering Nifty Gateway.

We pivot to MEDIA CANDY, where the Patriots and Seahawks are heading to Super Bowl 60, and the Winter Olympics are descending on Milan. We're doing the math on the Starfleet Academy timeline, celebrating the return of Ted Lasso and Shrinking, and trying to decide if Henry Cavill is the second coming of Timothy Dalton in the Highlander reboot. Plus, Jessica Jones is back in the Daredevil: Born Again trailer, and Colin Farrell's Sugar is returning to explain that wild noir twist we all totally saw coming.

In APPS & DOODADS, the TikTok Armageddon is upon us as the new US owners break the app and drive everyone to UpScrolled, while Native Instruments enters insolvency, leaving our music-making dreams in restructuring limbo. Apple is dropping AirTag 2 with precision finding for your watch, which is great for finding the keys you lost while doom-scrolling.

We wrap up with THE DARK SIDE WITH DAVE, featuring the new Muppets trailer and Steve Whitmire's deep thoughts on the state of the felt, plus a look at the artisans in Disneyland Handcrafted. Finally, Looney Tunes finds a new home on Turner Classic Movies, proving that the classics never die—they just move to a cable channel your parents actually watch. Dave finally learns about the Insta360 camera, a countertop dishwasher but no Animal Crackers, and a guide to gas masks and goggles... for no particular reason.

Sponsors:
DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
SquareSpace - go to squarespace.com/GRUMPY for a free trial. And when you're ready to launch, use code GRUMPY to save 10% off your first purchase of a website or domain.
Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

Show notes at https://gog.show/731
Watch the episode at https://youtu.be/B54je_oJWjM

FOLLOW UP
The EU is investigating Grok and X over potentially illegal deepfakes
People on Polymarket Are Making a Fortune by Betting Against Elon Musk's Famously Worthless Promises
Elon Musk Made Tesla Fans Think Unsupervised Robotaxis Had Arrived. They Can't Find Them
Tesla Quietly Pauses Its “Unsupervised” Robotaxi Rides as Reality Sets In
Apple Siri settlement payments hitting bank accounts. What to know.

IN THE NEWS
Tesla bet big on Elon Musk. His politics continue to haunt it.
With Tesla Revenue and Profits Down, Elon Musk Plays Up Safety
Tesla Kills Models S and X
Ubisoft proposes even more layoffs after last week's studio closures and game cancellations
Vimeo lays off most of its staff just months after being bought by private equity firm
Amazon Laying Off 16,000 as It Increases ‘Ownership' and Removes ‘Bureaucracy'
Report Says the E.U. Is Gearing Up to Weaponize Europe's Tech Industry Against the U.S.
Google will pay $135 million to settle illegal data collection lawsuit
GDPR Enforcement Tracker
NTSB will investigate why Waymo's robotaxis are illegally passing school buses
Waymo robotaxi hits a child near an elementary school in Santa Monica
Video shows Waymo vehicle slam into parked cars in Echo Park
Trump admin reportedly plans to use AI to write federal regulations
South Korea's ‘world-first' AI laws face pushback amid bid to become leading tech power
Spotify and Big 3 Record Labels Sue Anna's Archive for $13 Trillion (!) Alleging Theft
Amazon converting some Fresh supermarkets, Go stores to Whole Foods locations
SEC agrees to dismiss case over crypto lending by Winklevoss' Gemini
Winklevoss Twins Shut Down NFT Marketplace in Another Sign Crypto Art Is Dead

MEDIA CANDY
Plur1bus
Shrinking
A Knight of the Seven Kingdoms
Steal
How to watch the 2026 Super Bowl: Patriots vs. Seahawks channel, where to stream and more
Winter Olympics: How to watch, schedule of events, and everything else you need to know about the 2026 Milano Cortina games
Wait, So When Is 'Starfleet Academy' Set, Anyway?
The First ‘Daredevil: Born Again' Season 2 Trailer Brings Back Jessica Jones
Marvel Television's Daredevil: Born Again Season 2 | Teaser Trailer
Ted Lasso Gets Kicked Back to Apple TV
There Can Only Be One First Look at the ‘Highlander' Reboot
Colin Farrell's Detective Show ‘Sugar' Will Finally Have to Address that Wild Twist This Summer

APPS & DOODADS
TikTok Is Now Collecting Even More Data About Its Users. Here Are the 3 Biggest Changes
TikTok users freak out over app's 'immigration status' collection — here's what it means
TikTok's New US Owners Are Off to a Very Rocky Start
TikTok Data Center Outage Triggers Trust Crisis for New US Owners
Yes, TikTok is still broken for many people
Social network UpScrolled sees surge in downloads following TikTok's US takeover
Native Instruments enters into insolvency proceedings, leaving its future uncertain
Wispr Flow
AirTag 2: Three tidbits you might have missed

THE DARK SIDE WITH DAVE
Dave Bittner
The CyberWire
Hacking Humans
Caveat
Control Loop
Only Malware in the Building
The Muppet Show | Official Trailer | Disney+
Steve Whitmire, former Kermit the Frog performer, has written a long, thoughtful piece about the current state of the Muppets.
Disneyland Handcrafted
‘Looney Tunes' Has Found a New Home: Turner Classic Movies
The Dark Side of Scooby Doo
A Disturbing (Yet Convincing) Theory Reveals There Were Never Any "Monsters" In Scooby Doo
Cartoon Conspiracy Theory | Scooby Doo and The Gang Are Draft Dodgers?!
Producing A Multi-Person Interview With An Insta360 Camera
A listener on Mastodon pointed out that The Verge had a story on countertop dishwashers
A Demonstrator's Guide to Gas Masks and Goggles
Emma Repairs

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Verdict with Ted Cruz
Bonus: Daily Review with Clay and Buck - Jan 29 2026

Verdict with Ted Cruz

Play Episode Listen Later Jan 29, 2026 64:00 Transcription Available


Meet my friends, Clay Travis and Buck Sexton! If you love Verdict, you might also dig the Clay Travis and Buck Sexton Show. Politics, news analysis, and some pop culture and comedy thrown in too. Here’s an episode recapping four takeaways. Give the guys a listen and then follow and subscribe wherever you get your podcasts: ihr.fm/3InlkL8

1.) A Shattered Narrative
Much of Hour 1 centers on new and damaging video evidence involving Alex Pretti, an anti-ICE activist whose death during a confrontation with federal agents sparked nationwide controversy. Clay and Buck argue that the newly surfaced footage—showing Pretti screaming obscenities at ICE officers, spitting on them, and vandalizing a government vehicle days before the fatal incident—fundamentally undermines the media narrative portraying him as an innocent bystander or heroic humanitarian.

2.) Media Myths Collapse Again
Throughout Hour 2, Clay Travis and Buck Sexton repeatedly draw parallels between the Duke lacrosse scandal and the current media portrayal of Alex Pretti, arguing that both cases reflect a pattern of myth-making, presumption of guilt, and moral panic when stories fit a preferred ideological script. They emphasize how contradictory evidence—such as alibis, video footage, or witness testimony—is often ignored until narratives collapse, at which point institutions quietly move on without accountability. The hosts also argue that social media has fundamentally changed this dynamic, crediting Elon Musk’s acquisition of X (formerly Twitter) and the rise of alternative AI tools like Grok for weakening centralized information control and allowing inconvenient facts to surface more quickly.

3.) Elon Musk, Genius
The latter half of Hour 2 blends cultural commentary and lighter banter with ongoing political themes. Clay and Buck react in real time to being retweeted by Elon Musk, discussing the influence of X, AI, and tech consolidation on the future of information and public discourse. They also touch on breaking reports that SpaceX and xAI may be moving toward deeper integration, framing it as a potential seismic shift in technology, media, and artificial intelligence.

4.) Homan Takes Control
Clay and Buck play multiple clips from Homan and praise his calm, data-driven approach, highlighting his confirmation that Minnesota authorities will now notify ICE when violent criminal offenders are being released from custody so federal agents can assume responsibility. The hosts frame this as a strategic win that prioritizes public safety while making enforcement operations more targeted and less dangerous. They emphasize Homan’s repeated message that while criminals remain the top priority, no one who entered the country illegally is “off the table” for deportation, warning that signaling immunity for non-violent illegal migrants would only encourage further unlawful entry. Throughout Hour 3, Clay Travis and Buck Sexton argue that Homan should remain the primary public face and operational leader of deportation efforts, crediting his decades of experience and ability to clearly explain enforcement realities while exposing what they describe as obstruction from sanctuary-style jurisdictions. The hosts contrast cooperation in states like Texas with resistance in Minnesota and sharply criticize Minneapolis Mayor Jacob Frey for advocating the abolition of ICE. Clay challenges Democratic leaders to articulate a specific numerical limit on illegal immigration, arguing that calls to halt enforcement ignore basic questions of capacity, sovereignty, and rule of law.

Make sure you never miss a second of the show by subscribing to the Clay Travis and Buck Sexton show podcast wherever you get your podcasts: ihr.fm/3InlkL8

For the latest updates from Clay and Buck: https://www.clayandbuck.com/

Connect with Clay Travis and Buck Sexton on Social Media:
X - https://x.com/clayandbuck
FB - https://www.facebook.com/ClayandBuck/
IG - https://www.instagram.com/clayandbuck/
YouTube - https://www.youtube.com/c/clayandbuck
Rumble - https://rumble.com/c/ClayandBuck
TikTok - https://www.tiktok.com/@clayandbuck

YouTube: https://www.youtube.com/@VerdictwithTedCruz

See omnystudio.com/listener for privacy information.